Data Protection Bill [HL]

My Lords, in moving Amendment 74, I will also speak to Amendments 74A, 75, 77, 119, 133A, 134 and 183—I think I have encompassed them all; at least I hope I have. In a way this is an extension of the very interesting debate that we heard on Amendment 71A, but further down the pipeline, so to speak. This group contains a range of possible and desirable changes to the Bill relating to artificial intelligence and the use of algorithms.

Data has been described, not wholly accurately, as the oil of artificial intelligence. With the advent of AI and its active application to datasets, it is vital that we strike the right balance between protecting privacy and the use of personal data. Indeed, the Minister spoke about that balance in that debate. Above all, we need to be increasingly aware of unintended discrimination where an element of a decision involves an algorithm. If a particular system learns from a dataset that contains biases, such as associating female names with family roles and male names with careers, it is likely to reproduce them in its decisions. One way of helping to identify and rectify bias is to ensure that such algorithms are transparent, so that it is possible to see not only what data is being used but the steps being taken to process that data in coming to a particular conclusion.

In all this, there is the major risk that we do not challenge computer-aided decision-making. To some extent, this is recognised by article 22 of the GDPR, which at least gives the right of explanation where there is fully automated decision-taking, and it is true that in certain respects, Clause 13 amplifies article 22. For instance, article 22 does not state what safeguards need to be in place; it talks just about proper safeguards. In the Bill, it is proposed that, after a decision has been made, the individual has to be informed of the outcome, which is better than what the GDPR currently offers. It also states that data subjects should have the right to ask that the decision be reconsidered or that the decision not be made by an algorithm. There is also the requirement, in certain circumstances, for companies and public bodies to undertake a data protection impact assessment under Clause 62. There are also new provisions in the GDPR for codes of conduct and certification, so that if an industry is moving forward on artificial intelligence in an application, the ICO can certify the approach that the industry is taking on fairness in automated decision-taking.

6.30 pm

However, the automated decision safeguards in the GDPR place too much emphasis on the requirement for a decision to be fully automated and significant before they apply. Few decisions are fully automated. Should there not also be the right to an explanation of systems where AI is only one part of the final decision in certain key circumstances—for instance, where policing, justice, health, personal welfare or finance is concerned? This could be an explanation in advance of the AI or algorithm being used—transparency by design—or, if the decision-making process is not known in advance, an obligation to test the AI’s performance in the same circumstances.

The automated decision safeguards in the GDPR should be amended explicitly to protect individuals against unfair and non-transparent semi-autonomous AI systems that they may face in their day-to-day lives. For example, a provision in the recent Digital Republic Act in France treats semi-autonomous algorithms as requiring explanation.

To really ingratiate myself with the Minister—I may not succeed, but it is worth a try—I shall quote from a speech by Matt Hancock to the Leverhulme centre last July. He said that,

“we need to identify and understand the ethical and governance challenges posed by uses of data, now and into the future, where they go beyond current regulation, and then determine how best to identify appropriate rules … establish new norms, and where necessary regulations ... Unfair discrimination will still be unfair. Using AI to make some decisions may make those decisions more difficult to unpack. But it won’t make fairness less important”.

That is a very important paragraph in that speech to the Leverhulme centre.

On Amendments 74, 77 and 136, clarification is needed, as it is unclear whether the UK will consider the article 29 working party opinions after we leave the European Union, despite the central role of the ICO in crafting them. This is particularly relevant as the recently published draft guidelines on profiling by the article 29 working party state:

“The controller cannot avoid the Article 22 provisions by fabricating human involvement. For example, if someone routinely applies automatically generated profiles to individuals without any actual influence on the result, this would still be a decision based solely on automated processing.

“To qualify as human intervention, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the available input and output data”.

For the purpose of clarity of obligations imposed on controllers, it is important that this explanation is included in the Bill.

The effect of Amendment 77 would be that:

“A decision is a ‘significant decision’ for the purposes of this section if, in relation to a data subject, it—

(a) produces legal effects concerning the data subject, or

(b) significantly affects the data subject”—

or,

“a group sharing a protected characteristic, within the meaning of the Equality Act 2010, to which the data subject belongs”.

Take the example of an advertisement targeting people based on race. An example of this was discovered by a black Harvard computer science professor, Latanya Sweeney, who investigated why “Have you been to prison and need legal help” adverts appeared online when googling “black-sounding” names rather than “white-sounding” names. Did the decision affect her? Unlikely: she is, as are many investigators, in a privileged position. But the amendment allows for people to take action on discriminatory systems even when they themselves might not be significantly affected at an individual level. This would usefully increase protection and explicitly define that a “significant” effect can be significant to a group of which an individual is part. This is similarly acknowledged by the recent article 29 working party guidance, which states:

“Processing that might have little impact on individuals generally may in fact have a significant effect on certain groups of society, such as minority groups or vulnerable adults”.

Amendment 75 would clarify that the exemption from prohibition on taking significant decisions based solely on automated processing must not apply to purely automated decisions that engage an individual’s human rights. In relation to general automated processing in Clause 13, the explicit protection of human rights would protect individuals from being subjected to automated decisions that could engage their fundamental rights: for example, by unfairly discriminating against them. A recent study claimed that a facial recognition tool was able to detect individuals’ sexuality based on their photographs taken from online dating sites with greater accuracy than humans. Another recent study claimed that a machine learning tool was able to diagnose depression by scanning individuals’ photographs posted on the social media platform Instagram with greater accuracy than the average doctor.

The rapidly growing field of machine learning and algorithmic decision-making clearly presents new risks. As a minimum, individuals’ basic rights must be explicitly protected at all times and regarded as paramount.

On Amendment 183, personal data is of course already defined as data relating to a data subject which makes him or her identified or identifiable. The administrative decision then becomes one concerning him or her. This is a clarification of what “meaningful information” involves. There is evidence from both the article 29 committee and the ICO’s consultation that some tweaking of “solely” appears compatible with the GDPR. Under the Equality Act 2010, there is a public sector equality duty, and public agencies have an obligation to ensure that their actions, including the generation, purchase or use of machine learning models, do not have discriminatory effects.

On Amendment 119, where automated decisions are made under article 22, data subjects are entitled to minimum safeguards. Recital 71 and paragraph 115 of the Government’s own Explanatory Notes suggest that this includes a right to explanation. However, as UK law has not traditionally used recitals—we heard previously that they do not form part of the Bill—it is unclear how they will be interpreted after they are migrated into domestic law as a potential consequence of the European Union (Withdrawal) Bill. This provision should be included in the main text of the Bill. Unless such an amendment is passed, the Explanatory Notes will be incorrect in communicating the effect of the Bill.

I turn finally to Amendments 74A and 133A. These amendments are derived from a reading of recital 71, and the amendments themselves might be somewhat defective because they might give the impression that any safeguards are being deleted where children are involved. However, recital 71, when it applies to article 22, states that such measures should not concern a child. As I read that—the Minister may be able to clarify—the provisions related to automated decision-taking should not be allowable in connection with children. That requires clarification. In particular, not only is that rider contained in recital 71, but there is a further recital in the GDPR, recital 38, which states:

“Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data”.

That all adds up to quite a number of areas in Clause 13 which have either not been properly transposed from article 22 or which, with some tweaking and clarification of definitions, could be vastly improved. I beg to move, and I look forward to the Minister’s reply.

About this proceeding contribution

Reference: 785 cc1862-5

Session: 2017-19

Chamber / Committee: House of Lords chamber