My Lords, I speak to Amendment 75 in particular, but the whole issue of automated decision-making is extremely worrying.
As we have gone through this Bill, I have been desperately hoping that some of the most repressive bits are a negotiating tactic on the Government’s part, and that before Report they will say, “We’ll take out this really nasty bit if you let us leave in this not really quite so nasty bit”. I feel that this issue is one of the really nasty bits.
I thank Liberty, which has worked incredibly hard on this Bill and drawn out the really nasty bits. Under the Data Protection Act 1998, individuals have a qualified right not to be subject to purely automated decision-making and, to the extent that automated decision-making is permitted, they have a right to access information relating to such decisions made about them. The GDPR clarifies and extends these rights, to the point that automated decisions which engage a person’s human rights are not permissible.
This could include being subjected to unfair discrimination. The noble Lord, Lord Clement-Jones, used the phrase, “unintended discrimination”—for example, detecting sexuality or diagnosing depression. The rapidly growing field of machine learning and algorithmic decision-making presents some new and very serious risks to our right to a private life and to our freedoms of expression and assembly. Such automated decision-making is deeply worrying when it is done by law enforcement agencies or the intelligence services, because the decisions could have adverse legal effects. Such processing should inform, rather than determine, officers’ decisions.
We must have the vital human rights safeguard of a requirement for human involvement. After the automated decision has been produced, there has to be a human who says whether or not it is reasonable.