My Lords, my Amendment 43 tackles Clause 12(1), which expressly says that the duties in Clause 12 are to “empower” users. My concern is to ensure that, first, users are empowered and, secondly, legitimate criticism around the characteristics listed in Clause 12(11) and (12), for example, is not automatically treated as abusive or inciting hatred, as I fear it could be. My Amendment 283ZA specifies that, in judging content that is to be filtered out after a user has chosen to switch on various filters, the providers act reasonably and pause to consider whether they have “reasonable grounds” to believe that the content is of the kind in question—namely, abusive or problematic.
Anything under the title “empower adult users” sounds appealing—how can I oppose that? After all, I am a fan of the “taking back control” form of politics,
and here is surely a way for users to be in control. On paper, replacing the “legal but harmful” clause with giving adults the opportunity to engage with controversial content if they wish, through enhanced empowerment tools, sounds positive. In an earlier discussion of the Bill, the noble Baroness, Lady Featherstone, said that we should treat adults as adults, allowing them to confront ideas with the
“better ethics, reason and evidence”—[Official Report, 1/2/23; col. 735.]
that has been the most effective way to deal with ideas from Socrates onwards. I say, “Hear, hear” to that. However, I worry that, rather than users being in control, there is a danger that the filter system might infantilise adult users and disempower them by hard-wiring into the Bill a duty and tendency to hide content from users.
There is a general weakness in the Bill. I have noted that some platforms are based on users moderating their own sites, which I am quite keen on, but these will be detrimentally affected by the Bill. Such platforms leave users in charge of their own moderation, yet under the Bill those users would have no power to decide what is in, for example, Wikipedia or other Wikimedia projects, which are added to, organised and edited by a decentralised community of users. So I will certainly not take the phrase “user empowerment” at face value.
I am slightly concerned about linguistic double-speak, or at least confusion. The whole Bill is being brought forward in a climate in which language is weaponised in a toxic minefield—a climate of, “You can’t say that”. More nerve-rackingly, words and ideas are seen as dangerous and interchangeable with violent acts, in a way that needs to be unpicked before we pass this legislation. Speakers can be cancelled for words deemed to threaten listeners’ safety—but not physical safety; the opinions are said to be unsafe. Opinions are treated as though they cause damage or harm as viscerally as physical aggression. So lawmakers have to recognise the cultural context and realise that the law will be understood and applied in it, not in the abstract.
I am afraid that the language in Clause 12(1) and (2) shows no awareness of this wider backdrop—it is worryingly woolly and vague. The noble Baroness, Lady Morgan, talked about dangerous content, and all the time we have to ask, “Who will interpret what is dangerous? What do we mean by ‘dangerous’ or ‘harmful’?”. Surely a term such as “abusive”, which is used in the legislation, is open to wide interpretation. Dictionary definitions of “abusive” include words such as “rude”, “insulting” and “offensive”, and it is certainly subjective. We have to query what we mean by these terms when some commentators complain that they have been victims of online abuse, yet when you check their timelines you notice that, actually, they have been subject to nothing more than angry, and sometimes justified, criticism.
I recently saw a whole thread arguing that the Labour Party’s recent attack ads against the Prime Minister were an example of abusive hate speech. I am not making a point about those ads; I am asking who gets to decide. If this is the threshold for filtering content, there is a danger of institutionalising safe-space echo chambers. “Abusive” can also be a confusing word for users, because if someone applies a user empowerment tool to protect themselves from abuse, the threshold at which the filter operates could be much lower than they intend or envisage; yet, by definition, the user would not know what had been filtered out in their name, and they would have no control over the filtering because they never see the filtered content.
The same is true of the Bill’s use of the term “incites hatred”. The word “hatred” in 2023 is highly contentious in the public arena. Indeed, over the last decade Parliament has wrestled with criminal offences around the incitement of hatred, and safeguards were built into past legislation, including free speech clauses in controversial areas such as religion. However, it seems to me that in this Bill the word “hatred” is simply free-floating. A user who understands “incites hatred” to cover really malicious, nasty content might not realise how much other content could be filtered out if the filtering tool operates at a low threshold for what counts as inciting hatred.
It is also the case that inciting hatred around protected characteristics is a fraught issue offline, let alone online. There are huge rows about whether accusations of Islamophobia and of inciting hatred of Muslims are sometimes used to avoid open debates on extreme Islamist views. For example, will images such as the cartoons in the Charlie Hebdo magazine be seen by some as inciting hatred, and will they get filtered out? Similarly, some say that accusations of anti-Semitism—inciting hatred of Jewish people—are used to quell legitimate criticism of Israeli policy. I could go on.
I am not making a comment on any of those issues, other than to note that those who think that using hatred as a basis for filtering online content is easy need to get out a bit more—and that is before we even get to the gender wars. Regularly, those who assert the immutability of biological sex are accused of whipping up hatred against trans people; Joanna Cherry MP has had a talk cancelled by the Stand Comedy Club for just that. In my opinion, the label “transphobic hate speech” directed at Joanna Cherry MP is totally illegitimate, because she is a crusader for women’s rights and lesbian rights; but it does not matter whether you and I agree on that or whether we have an argument about it, because that is what debate is. We have to ask who from a big tech company will filter out material or decide what is, or is not, hatred. These are the kinds of issues that, we have to note, are difficult.
It is worth asking the Minister: who do the Government envisage will do the filtering? Do online filterers, let alone algorithms or machine learning, have the qualifications to establish what constitutes abuse or hatred? In other professions, from the College of Policing to overzealous HR departments and senior management teams in universities, we have seen overcaution in censoring and banning material in the name of tackling hatred, abuse and that weasel word “harm”. Rather than empowering users, will the Bill not empower a new staff team of filterers, trained in their own company’s equality, diversity and inclusion norms, to use filtering tools at the lowest common denominator, leading to over-removal policies that err on the side of caution in order to comply with regulations? All that Amendment 43 does is to borrow the language of “discussion or criticism” from the free speech clause in the stirring up hatred offences section
of the Public Order Act 1986 to try to lift the threshold at which Clause 12(11) and (12) might kick in. It is not ideal, but there is a lot at stake.
I completely oppose those amendments that promote a default setting: they clearly advocate a censorious approach to legal speech. I rather liked an analogy that I heard the IEA’s Matthew Lesh use recently, when he said, “Imagine if, when you go to a bookshop, you have to ask the shop assistant to let you into the special room that contains harmful books”. Of course, the material is still accessible, but creating a barrier to accessing certain speech that is perhaps uncomfortable in terms of religion, race or gender also forces people to identify themselves. If you have to ask, “Please can I go into the harmful speech section?”, and step into the harmful section of the bookshop, you immediately label yourself as being in favour of dangerous or harmful material.
If those advocating these provisions are so certain that this speech is problematic, it would be more honest simply to outlaw it. What is more, the director of Defend Digital Me, Jen Persson, has raised the concern that, by treating all adults as being at risk of harm in this way, the Bill will infantilise us, because it assumes that adults are inherently vulnerable. That is the sort of paternalistic Big Brother approach that we want to avoid in the Bill.
Finally, it is damaging in a democracy to have a proliferation of things that are unsayable. As the Bill reflects, so much debate now takes place online that it is surely our responsibility as legislators to encourage a diversity of views to circulate, rather than carelessly or inadvertently narrowing the range of what can be said. On previous groups we mentioned Germany’s infamous legislation, brought in in 2017, which is now facing major opposition at home. The Danish free-speech think tank Justitia notes that, though
“the German government’s adoption of the NetzDG was a good faith initiative to curb hate online, the law has provided a blueprint for Internet censorship that is being used to target dissent and pluralism.”
I fear that, unless we are very careful, this section of the Bill will do the same.