My Lords, I welcome this opportunity to clarify the purposes of the Bill, but I am not sure that the amendment helps as my North Star. Like the Bill, it throws up as many questions as answers, and I found myself reading it and thinking “What does that word mean?”, so I am not sure that clarity was where I ended up.
It is not a matter of semantics, but in some ways you could say, and certainly this is how it is publicly understood, that the name of the Bill, the Online Safety Bill, gives it its chief purpose. Yet however well intentioned it is, and whatever the press releases say or the headlines print, even a word such as “safety” is slippery, because safety as an end can be problematic in a free society. My worry about the Bill is unintended consequences, and that is not rectified by the amendment. As the Bill assumes safety as the ultimate goal, we as legislators face a dilemma. We have the responsibility of weighing the balance between safety and freedom, but the scales in the Bill are well and truly weighted towards safety at the expense of freedom before we start, and I am, again, not convinced that the amendment weights them back.
Of course, freedom is a risky business, and I always like the opportunity to quote Karl Marx, who said:
“You cannot pluck the rose without its thorns!”
However, it is important to recognise that “freedom” is not a dirty word, and we should avoid saying that risk-free safety is more important than freedom. How would that conversation go with the Ukrainian people, who risk their safety daily for freedom? Also, even the language of safety, and indeed what constitutes the harms from which the Bill and the amendments promise to keep the public safe, needs to be considered in the cultural and social context of the norms of 2023. A new therapeutic ethos now posits safety in ever-expanding pseudo-psychological and subjective terms, and this can be a serious threat to free speech. We know that some activists exploit that concept of safety to claim harm when they merely encounter views they disagree with. The language of safety and harm is regularly used to cancel and censor opponents, and the Government know that: so much so that they considered it necessary to introduce the Higher Education (Freedom of Speech) Bill to secure academic freedom against an escalating grievance culture that feigns harm.
Part of the triple shield is a safety duty to remove illegal content, and the amendment talks about speech within the law. That sounds unobjectionable, and in my mind it is far better than “legal but harmful”, which has gone, but, while illegality might sound clear and obvious, in some circumstances it is not. That is especially true of legal limitations on speech. We all know about the debates around hate speech, for example. These things are contentious offline, and even the police, in particular the College of Policing, seem to find that kind of illegality confusing; at the moment, they are in a dispute with the Home Secretary over just that.
Is it really appropriate that this Bill enlists and mandates private social media companies to judge criminality using the incredibly low bar of “reasonable grounds to infer”? It gets even murkier when the legal standard for permissible speech online will be set partly by compelling platforms to remove content that contravenes their terms and conditions, even if those terms of service restrict speech far more than domestic UK law does. Big tech is being incited to censor whatever content it wishes, as long as that fits its terms and conditions. Between this and determining, for example, what is in filters, which is a whole different issue, one huge irony here, which challenges one of the purposes of the Bill, is that, despite the Government and many of us thinking that this legislation will de-fang and regulate big tech’s powers, the legislation could inadvertently give those same corporates more control over what UK citizens read and view.
Another related irony is that the Bill was, no doubt, designed with Facebook, YouTube, Twitter, Google, TikTok and WhatsApp in mind. However, as the Bill’s own impact assessment notes, 80% of the entities affected have fewer than 10 employees. Many sites, from Wikipedia to Mumsnet, are non-profit or empower their own users to make moderation or policy decisions. These sites, and tens of thousands of British businesses of varying sizes, now face, perhaps unintentionally, an extraordinary amount of regulatory red tape. These onerous duties and requirements might be manageable, if not desirable, for larger platforms, but for smaller ones with limited compliance budgets they could prove a significant, if not fatal, burden. I do not think that is the purpose of the Bill, but it could be an unintended outcome. It also means that regulation could inadvertently act as a barrier to entry for new SMEs, creating an ever more monopolistic stronghold for big tech at the expense of trialling innovations or allowing start-ups to emerge.
I want to finish with the thorny issue of child protection. I have said from the beginning, by which I mean over the many years since the Bill’s inception, that I would have been much happier if it had been more narrowly titled the Children’s Online Safety Bill, to indicate that protecting children was its sole purpose. That in itself would have been very challenging. Of course, I totally agree with Amendment 1’s intention
“to provide a higher level of protection for children than for adults”.
That is how we treat children and adults offline.
5.30 pm
However, even then there are dilemmas. For example, a filter for suicide content might prevent a teenage user from seeing some of the most awful, hideous and nihilistic images, exactly the material that the Bill’s purpose is to get rid of, but how do we ensure that it does not also cut off that teenager’s access to help, which they might want if they are feeling suicidal? How do we ensure that they are not denied valuable news items, debate and discussion of educational merit? Parents and society face those sorts of cost-benefit challenges every day. Everyone wants their own children, indeed all children, to be kept safe from harm, but we do not lock children in their bedrooms 24/7 just in case they encounter risk. We know that that would deprive them of crucial developmental opportunities to grow, to learn and to manage risk. A whole body of educational scholarship looks at the downsides of adult fears creating a generation of cotton-wool kids: overprotection has been detrimental to children’s resilience, and children are often the victims when adults overprotect. So I would warn against overselling the Bill as a guarantee of risk-free safety for the young online, at any cost.
The whole issue of children is a difficult area. I know to my cost, from a rather ill-chosen way in which I expressed myself in a newspaper interview some years ago on the dilemmas of child protection versus free speech, that mis-speaking can mean being branded as complacent or even as an apologist for the most heinous horrors that can be inflicted on the young, from grooming to access to pornography. However, when people say “Think of the children”, or when we are rightly reminded to consider the tragedy of Molly Russell, for example, we can find ourselves chilled into walking on eggshells and not saying what we think.
We need to be bravely dispassionate in our discussions on protecting children online, and to scrutinise the Bill carefully for unintended consequences for children. But we must also avoid allowing our concern for children to spill over into infantilising adults and treating adult British citizens as though they were children who need protection from speech. There is a lot to get through in the Bill, but the amendment, despite its good intentions, does not resolve the dilemmas we are likely to face in the coming weeks.