
Online Safety Bill

My Lords, we are coming to some critical amendments on a very important issue relatively late in the Bill, having had little discussion of it so far. It is not often that committees of this House sit around and say, “We need more lawyers”, but this is one of those areas where that was true.

Notwithstanding the blushes of my noble friend on the Front Bench, it is interesting that we have not had significant input in our debate from people who understand the law of freedom of expression and could contribute to our discussions of how online platforms should deal with questions of the legality of content. These questions are crucial to the Bill, which, if it does nothing else, tells online platforms that they must be really robust in taking action against content deemed illegal under the broad swathe of United Kingdom law that criminalises certain forms of speech.

We lean heavily on providers, telling them, “If you fail at this, you’re in big trouble”. The pressure to deal with illegal content will be huge, yet illegality itself covers a broad spectrum. At one end is child sexual exploitation and abuse material, where in many cases it is obvious from the material itself that it is illegal and there is strict liability: there is never any excuse for distributing that material, and pretty much everyone everywhere in the world would agree that it should be criminalised and removed from the internet. At the other end are things that we discussed in Committee, such as public order offences, where, under some interpretations of Section 5 of the Public Order Act, swearing at somebody or looking at them in a funny way in the street could be deemed alarming or harassing. Some people interpret public order offences in that very broad sense, and there would be far less agreement about whether a specific action is or is not illegal, and about whether the law is correctly calibrated or being used oppressively. So we have this broad spectrum of illegality.

The question we need to consider is where we want providers to draw the line. They will be making judgments on a daily basis. I said previously that I had to make those judgments in my job. I would write to lawyers and they would send back an expensive piece of paper that said, “This is likely to be illegal”, or, “This is likely not to be illegal”. It never said that something was definitely illegal or definitely not illegal, apart from content of the kind I have described, such as child sexual abuse material, which you would not need to send to a lawyer at all. The bulk of the issues we are dealing with, though, you would send to a lawyer; and if you sent the same content to a second lawyer, you would get another “likely” or “not likely”, and you would have to come to some kind of consensus view as to the level of risk you wished to take on that particular form of speech or piece of content.

This is really challenging in areas such as hate speech, where exactly the same language has a completely different meaning in different contexts, and may or may not be illegal. To give a concrete example, we would often deal with anti-Semitic content being shared by anti-anti-Semitic groups: people trying to raise awareness of anti-Semitic speech. Our reviewers would quite commonly remove that speech; they would see it, and it would look like grossly violating anti-Semitic speech. Only later would they realise that the person was sharing it to raise awareness. The N-word is a gross term of racial abuse, but if you are an online platform you permit it a lot of the time, because people who use it self-referentially expect to be able to do so, and if you start removing it they will, naturally, get very upset. People also expect to use it when it appears in song lyrics and they are sharing music. I could give thousands of examples of speech that may or may not be illegal depending entirely on the context in which it is used.

We will be asking platforms to make those judgments on our behalf, and they will have to take that seriously, because if they let through something that is illegal they will be in serious trouble. If they misjudged a case, thinking that anti-Semitic hate speech was being circulated by Jewish groups to raise awareness when it was in fact being circulated by a Nazi group to attack people, and that content fell foul of UK law, they would be in trouble. These judgments are critical.

We have the test in Clause 173, which says that platforms should decide whether they have “reasonable grounds to infer” that something is illegal. In Committee we debated raising that to a higher bar, saying that we wanted a stronger evidential basis. That did not find favour with the Government. We hoped they might raise the bar unilaterally, but they have not. So we come back again in a different way to try to be helpful, because I do not think that the Government want excessive censorship; they have said throughout the Bill’s passage that they are not looking for platforms to be overly censorious. We have looked at the wording again and thought about how we could ensure that the bar is not operated in a way that I do not think the Government intend. We certainly would not want that to happen.

Looking at the current wording of Clause 173, the test there has two elements. The first asks: do you have reasonable grounds to infer? A clause in brackets then says that, if you do have reasonable grounds to infer, you must treat the content as illegal. In this amendment we seek to remove that second part, because it seems problematic. It is odd to pair “inference”, which is by definition mushy, with “must”, which is very certain, in the same clause: we are telling platforms, “If you have this mushy inference, you must treat the content as illegal”. Certainly, if I were working at a platform, the only way I could interpret that “must” is: “If in doubt, take it out”, and that is really problematic. I know that is not the Government’s intention, and if the content were child sexual exploitation material then of course you “must”. But if it is the kind of abusive content that you have reasonable grounds to infer may be an offence under the Public Order Act, “must” you always treat it as illegal? As I read the rest of the Bill, if you are treating content as illegal, the sense is that you should remove it.

That is what we are trying to get at. The Government clearly intend “must” for the hard end of the spectrum, the very clearly bad content. However, we need something different when we are dealing with content that is much more marginal. Otherwise, the price we pay will be in freedom of expression.

People in the United Kingdom use quite robust, sweary language, and I sometimes think that some of the rules we apply penalise the vernacular. People who use sweary, robust language may be doing so entirely legally; the United Kingdom does not generally restrict people from using that kind of language. However, we risk heading towards a scenario in which people who post such content find that the platform takes it down. They will complain to the platform, saying, “Why the hell did you take my content down?”; in fact, they will probably use stronger words than that to register their complaint. When they do, the platform will say, “We had reasonable grounds to infer that it was in breach of the Public Order Act, for example, because somebody might feel alarmed, harassed or distressed by it. And look: in this clause, it says we ‘must’ treat it as illegal. Sorry, there is nothing else we can do. We would have loved to give you the benefit of the doubt and allow you to carry on using that kind of language, because we think there is some margin within which you have not behaved illegally. But unfortunately, because of the way Clause 173 has been drafted, our lawyers tell us we cannot afford to take the risk”.

In the amendment, we are trying, I think, to help the Government out of a situation which, as I say, I do not believe they want. My fear is that the totality of Clause 173, the low bar of the test combined with the “must treat as” language, will lead platforms to take the attitude, “Safety first; if in doubt, take it out”, and that is not the regime we want. I beg to move.

About this proceeding contribution

Reference: 831 cc2135-7
Session: 2022-23
Chamber / Committee: House of Lords chamber