My Lords, it is a great pleasure to speak to this group of amendments. As it is the first time I have spoken at this stage of the Bill’s proceedings, I declare my interest as a trustee and founder of the mental health charity the Loughborough Wellbeing Centre, which is relevant to this group. If it is lawyers’ confession time, then I am also going to confess to being a non-practising solicitor. But I can assure those Members of the House who are not lawyers that they do not need to be lawyers or ex-lawyers to understand the very simple proposition at the heart of this group of amendments.
Amendments 34 and 35 are in my name, along with those of the noble Baroness, Lady Parminter, the right reverend Prelate the Bishop of Gloucester and the noble Lord, Lord Griffiths of Burry Port. I am very grateful to them for their support for these amendments, which are also supported by the Football Association, Kick It Out, Beat, YoungMinds, the Royal College of Psychiatrists, the British Psychological Society, Mind, the Mental Health Network, the NHS Confederation, Rethink Mental Illness and Mental Health UK. I thank particularly the Mental Health Foundation for its support with making the points that we will cover in this group.
As we have already heard, and rightly, it is difficult with a Bill of this complexity to debate just one topic in a particular group. Although I have not spoken, it has been a great privilege to listen to your Lordships on earlier groups. We have already talked this afternoon and previously about the Government’s triple-shield approach, which replaced the “legal but harmful” provisions that were taken out of the Bill. We have heard that the triple shield consists of the removal of illegal content, the takedown of material in breach of a platform’s own terms of service—we have just been talking about that—and the provision to adults of greater choice over the content that they see on these platforms. What we are talking about in this group of amendments is that third leg—I had put “limb” but have changed it because of what my noble friend Lady Fraser said—of the triple shield: that user empowerment tools should be on by default.
The change suggested by this proposal would require users on these platforms to flip a switch and choose whether to opt in to some of the most dangerous content available online, rather than receiving it by default. This adopts the Government’s existing approach of giving users choice over what they see but ensures that the default is that they will not be served this kind of material unless they actively choose to see it. The new offence on encouragement to serious self-harm, which the Government have committed to introducing, might form part of the solution here. But we cannot criminalise all the legal content that treads the line between glorification and outright encouragement, and no similar power is proposed to address eating disorder content. I know that others will talk about that, and I pay tribute to the work of Vicky Ford MP in relation to eating disorders; she has been brave enough to share her own experiences of those disorders.
During the Bill’s journey through Parliament, we have heard how vulnerable users often internalise the harmful and hateful content that they see online, which in turn can lead to users deliberately seeking out harmful content in an attempt to normalise self-destructive thoughts and behaviours. We have heard how Molly Russell, for example, viewed tweets which normalised her thoughts on self-harm and suicide; we have also heard how people with eating disorders often get what is called “inspiration” on platforms such as Tumblr, Instagram and TikTok.
We know from various studies that viewing this content has a negative effect on people’s mental well-being. A study carried out by the University of Oxford found that viewing images of self-harm often encouraged individuals to start self-harming, and concluded:
“Young people who self-harm are likely to use the internet in ways that increases their risk”.
Research by the Samaritans provided similar results, with 77% of respondents answering that they sometimes or often self-harmed in the same or similar ways after viewing self-harm imagery.
The Mental Health Foundation polled over 3,300 people and found that 67% of the public agreed or strongly agreed that they do not wish to be exposed to harmful content unless they explicitly choose to see it. I think my noble friend the Minister made a similar point earlier, although perhaps without referring to this research.
As we have also heard from the noble Baroness, Lady Merron, who is not in her place, even if a user is not searching for harmful content, they can be led to it through the algorithms. This includes pro-suicide, pro-self-harm, pro-anorexia and pro-bulimia content. In other words, it is too easy for users to see harmful content on these platforms, and this needs to change.
The Government chose to move from the “legal but harmful” provisions to the triple-shield approach. However, the user empowerment tools introduced are neither new nor ground-breaking, because a lot of social media platforms already claim to have filters in place, giving users the ability to hide certain content from their timelines. But many users do not know that those filters are there, or how to use them properly. As it stands, the Government’s solution will be largely ineffective unless these tools are on by default.
Another point I suspect others will make, which we heard in the briefings before this group, is that vulnerability does not stop at the age of 18, so why would there be a cliff edge where there is protection from known harmful content for those under 18 but not for those over 18? As somebody made clear in the Samaritans briefing, which a number of us attended, people can be sectioned for their own protection after the age of 18. Adults, and particularly the vulnerable, may not be in a position to protect themselves, and the trouble with not having the tools on by default is that we are yet again placing the burden of self-protection on the vulnerable and on potential victims, without taking responsibility for this as a society.
There is of course a wider point here—perhaps not for this debate but I am sure it will come up again—which is that not seeing the content does not mean that it does not exist. We will return to this when we debate content involving violence against women and girls. The noble Baroness, Lady Fox, has already referred to the content set out in subsections (10), (11) and (12) of this clause. Does the fact that it is listed mean we are saying that such harmful content is still OK to circulate on the internet, just because people are not seeing it? I would say this raises broader questions, but it is perhaps not a debate for today.
These two amendments would ensure that platforms’ design involves the safest options being on by default. They are two straightforward, common-sense amendments that, as the noble Viscount, Lord Colville—who is not here now—said, balance the understandable concerns about freedom of speech with safety. They do not stop the publication of this objectionable material, but they offer others, particularly the most vulnerable, a real choice about whether they see it. I would argue that it is our minimum duty to make sure these safety protections are on by default. I beg to move.