My Lords, I am pleased that we have an opportunity, in this group of amendments, to talk about suicide and self-harm content, given its importance. It is important to set out what we expect to happen with this legislation. I rise particularly to support Amendment 225, to which my noble friend Lady Parminter added her name. I do so more because the way in which this kind of content is shared is incredibly complex, rather than simply because of the question of whether it is legal or illegal.
Our goal in the regulations should be twofold. First, we of course want to reduce the likelihood that somebody who is at lower risk of suicide and self-harm might move into the higher-risk group as a result of their online activity on user-to-user services. That is our baseline goal: we do not want anyone to go from low risk to high risk. Secondly, as well as this “create no new harm” goal, we have a harm reduction goal, which is that people who are already at higher risk of suicide and self-harm might move into a lower-risk category through the use of online services to access advice and support. It is really important, in this area, that we do not lose sight of the fact that there are two aspects to the use of online services. It is no simple task to achieve both these goals, as they can sometimes be in tension with each other in respect of particular content types.
There is a rationale for removing all suicide and self-harm content, as that is certainly a way to achieve that first goal. It makes it less likely that a low-risk person would encounter—in the terms of the Bill—or be exposed to potentially harmful content. Some countries certainly do take that approach. They say that anything that looks like it is encouraging suicide or self-harm should be removed, full stop. That is a perfectly rational and legitimate approach.
There is, however, a cost to this approach, which I wish to tease out; it would be helpful in this debate to understand it, and it might not be immediately apparent what that cost is. It arises because there are different kinds of individuals posting this content. If we look at the experience of what happens on online platforms, there is certainly a community of people who post content with the express aim of hurting others: people whom we often call trolls, who are small in number but incredibly toxic. They put out suicide and self-harm content because they want other people to suffer. They might think it is funny but, whatever they think, they are doing it with an expressly negative intent.
There is also a community of individuals and organisations who believe that they are sharing content to help those who are at risk. This can vary: some are formal organisations such as the Samaritans; others are enterprising individuals, sometimes people who have themselves had experiences that they wish to share, who will create online fora and share content. It might be content that looks similar to harmful content, but their express goal is to help others online, and most of these fora exist for that purpose. Then there are the individuals themselves, who are looking for advice and support relevant to what is happening in their own lives and to connect with others who share their experiences.
We might see the same piece of content very differently depending on which of these groups has posted it. If an individual in the troll group is showing an image of self-harm, that is an aggressive, harmful act; there is no excuse for it, and we want to get rid of it. However, the same content might be part of an educational exchange when posted by an expert organisation. The noble Baroness, Lady Finlay, said that we needed to make sure that this new legislation did not inadvertently sweep up those who were in that educational space.
The hardest group is the individuals themselves, for whom, in many cases, posting that content is a cry for help; an aggressive response by the platform can, sadly, be counterproductive for an individual who has gone online to seek help. The effect is that the content is removed and, because they violated the platform’s terms of service, that person, who is feeling lonely and vulnerable, might lose the social media accounts that are important to them for seeking help. By seeking to reduce their exposure to content, we might therefore inadvertently create a scenario in which they lose all that is valuable to them. That is the other inadvertent harm that we want to ensure we avoid in regulating and in seeking to have Ofcom issue the most appropriate guidance.
We should be able to advance both goals: removing the content that is posted with harmful intent, while enabling content that is there as a cry for help or as a support and advice service. It is in that context that something like the proposal for an expert group for Ofcom is very helpful. Again, having worked at a platform, I can say that we often reached out to advisers and sought help. Sometimes the advice was conflicting. Some would say that, if someone was sharing images of self-harm, it was really important that those images be removed; others would say that, in certain contexts, it was really important to allow that person to share the image of self-harm and have a discussion with others, and that perhaps the right response was to use it as a trigger to point them towards the support service that they need.
Again, protocols were developed to deal with cases where somebody is at imminent risk of suicide and there is nothing more that the platform itself can do. If a platform has detected that somebody is at imminent risk of suicide, it needs to find a way to ensure that either a support body such as the Samaritans or, in many cases, the police are notified, so that they can go to that person’s house, knock on the door and prevent the suicide happening. Platforms in some countries have the relationships that they need with local bodies. Passing on that information is very sensitive; you are disclosing highly sensitive personal data to an outside body against the individual’s wishes. In many cases there will not be consent from them, and that has to be worked through.
If we are thinking about protocols for dealing with self-harm content, we will reach some of the same issues. It may be that informing parents, a school or some other body to get help to that individual would be the right thing to do. That is very sensitive in terms of the data disclosure and privacy aspects.
The Bill is an opportunity to improve all of this. There are pieces of very good practice and, clearly, areas where not enough is being done and too much very harmful content, particularly content posted with the express intent of causing harm, is being allowed to circulate. I hope that, through the legislation and by getting these protocols right, we can get to the point where we are both preventing lower-risk people from moving into a higher-risk category and enabling people already in a high-risk category to get the help, support and advice that they need. Nowadays, online services are often the primary tool that could benefit them.