My Lords, I will not engage with the amendments of the noble Lord, Lord Moylan, since mine are probably the diametric opposite of what he has been saying.
I say, first, on behalf of the noble Baroness, Lady Finlay, that she regrets very much not being able to be here. Amendment 204 in her name is very much a Samaritans amendment. The Samaritans have encouraged her to put it forward and encourage us to support it. It is clear that the Minister has got his retaliation in first and taken the wind out of all our sails right at the beginning. Nevertheless, that does not mean that we cannot come back at the Minister and ask for further and better particulars of what he has to say.
Clearly the Government’s decision to bring in the new offence of encouraging or assisting self-harm is welcome. However—certainly in the view of the Samaritans—this will bring into the remit of the Bill only content that encourages serious self-harm, which must meet the high threshold of amounting to grievous bodily harm. Their view, therefore, is that much harmful content will still be left untouched and available online. This could include information, depictions, instructions and advice on methods of self-harm and suicide. It would also include content that portrays self-harm and suicide as positive or desirable, and graphic descriptions or depictions of self-harm and suicide.
Perhaps the Minister could redouble his efforts to assure us as to how the Bill will take a comprehensive approach to placing duties on all platforms to reduce all dangerous suicide and self-harm content, such as detailed instructions on how people can harm themselves, for adults as well as children. This should also be in respect of smaller sites; it is not just the larger category 1 sites that will need to proactively remove priority illegal content, whatever the level of detail in their risk assessment. I hope I have done my duty by the noble Baroness, Lady Finlay, who very much regrets that she was not able to be here.
My own Amendments 55, 59, 64 and 181 are about changes in social media. The Bill really began its life at the high point of the phase where services were free to the user and paid for by adverts. The noble Lord talked about this being a Twitter Bill. Well, to some extent we are influenced by what Twitter has been doing over the last 12 months: it has begun to charge for user-verification services and some features, and other services are adopting versions of what you might call this premium model. So there is a real concern that Clause 12 might not be as comprehensive as the Minister seems to be asserting. I assume that it is covered by the “proportionate” wording in Clause 12, and therefore it would not be proportionate—to put it the other way round—if they charged for this service. I would very much like the Minister to give the detail of that, so I am not going to cover the rest of the points that I would otherwise have made.
The Minister said that a blanket approach would not be appropriate for user-empowerment control features. The thought that people have had is that a platform might choose to have a big red on/off button that would try to cover all the types of content that could be subject to this kind of user-empowerment tool. I do not think the contents of Clause 12 are as clear as the Minister perhaps believes them to be, but they go with the grain of the new government amendments. I should have said right at the beginning—although many of us regret the deletion of “legal but harmful” from the original draft Bill—that the kind of assessment that is going to be made is a step in the right direction and demonstrates that the Minister was definitely listening in Committee. However, if a blanket approach of this kind is taken, that would not be in the spirit of where these user-empowerment tools are meant to go. I welcome what the Minister had to say, but again I would like the specifics of where he thinks the wording is helpful in making sure that we have a much more granular form of user-empowerment control feature when this eventually comes into operation.
Finally, I return to user verification. This is very much in the footsteps of the Joint Committee. The noble Baroness, Lady Merron, spoke very well in Committee to what was then Amendment 41, which was in the name of the noble Lord, Lord Stevenson. It would require category 1 services to make visible to users whether another user was verified or non-verified.
6.30 pm
Amendment 182 to Clause 57 is a rather different animal, but we are again trying to get improvements to what is in the clause at the moment. It tries to focus even more on empowering users by giving them choice. Alongside offering UK users a choice to verify, it will ensure that users are also offered a choice to make that verification visible to others. In a sense, it goes very much with the grain of what the Government have been moving towards with their approach to the use of user-empowerment tools and giving choice at the outset. That is, in a sense, the compromise between default and non-default, as we discussed in Committee. This offers users a different kind of choice, but nevertheless an important choice.
Just as the Bill would not force any UK users to verify, so this amendment would not force any UK users to make their choice to verify visible. All it would do is require that platforms offer them an option. Research suggests that most UK users would choose to verify and to make that visible; I am sure that the Minister is familiar with some of the research. New research published this week by Clean Up the Internet, based on independent opinion polling conducted by Opinium, found that 78% of UK social media users say that it would be helpful to be able to see which social media accounts have been verified to help them avoid scams. Almost as many—77%—say that being able to see which accounts have been verified would help with identifying bullies or trolls. Some 72% say it would help with spotting false or misleading news stories, and 68% say it would help with buying products or services.
Ofcom’s own research into online fraud, published in March this year, found:
“A warning from the platform that content or messages come from an unverified source”
is the single most popular measure that platforms could introduce to help users avoid getting drawn into scams. So it would be an extremely popular move for the Minister to accept my amendment, as I am sure he will appreciate.