Online Safety Bill

My Lords, I am pleased to speak on this group of amendments, and I will particularly address the amendments in the name of my noble friend Lord Stevenson. To start with the very positive, I am very grateful to the Minister for signing Amendment 40—as has already been commented, this is hopefully a sign of things to come. My observation is that it is something of a rarity, and I am containing my excitement as it was agreement over one word, “effectively”. Nevertheless, it is very welcome support.

These amendments aim to make it clearer to users whether those whom they interact with are verified or non-verified, with new duties backed up by a set of minimum standards, to be reflected in Ofcom’s future guidance on the user verification duty, with standards covering—among other things—privacy and data protection. The noble Lord, Lord Clement-Jones, helpfully referred your Lordships’ House to the report of the Joint Committee and spent some useful time on the challenges over anonymity. As is the case with so many issues on other Bills and particularly on this one, there is a balance to be struck. Given the proliferation of bots and fake profiles, we must contemplate how to give confidence to people that they are interacting with real users.

Amendment 141, tabled by my noble friend Lord Stevenson and supported by the noble Lord, Lord Clement-Jones, requires Ofcom to set a framework of principles and minimum standards for the user verification duty. The user verification duty is one of the most popular changes to be made to the Bill following the pre-legislative scrutiny process and reflects a recommendation of the Joint Committee. Why is it popular? Because the public understand that the current unregulated approach by social media platforms is a major enabler of harmful online behaviour. Anonymous accounts are more likely to engage in abuse or harassment and, for those at the receiving end, threats from anonymous accounts can feel even more frightening, while the chances of any effective enforcement by the police or platforms are lower.

As we know, bad actors use networks of fake accounts to peddle disinformation and divisive conspiracy theories. I am sure that we will come back to this in later groups. This amendment aims to ensure that the user verification duty delivers in the way that the public and the Government hope that it will. It requires that the systems which platforms develop in response to the duty are sufficiently rigorous and accessible to all users.

The noble Baroness, Lady Kidron, talked about affordability, something that I would like to amplify. There will potentially be platforms which claim that their verification systems genuinely verify a user’s identity when they do not; or those systems will be unaffordable to ordinary users, as the noble Baroness said; or data will be used inappropriately. This is not theoretical. She referred to the Meta Verified product, which looks like it might be more rigorous, but at a cost of $180 per year per account, which will not be within the grasp of many people. Twitter is now also selling blue verification ticks for $8, including to those who are scamming, impersonating others, or acting as propagandists for figures such as Putin. This amendment future-proofs the duty and allows flexibility. It will not tie the hands of either the regulator or the platforms. Therefore, I hope that it can find some favour with the Minister.

Amendment 303, again tabled by my noble friend Lord Stevenson and supported by the noble Lord, Lord Clement-Jones, adds a definition of “user identity verification”. I agree with the noble Lord about how strange it was that, in Committee in the Commons, Ministers felt that user identity verification was somehow an everyday term which did not need definition. I dispute that. It is no better left to common sense than the other terms for which we do have definitions in Clause 207—for example, “age assurance”, “paid-for advertisement” and “terms of service”. All these get definitions. Surely it is very wise to define user identity verification as well.

8.15 pm

Without a definition, there is obviously scope for dispute about what counts as verification. As we heard earlier in Committee, a dispute over what something means only creates the conditions for uncertainty, delay and legal costs. Therefore, I hope that we can see a brief definition that provides clarity for regulators and platforms and reduces the potential for disputes and enforcement delays. If we could rely on platforms to operate in good faith, in the interests of all of us, we would not even need the Bill.

Amendment 41, again tabled by my noble friend Lord Stevenson and supported by the noble Lord, Lord Clement-Jones, would require category 1 services to make visible to users whether another user is verified or non-verified. There is already a duty to allow users to be verified and to allow all users to filter out interaction with unverified accounts, but these duties must be—to use that word again—effective.

In cases of fraud, we well know that online scammers rely heavily on deceptive fake accounts, often backed up by reviews from other fake accounts, and that they will think twice about going through any credible verification process because it will make them more traceable. So a simple and clear piece of advice, once these duties are in force, would be to check whether the user you are interacting with is verified. That would be powerful advice for consumers to help them avoid fraud.

In the case of disinformation—again, something we will return to in a later group—bad actors, including foreign Governments, are setting up networks of fake accounts which make all sorts of false claims about their identity: maybe that they are a doctor, a British Army veteran or an expert in vaccines. We have seen and heard them all. We ask the public to check the source of the information they read, and that would be a lot easier if it were obvious who is verified and who is not. For those who are subject to online abuse or threats, being able to see if an account is verified would empower them to make more informed decisions about the source of the problem, and therefore to take more definitive steps to protect themselves.

It is absolutely right, as the noble Baronesses, Lady Bull and Lady Fox, outlined, that there are very legitimate reasons why some people do not want their identity shared when they are using a service. This issue was raised with me by a number of young people whom I, like other noble Lords, had the opportunity to speak to at a meeting organised by the NSPCC. They explained how they experienced the online world and how they wanted to be able to use it. There are times when they need to protect their identity in order to benefit from using a service and to explore various aspects of themselves, and I believe we should enable that protection.

Amendments in this group from the noble Lord, Lord Moylan, bring us back to previous debates on crowdsourced sites such as Wikipedia, so I will not repeat the points made then, but I feel sure that the Minister will provide the reassurance that the noble Lord seeks, and we all look forward to it.

I have a question for the Minister in concluding my comments on this group. Could he confirm whether, under the current provisions, somebody’s full name would have to be publicly displayed for the verification duty to have been met, or could they use a pseudonym or a generic username publicly, with verification having taken place in a private and secure manner? I look forward to hearing from the Minister.

About this proceeding contribution

Reference: 829 cc1746-8
Session: 2022-23
Chamber / Committee: House of Lords chamber