My Lords, this amendment and Amendments 74, 93 and 123 are part of a larger group that have been submitted as a package loosely referred to as the AV and harms package. They have been the subject of much private debate with the Government, for which we are grateful, and among parliamentarians, and have featured prominently in the media. The amendments are in my name and those of the noble Lord, Lord Bethell, the right reverend Prelate the Bishop of Oxford and the noble Lord, Lord Stevenson, but enjoy the support of a vast array of Members of both Houses. I thank all those who have voiced their support.
The full package of amendments defines and sets out the rules of the road for age assurance, including the timing of its introduction, and the definition of terms such as age verification and age assurance. They introduce the concept of measuring the efficacy of systems with one eye on the future so that we as parliamentarians can indicate where and when we feel that proportionality is appropriate and where it is simply not—for example, in relation to pornography. In parallel, we have developed a schedule of harms, which garners rather fewer column inches but is equally important in establishing Parliament’s intention. It is that schedule of harms that is up for debate today.
Before I lay out the amendment, I thank the 26 children’s charities which have so firmly got behind this package and acknowledge, in particular, Barnardo’s, CEASE and 5Rights, of which I am chair, which have worked tirelessly to ensure that the full expertise of children’s charities has been embedded in these amendments. I also pay tribute to the noble Baroness, Lady Benjamin, who in this area of policy has shown us all the way.
The key amendment in this group is Amendment 93, which would place a schedule of harms to children in the Bill. There are several reasons for doing so, the primary one being that by putting them in the Bill we are stating the intention of Parliament, which gives clarity to companies and underlines the authority of Ofcom to act on these matters. Amendments 20, 74 and 123 ensure that the schedule is mirrored in risk assessments and tasks Ofcom with updating its guidance every six months to capture new and emerging harms, and as such are self-evident.
The proposed harms schedule is centred around the four Cs, a widely used and understood taxonomy of harm used in legislation and regulation around the globe. Importantly, rather than articulate individual harms
that may change over time, it sets its sights on categories of harm: content, contact, conduct and contract, which is sometimes referred to as commercial harm. It also accounts for cumulative harms, where two or more risk factors create a harm that is greater than any single harm or is uniquely created by the combination. The Government’s argument against the four Cs is that they are not future-proof, which I find curious since the very structure of the four Cs is to introduce broad categories of harm to which harms, particularly emerging harms, can be added. By contrast, the Government are adding to an ever-growing list of individual harms.
I wish to make three points in favour of our package of amendments relating first to language, secondly to the nature of the digital world, and finally to clarity of purpose. It is a great weakness of the Bill that it consistently introduces new concepts and language—for example, the terms “primary priority content”, “priority content” and “non-designated content”. These are not terms used in other similar Bills across the globe, they are not evident in current UK law and they do not correlate with established regimes, such as equalities legislation or children’s rights under the convention, more of which in group 7.
The question of language is non-trivial. It is the central concern of those who fight CSAE around the world, who frequently find that enforcement against perpetrators or takedown is blocked by legal systems that define child sexual abuse material differently—not differently in some theoretical sense but because the same image can be categorised differently in two countries and then become a barrier to enforcement across jurisdictions. Leaders from WeProtect, the enforcement community and representatives whom I recently met from Africa, South America and Asia have all made this point. It undermines the concept of UK leadership in child protection that we are wilfully and deliberately rejecting accepted language which is embedded in treaties, international agreements and multilateral organisations to start again with our own, very likely with the same confused outcome.
Secondly, I am concerned that, while both the Bill and the digital world are predicated on system design, the harms are all articulated as content, with insufficient emphasis on system harms—such as careless recommendations, features that spread engagement and the sector-wide focus on maximising engagement—which are the very things that create the toxic and dangerous environment for children. I know, because we have discussed it, that the Minister will say that this is all in the risk assessment, but the risk assessment asks regulated companies to assess how a number of features contribute to harm, mostly expressed as content harm.
What goes through my mind is the spectre of Meta’s legal team, which I watched for several days during Molly Russell’s inquest; they stood in a court of law and insisted that hundreds, in fact thousands, of images of cut bodies and depressive messages did not constitute harm. Rather, they regarded them as cries for help or below the bar of harm as they interpreted it. Similarly, there was material that featured videos of people jumping off buildings—some of them sped-up versions of movie clips edited to suggest that jumping was freedom—and I can imagine a similar argument that
says that kind of material cannot be considered harmful, because in another context it is completely legitimate. Yet this material was sent to Molly at scale.
5.15 pm
It is not good enough to characterise harms simply by establishing what is or is not harmful content. The previous debate really underlined that it takes a long time, and it is very complicated, to establish what is harmful. But we must make utterly clear that the drip feed of nudges, enticements and recommendations, and the creation of a toxic environment—overwhelming a child of 14 with more than 1,400 messages, whether or not they meet that bar of harmful content—is in itself a harm. A jukebox of content harms is not future-proof, and it fails to name the risks of the system. It is to misunderstand where the power of digital design actually lies.
Finally, there is the question of simplicity and clarity. As we discussed on the first day of Committee, business wants clarity, campaigners want clarity, parents want clarity, and Ofcom could do with some clarity. If not the four Cs, my challenge to the Government is to deliver a schedule that has the clarity and simplicity of the amendments in front of us, in which harm is defined by category not by individual content measurements, so that it is flexible now and into the future, and foregrounds the specific role of the system design not only as an accomplice to the named harm but as a harm itself. I beg to move.