Dear Damian Hinds,
I am writing to you with some thoughts on the practice and practicality of the Online Safety Bill, which is currently at Committee stage in the House of Commons. The subject of regulation and responsibility on the internet has long been of interest to me, and I would appreciate hearing your views on the doubts I have about this Bill's potential to reduce online harm — and the real-life violence it can lead to.
There are some interesting ideas in this Bill, and I am intrigued to see if it does indeed achieve its intended aims. The extension of OFCOM’s oversight into online content, and the placing of responsibility for user-generated content on the shoulders of the platforms hosting it are both measures that have some potential to bring the desired changes. However, I have some doubts about the practicality of this legislation, and I wonder if you could shed some light on how it might move from theory into practice.
My practical query concerns the requirements laid out in Clause 9 of the Bill, which state that the service provider (e.g. Facebook or Twitter) must ‘prevent individuals from encountering priority illegal content by means of the service’, ‘minimise the length of time for which any priority illegal content is present’, and ‘swiftly take down such content’ after being made aware of it. The question I would like to posit is: what specific methods of content moderation do you see as admissible under these requirements? I ask because many platforms would insist they already take these measures; nevertheless, problems with content moderation persist across all the biggest websites and apps hosting user-generated content. For example, the Facebook Files leaked by Frances Haugen last year illustrated that Facebook does have a content moderation policy — the problem is that this policy is riddled with bias and prioritises profit over the wellbeing of its users.
Whilst Facebook did at least commit to an independent audit of its community standards enforcement last year, YouTube, for instance, has done no such thing. YouTube’s algorithmic recommendation system has long been a concern among researchers of online radicalisation, and the website only changed its hate speech policy to ban neo-Nazi and Holocaust denial content in 2019 — a shockingly late decision. It then only banned ‘content that targets an individual or group with conspiracy theories that have been used to justify real-world violence’ in October 2021, despite this move presumably being aimed mainly at QAnon content, which had been online since 2017 and a concern to researchers in this area since 2018. This content also grew in popularity and production during the lockdowns of 2020, further emphasising the lateness of YouTube’s action here. As I’m sure you know, Jake Davison, the recent Plymouth shooter, also had a YouTube channel where he espoused his misogynistic views, which was only taken offline after his crime.
So, the question is: what methods of moderation and content oversight will this Bill ask of companies such as these? Will you require that they employ a certain number of content moderation staff? Or that they remove content within a specific timeframe? The ‘Plandemic’ video, for instance, which espoused conspiracy theories about the Covid-19 pandemic, was only taken down after it had amassed millions of monetised views and drawn in plenty of users — which was of course in YouTube’s best interest, no matter the nature of the video. With all this in mind, I would love to know how you envisage the practical implementation of this Bill. I am also curious about the practicality of legislating for websites hosting content in multiple countries around the world — will this make the experience of platforms hosting user-generated content different specifically in the UK? And how do these laws interact with the fact that many tech companies are based in the US?
Overall, as I said, there are some interesting ideas in this Bill, and I am intrigued to see how much of an impact it will have. As someone who values the libertarian history that is part of the internet’s development, I am conflicted about increased oversight online, but the evidence indicating the influence of de-platforming cannot be ignored. I would be interested to hear your thoughts on this Bill’s implementation, and how it might improve the British public’s experience online.