Text: Izzy Copestake
A new Online Safety Code has been drafted… but where is the mention of Ireland’s algorithm problem?
Today, the media regulator, Coimisiún na Meán, released an updated draft of the Online Safety Code to be submitted to the European Commission for assessment. If approved, the code will come into effect later this year and will apply to video-sharing platforms based in the EU. The Online Safety Code will be legally binding, and platforms could face fines of up to €20 million for breaking the rules.
A large section of the code focuses on the protection of children online. If it is approved, social media companies will have to actively protect children from harmful content, such as material promoting eating disorders or suicide, as well as cyberbullying. There will also be new regulations clamping down on platforms that fail to prevent the circulation of illegal content, including incitement to hatred or violence. Companies may also have to introduce age checks to prevent children from accessing pornographic content.
However, many feel there is a glaring omission from the proposed code: recommender algorithms. These algorithms use social media users’ personal data, including search history, location, and gender, to determine what content they are shown. Just last month, a DCU study demonstrated how such algorithms have been used to bombard teenage boys with misogynistic content.
The study tracked the content shown to blank smartphones with accounts registered as belonging to teenage boys. It found that “masculinist, anti-feminist and other extremist content” was served to the accounts within minutes, irrespective of whether those accounts had searched for “male supremacist-related content” or not.
Today, the Irish Council for Civil Liberties (ICCL) expressed “dismay” that the “toxic” algorithms which direct harmful content into the feeds of young social media users are not covered by the newly updated draft of the Online Safety Code. Researchers and campaign groups alike have warned that these algorithms cause content promoting self-harm and eating disorders to appear in vulnerable users’ feeds. Although the safety of recommender systems (the algorithms in question) was raised during consultations, it does not appear in the final draft.
More widely, we have seen how these algorithms have been used to circulate far-right content online. Dr Johnny Ryan, a senior fellow at the ICCL, said: “We are every week hearing new scandals about the harm these systems cause to our teens by promoting suicide and self-loathing.”
“We are seeing how these systems turn communities against each other, which is so important now that we are facing an election across Europe and elections across member states. And we’re particularly dismayed because in this case, it is Ireland that was leading by reining in Big Tech.”