Government regulators walk a fine line between online safety and censorship

On the 10th of January, the government of Ireland published the draft Online Safety and Media Regulation Bill, which Irish Communications Minister Richard Bruton heralded as “the start of a new era of accountability”.

The bill aims to do what tech giants such as Facebook have failed to do – make the internet a safe space for all.

Children and other vulnerable groups are continuously exposed to harmful content, fake news is on the rise, election integrity has been compromised on both sides of the Atlantic and the private data of millions is susceptible to falling into the wrong hands.

To address these issues, Dublin wants to take matters into its own hands. Ireland’s draft bill recommends appointing an online safety commissioner within a new Media Commission, who will be responsible for ensuring that online services such as social media platforms, forums and e-commerce sites follow the stipulated safety guidelines. Failure to comply can result in fines and the shutdown of the site in question.

The Irish government is not the first and will not be the last to introduce such legislation: just last year, Australia passed a bill imposing criminal liability on social media executives who fail to remove ‘abhorrent violent material’. In April 2019, the UK Home Office and the Department for Digital, Culture, Media and Sport jointly published the most comprehensive proposal yet, the Online Harms White Paper.

However, at a time when the need for better regulation of cyberspace and the imperative to preserve users’ freedoms are in constant tension, it is no surprise that Ireland’s move has caused a stir. It is unclear what methods and parameters regulators will use to moderate online content, or how they will do so without restraining online debate and stifling technological innovation – a key requirement of the 21st century.

Can innovations in self-regulation replace government regulation?

Perhaps if tech companies took it upon themselves to better regulate the content on their sites and apps, blanket government regulation would not be necessary. But the fact of the matter is that tech giants such as Facebook, Twitter and others have thus far consistently fallen short.

At the same time, new and rising apps have learned the necessary lessons from the struggles of Facebook and other apps and sites of the old guard, and have successfully made moderating their content – and a safe user experience – their unique selling point.

An example is MeWe, an app whose motto is ‘like Facebook, but with privacy’. At the core of its operations is a Privacy Bill of Rights ensuring that users’ data is not compromised. By stringently protecting users’ personal data and disallowing advertisements and content manipulation, the app is gaining popularity.

Another app, Yubo, is leading the way in developing comprehensive algorithms that effectively detect breaches of its community rules. The French start-up’s live-streaming chat app is geared towards a youthful Gen Z audience.

To tackle issues like nudity, trolling and misogynistic language, it uses a mixture of technical and human resources. By taking screenshots of live streams every 2 seconds and analysing them with its algorithm, the app can react quickly to violations of community standards, intervening in real time and sending the user a message about the violation.
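In outline, such a pipeline is simply a sampling loop feeding a classifier. Below is a minimal Python sketch of the idea; Yubo has not published its implementation, so `capture_frame`, `classify_frame` and `warn_user` are hypothetical placeholders, and the 2-second cadence is the only detail taken from the description above.

```python
import time

SAMPLE_INTERVAL_SECONDS = 2   # cadence described above
VIOLATION_THRESHOLD = 0.9     # assumed confidence cut-off, to be tuned

def capture_frame(stream_id: str) -> bytes:
    """Placeholder: grab the current frame of a live stream."""
    return b""

def classify_frame(frame: bytes) -> dict:
    """Placeholder: a trained model would return per-category scores."""
    return {"nudity": 0.0, "trolling": 0.0}

def warn_user(stream_id: str, category: str) -> None:
    """Message the streamer in real time, naming the rule they broke."""
    print(f"[{stream_id}] warning sent: {category}")

def moderate_stream(stream_id: str) -> None:
    """Sample the stream every few seconds and flag rule breaches."""
    while True:
        scores = classify_frame(capture_frame(stream_id))
        for category, score in scores.items():
            if score >= VIOLATION_THRESHOLD:
                # Educational first response rather than an instant ban.
                warn_user(stream_id, category)
        time.sleep(SAMPLE_INTERVAL_SECONDS)
```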

Yubo intentionally refrains from doling out instant and severe punishments to its young users. Instead, it aims not just to reduce harmful content online, but also to educate its target group – teens and young adults – on what is acceptable and what is not. It thereby creates a long-lasting positive effect on cyberspace, carried forward by Gen Z.

Innovation or blanket regulation?

These examples highlight a growing pro-privacy and pro-safety trend in the new wave of social media. Such up-and-coming platforms are using the latest innovations in artificial intelligence and machine learning: AI-based content moderation systems can be ‘taught’ to detect harmful content online and reduce the need for human moderation.
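To make the ‘teaching’ concrete, here is a toy text-moderation example in Python using scikit-learn. The four hand-written training messages and the choice of model are purely illustrative assumptions; a production system would learn from vast sets of human-reviewed examples and use far more capable models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = violates community rules, 0 = acceptable.
texts = [
    "you are worthless, nobody wants you here",
    "go away, everyone hates you",
    "great stream, thanks for the tips!",
    "loved the music you played today",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a minimal 'teachable' moderator.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new messages; anything above a tuned threshold is routed to review.
new_messages = ["nobody wants you here", "thanks for the stream"]
scores = model.predict_proba(new_messages)[:, 1]
for message, score in zip(new_messages, scores):
    print(f"{score:.2f}  {message}")
```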

Known as Viztech, this sector has the potential not only to identify problematic content but also to interpret more nuanced text and images the way humans do. As such systems develop further, they should be made accessible to online platforms of all sizes.

This opens up the scope for better self-policing of online services and reduces the need for government interference while promoting socially positive online engagement.

Another positive aspect of a more tech-based approach is that it reduces the risk of over-compliance, where apps and websites pre-emptively block content that may or may not be harmful simply to err on the safe side.

Threatening to block sites that do not comply with the specified regulations is a move not far removed from state-sponsored censorship. The Queen’s Speech of 2019 spoke of making the UK the safest place online while maintaining its attractiveness as a place for technology companies to operate. If the internet is controlled in such a manner, it remains to be seen to what extent these two goals will be mutually compatible.

While there is increasing support for greater regulation online, people are also apprehensive about the curbing of their rights that such measures could entail.

Encouraging technological innovation can make content moderation procedures more transparent and provide a mechanism to prevent the centralisation of power in the hands of the government.

Next-generation apps and online sites like Yubo and MeWe present a strong case for choosing self-regulation over a government wielding an iron fist.