Social media—How much is too much policing?

With more and more people relying on the internet to function, social media has fast become a necessity in our everyday routines. Ofcom revealed in its August 2018 communications market report that ‘one in five people spend more than 40 hours a week online’, with social media ranking second among internet activities, and half of all commuters surveyed saying they access social media more today than two years ago. Social media is completely embedded in our daily existence. As we increasingly use it not just for sharing our ‘highlight reel’ with the world but for more sophisticated needs, there has been growing concern around regulation and privacy on these social hubs. This article reviews the reasons society is pushing for regulation, and the intricate tension between regulation sliding into over-policing and breaching individuals’ right to freedom of speech.

On 8 April 2019, the Department for Digital, Culture, Media and Sport (DCMS) released an Online Harms White Paper. The Paper includes an outline of new online safety measures which are intended to ensure companies are responsible for their users’ online safety. The paper also proposes the introduction of an independent regulator to hold companies to account for tackling online harms.

These recommendations follow the case of 14-year-old Molly Russell, who took her own life in 2017. It was later found that her Instagram account contained ‘distressing material about depression and suicide’. Molly’s father, Ian Russell, believes the social media giant Instagram is partly responsible for her death. As well as attempting to tackle material that advocates self-harm and suicide, which came to light following Molly’s death, the paper also aims to prevent other ‘illegal and unacceptable content and activity’ online, such as:

• radicalisation and broadcasts by terrorist groups

• the use of disinformation through hostile actors and ‘echo chambers’

• the selling of weapons and drugs by criminal gangs

• harassment, bullying and intimidation—even if this behaviour is not considered illegal in all circumstances

Current social media regulation

At present, social media companies are left to self-regulate. For example, the BBC reported that YouTube uses machines, as well as 10,000 employees, to monitor the content hosted on its website. In doing so it took down 7.8m videos between July and September 2018, ‘with 81% of them automatically removed by machines, and three-quarters of those clips never receiving a single view’. Facebook, which also owns Instagram, has said it has ‘30,000 people around the world working on safety and security’, and removed 15.4m pieces of violent content between October and December 2018. Though these are good examples of self-regulation, the current model still raises many issues. Some of the most notable, as raised by ARTICLE 19, are: can social media companies actually keep up with regulating the sheer volume of content uploaded to their websites? Who should be liable for illegal content published online? And how can you regulate the publishing of ‘hate speech’ without restricting freedom of expression and breaching human rights?

As it stands, the DCMS proposes the following powers for the regulator:

• issuing substantial fines

• imposing liability on individual members of senior management

• requiring annual transparency reports—outlining the prevalence of harmful content on platforms and what counter measures are being taken to address these

• requiring additional information—including on the impact of algorithms in selecting content for users and to ensure companies are proactively reporting on both emerging and known harms

However, there is not universal support for the reform.

Opponents hold that keeping social media unregulated better protects freedom of speech. If the government were empowered to regulate social media, they argue, it could also begin to regulate speech in a political manner, valuing one moral code over another. For example, could a Labour government regulate Conservative content? Further, government watchdogs, whistleblowers and organisations such as WikiLeaks could be restrained as the government begins deciding what is ‘fake’ and what is ‘real’ news. In short, many in this camp maintain that regulating social media would breach freedom of speech without producing any reduction in fake news.

However, in the wake of social media crises all over the world, some advocates have argued that the law is not adequately equipped to curb the surge of social media and respond to its growing implications for communication and safety. Social media has raised serious concerns, including loss of privacy, the rise of cybercrime, cyberbullying and terrorism, and threats to the structure of democracy.

While there is merit in both arguments, I take a more neutral viewpoint. I believe in a concept called co-regulation: some regulation, alongside enforcement of existing legislation, is necessary to strike the balance between promoting free speech and protecting public interests. However, there should still be a clear limit on government interference with activity on social media, to ensure that freedom of speech is sustained.

About the author:

Hannah is one of the Future of Law blog’s digital and technical editors. She graduated from Northumbria University with a degree in History and Politics and previously freelanced for News UK, before working as a senior news editor for LexisNexis.