No, but how do you regulate social media platforms?

Torsha Sarkar
Nov 25, 2021


A statement of problems.

The last decade has shown us a pattern in the way governments around the world attempt to regulate speech in the digital space. Every few years, the spotlight falls on a particular instance of human rights violations stemming from the incessant proliferation of misinformation and hateful narratives on digital platforms. Spurred by political pressure, or acting out of political interest, governments propose laws and regulations that are reactionary at best, with a gamut of questionable liability frameworks and overly strict sanctions. Social media platforms denounce both the instances of violation and the legislation, all the while maintaining that they are not ‘arbiters of truth’ and should not be held liable for the acts of their users. Rewind. Repeat.

However, none of these laws seem to account for three fundamental truths about the nature of the internet today and the way we interact with it. Firstly, as online users, our relationship with these platforms is manifestly unique, and it cannot be regulated by swinging the pendulum of liability to one extreme or the other. Our regular encounters with harmful information online should not be attributed merely to individual users (or concerted group efforts), nor should we treat ‘fake news’, ‘hate speech’ and ‘conspiracy theories’ as singular, isolated phenomena. Instead, it is more useful to see these experiences as functions of the platforms’ design choices, which create a structural tendency to prioritize viral, extreme content that will inevitably pinball into crises.

Secondly, regulators’ current approach towards online platforms tends to be ‘either/or’: “either retain your safe harbour by fulfilling the conditions of your immunity”, or “be liable for the content of your users.” The former ignores how these gatekeepers have evolved from the ‘dumb conduits’ of an earlier era, expected merely to transmit content without exerting any real control, into today’s centralized norm-setters, which actively hide or remove content, algorithmically curate newsfeeds, add labels and warnings to posts, and more. The rhetoric of ‘safe harbour’, a doctrine largely adopted twenty years ago from the European Union’s E-Commerce Directive for information society service providers, may no longer reflect the current reality of social media platforms. The latter approach, of holding gatekeepers liable for the content or behaviour of their users, can have a series of unintended consequences, including skewing the power dynamic between user and platform in favour of the latter, since the threat of legal sanctions disincentivizes platforms from preserving user rights in any form. And let’s be real here: the larger social media platforms don’t care much about individual users either way.

Lastly, there continues to be substantial opacity around the decision-making processes of these platforms, along with a clear lack of ways to ascertain their fairness and accountability. As a result, the application of a platform’s internal rules of governance, especially in relation to the moderation of speech, is often ad hoc and constantly in danger of being captured by extraneous and potentially damaging considerations.

Some alternatives

It is difficult to propose regulatory mechanisms that counter these harms head-on, especially at a time when social media platforms are being weaponized like never before. One way of working around this is to reconsider the norms of content governance that we have taken for granted and framed our regulatory decisions around. For instance, ‘safe harbour’ protections around the world were underpinned by the assumption that platforms did not, and should not have to, control the content on their services. Today, we know that this is not entirely true.

So then, is it possible to shift the basis of these conditional immunities from content/speech to something else? I would argue that one alternative to a content-specific governance framework is a behaviour-specific framework. Standards should be formed around the experiences and services provided by the gatekeepers, and enforcement should focus on how well gatekeepers execute these standards, much akin to the bright-line standards for net neutrality. For platforms, this would involve fulfilling conditions both at an operations level and at a meta-level.

Operations-level conditions would involve adhering to the principle of providing a ‘safe’ experience for users of the platform. This can then be distilled into smaller, more tangible aims — including giving users greater control over the kind of content they want to see, more robust ways of responding to user complaints, and so on. Orienting social media design towards giving users more control may also allow a more objective way of dealing with harmful, hateful content, and a way out of the usual troubles regulators run into while governing speech.

Meta-level conditions, on the other hand, should adhere to principles of diversity, accountability and transparency — distilled into stricter requirements for regular human rights audits, disclosures about government-facilitated or underhanded censorship and data sharing by platforms, and diverse moderation teams to ensure user safety.

As indicated earlier, many of our laws, regulations and assumptions surrounding internet governance derive from the memory of an internet that looks very different today. Not only has the internet lost much of its earlier decentralized character, deepening gatekeeping tendencies, but governments around the world have also resorted to more authoritarian, speech-restricting policy decisions, tumbling into a crisis for user freedom on the one hand and greater offline harms on the other. It is high time that our conversations about solutions and regulations shift to reflect this new reality, and evolve into a rights-respecting framework in the long run.
