In December 2018, the Ministry of Electronics and Information Technology (MeitY) floated a draft set of amendments to the intermediary liability guidelines [“the 2018 draft rules”]. These amendments were proposed under the power vested in the government by Section 79 of the Information Technology (IT) Act, 2000, which is the core ‘intermediary liability’ provision in India.
This draft was open for public comments, and sought to significantly change the way we interact with the internet and its many gatekeepers and gateways, i.e., search engines, social media platforms, blogs, forums, payment aggregators, and so on (all encompassed under a broad definition of ‘intermediary’ within the IT Act). Since then, a significant amount of discussion has taken place on many aspects of the 2018 draft rules, especially the ones that raised more serious concerns for freedom of expression online.
Below, I list some resources and pieces I have authored/co-authored, which might give a glimpse of these discussions. Moreover, the new ‘social media regulation’ rules, notified by the government on 26 February, are essentially an update of the 2018 draft rules. The older discussions therefore, hopefully, offer a nice segue into understanding what changed this time around.
(Note: These resources all cluster around issues of content moderation and freedom of expression online, since these are the issues I predominantly work on. I subsequently intend to create another list, with due permission from the respective authors, that fills in the gaps in this one.)
- What did the 2018 draft rules actually say? First up, we have a quick rundown of the older rules, via this blog post for CyberBRICS. It touches on most of the changes that the older rules had introduced, raises some questions for the future of intermediary regulation, and ends on a cautiously optimistic note.
- Reduced takedown timeframe: In the 2018 draft rules, the government had suggested that these internet platforms would have 24 hours to respond to content takedown notices, as opposed to the existing 36-hour timeline. I authored a research paper which looked at why this might not be the best idea, and how a shorter compliance timeline might affect the way intermediaries treat our rights, and skew the market towards the already-giant incumbents. I also summarized these findings in a shorter blog post, which originally appeared on the CyberBRICS website.
- Utilization of artificial intelligence: One of the more prominent changes that the draft rules had suggested for intermediaries was the mandatory use of artificial intelligence or machine learning tools to filter out broadly ‘unlawful’ content. Shweta Mohandas and I wrote a blog post for the RGNUL Student Research Review (RSRR), where we went over the advantages and disadvantages of mandating the use of such technologies, and fleshed out some recommendations that might improve the overall obligation.
Broader questions and fun things
- Philosophies of content regulation: This article for Seminar Magazine, co-authored with Arindrajit Basu and Karthik Nachiappan, takes a bird’s-eye view of the issues surrounding content regulation and intermediary liability in general. Not only do we look at the Section 79 issue, but we also touch on Section 69A of the IT Act, which has been in the news in the past for being a ready-made tool for censorship (I intend to do a separate discussion post on Section 69A). Additionally, we look at global responses to intermediary liability, and flesh out the trouble with our existing assumptions and approaches towards this issue.
- Transparency Reporting: Over the past year and a half, I have become a very vocal proponent of the proper utilization of transparency reports in understanding the length and breadth of government censorship and social media platforms’ enforcement of community standards. Gurshabad Grover, Suhan S, and I developed a methodology to analyze the data produced by these intermediaries in the Indian context, which you can find here. We also participated in the consultation for the Santa Clara Principles, which serve as a stand-in for international best practices in transparency reporting. Additionally, I wrote a blog post arguing that these reports continue to offer much for researchers like us. Especially in times of the pandemic, when platforms are rapidly removing a plethora of misinformation, open and transparent data sharing on these removals can tell us much about the causal links between problematic content online and regulatory responses.
On the last point, it is interesting to note that while the 2018 draft rules did not have any provisions for legally mandated transparency reporting, the new 2021 rules do! I intend to analyze them in more detail soon.
In the coming posts, I will be highlighting the changes introduced by the 2021 rules, so keep an eye out for those!