Applying Human Rights Standards in Content Moderation on Social Media

In increasing numbers, we use social media platforms to find the information and ideas that structure the agenda and the content of public discussions (Newman et al. 2019). Social media giants have risen to the top of their industry and now control much of what users see, hear, or read on a regular basis. Content moderation and distribution, i.e., the composition and accessibility of social media content, are carried out by a combination of human and algorithmic decision-making processes. In general, however, current practices are not transparent and offer little recourse to users whose content has been removed or demoted.

This has become an important issue for democratic societies. In legislative, policy, and academic circles, the responsibilities of social media giants are being discussed. However, many initiatives do not adequately account for freedom of expression or other fundamental rights. International experts on freedom of expression are unanimous in their view that regulating speech through contracts (a company controlling its platform on the basis of its terms of service or community standards) does not provide sufficient transparency or protection of freedom of expression and other human rights. Content moderation obligations in legislation, such as the German Network Enforcement Act, tend to create systems in which private actors must apply criminal law and national legal provisions within short deadlines, under the threat of heavy fines. These systems fragment the legal obligations of social media companies and leave users with little recourse against the hasty removal of content.

The media landscape, and the diverse roles played by tech companies within it, have evolved at an accelerated pace, and this will continue. Democracy now demands an ongoing collective learning process to organize online content moderation in a way that is compatible with international standards on freedom of expression. In this context, it is becoming increasingly apparent that a public monitoring system for social media content moderation is needed.

ARTICLE 19 – a global leader in free speech – has proposed the creation of the “Social Media Council” (SMC), a model for a multi-stakeholder accountability system that would provide an open, transparent, and independent forum to address content moderation on social media platforms on the basis of international human rights standards. The SMC model takes a voluntary approach to the supervision of content moderation: participants (social media platforms and all stakeholders) sign on to a system that does not create legal obligations. The model relies on the voluntary compliance of platforms which, by signing up, agree to follow and implement in good faith any decisions or recommendations made by the SMC. David Kaye, the UN Special Rapporteur on Freedom of Opinion and Expression, endorsed this proposal in April 2018, recommending that “all segments of the ICT sector that moderate content or act as gatekeepers should make the creation of industry-wide accountability mechanisms (such as a social media council) a priority” (UN General Assembly 2018, para. 72).

ARTICLE 19 originally envisioned the SMC with an ambitious scope: a network of national and regional SMCs entrusted with providing general guidance to social media platforms and deciding individual complaints brought by users. They would operate on the basis of international human rights standards and coordinate through the mediation of an International SMC. Such multi-stakeholder, transparent, accountable, and independent fora could weave freedom of expression into all aspects of online content moderation and distribution across all social media platforms, from integrating international standards into decisions to delete or demote content to ensuring exposure to the broadest possible diversity of information and ideas through a form of human-rights-optimized algorithmic distribution.

ARTICLE 19, together with the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression and Stanford University’s Global Digital Policy Incubator, presented the proposal at a meeting of academics, civil society organizations, and social media companies. This led to intense discussions, as recorded in the conference report (Global Digital Policy Incubator, ARTICLE 19 & Kaye 2019). The conference, and subsequent meetings, have shed light on questions big and small raised by the creation of an SMC; see, for example, the comments made by the Electronic Frontier Foundation (McSherry 2019). Different visions exist about the roles and functions this new mechanism should play, where it should be located, and how it should interact with other initiatives such as Facebook’s creation of an oversight board (ARTICLE 19 2019).

The first topic of discussion should be the rules that will govern the moderation of content. There is growing agreement that international human rights standards provide a universal legal framework, but different approaches to applying this set of rules are possible. The SMC could refer directly to these standards, with authoritative interpretations by international and regional courts and special mechanisms providing the guidance needed to inform its decisions. Alternatively, a code of human rights principles could be adopted for the purpose of content moderation: a code that adapts international standards to online content moderation would allow the SMC to operate under stricter guidelines than a mere reference to international standards.