On 24 August 2024, Pavel Durov, co-founder of the Telegram messaging service and the Russian social network VK, was arrested after landing at Le Bourget Airport in France. The arrest was widely reported as part of a preliminary investigation by the French National Judicial Police into the platform’s alleged tolerance of criminal activity. A few days later, Durov was released. A few weeks after that, Telegram announced a new crackdown on illegal content: removing more “problematic content”, taking a more proactive approach to complying with government requests, and replacing the ‘People Nearby’ feature with ‘Business Nearby’ in a bid to promote only legitimate businesses while reducing illicit activities. The chain of events refuelled the debate over what it all means for social media companies, or ‘Big Tech’, in the context of content moderation.
No sooner had social media been invented than questions of moderation and accountability began to emerge. This is unsurprising, because every area of human social interaction requires ground rules or norms to function properly, whether implicit or explicit. Alongside these rules and norms come assigned responsibilities and systems of accountability for the very likely event that the rules are broken or the responsibilities neglected.
In the case of social media platforms, the conversation is much more complicated for several reasons. First, it involves multiple societies, cultures, states and jurisdictions. Second, it is fraught with competing perspectives and concerns, some of which include questions about the platforms’ responsibility for speech made on the platforms, users’ rights to privacy and safety, and the acceptable approaches to enforcing them.
Specifically, in the debate regarding content moderation and platform accountability, the tension between privacy and safety is a major concern, especially on closed platforms (such as private or invite-only social networks, messaging applications, and other online spaces where access to content is limited). These platforms face the challenge of balancing user privacy with the need to moderate harmful content for safety. To better appreciate the dilemma, however, it is crucial first to understand the terms ‘content moderation’ and ‘platform accountability’.
Content Moderation and Platform Accountability
Content moderation, according to the Trust & Safety Professional Association (TSPA), involves examining user-generated content posted online to make sure it complies with the policies or guidelines the platform has set on what can and cannot be shared. These policies are often referred to as “community standards.” Depending on the scope, sophistication, and level of abuse, as well as the overall operation of a platform, content moderation and policy enforcement can be manual (carried out by humans), automated, or a combination of both.

Platform accountability, also known as platform responsibility, according to the Association for Progressive Communications, emerged in the context of the expanding role that social media platforms play in managing, moderating and curating online content. It refers to digital platforms’ ethical and legal responsibility to protect the safety, accuracy, and fairness of the content published through their services. The concept emphasises that platforms are more than passive information conduits; they are active participants in shaping public discourse and culture, and they must accept responsibility for the content they host and promote.
The Uniqueness of Closed Platforms
On open social media platforms like Twitter/X, Facebook, YouTube, TikTok, and Instagram, where shared content is readily and publicly available, content moderation, although still challenging, does not brush up against the question of privacy in the way it does on closed social media platforms.
Closed platforms (WhatsApp, Telegram, Signal, Discord) often emphasise user privacy, promising secure communication and data protection. For example, end-to-end encryption (E2EE) in messaging apps like WhatsApp or Signal ensures that only the sender and recipient can read messages, preventing even the platform itself from accessing the content. This level of privacy is crucial for protecting users from surveillance, data breaches, and unauthorised access.
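To make the mechanics concrete, the sketch below uses the open-source PyNaCl library to show the E2EE principle in miniature: keys are generated and kept on users’ devices, so the platform only ever relays ciphertext it cannot read. This is an illustration of the idea only, not the far more elaborate Signal Protocol that WhatsApp and Signal actually deploy; the message text and variable names are invented for the example.

```python
# Minimal sketch of the end-to-end encryption idea using PyNaCl (libsodium bindings).
# Illustrative only: real messengers use the Signal Protocol with ratcheting keys.
from nacl.public import PrivateKey, Box

# Each user generates a key pair; the private half never leaves their device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"Meet at noon")

# The platform relays only `ciphertext`; without a private key it cannot decrypt it.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"Meet at noon"
```

The point of the sketch is the asymmetry it makes visible: decryption requires a private key, and in an E2EE design the server simply does not hold one.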
However, this privacy can also create challenges for content moderation. When platforms cannot access user communications, they are unable to proactively detect and remove harmful content such as hate speech and misinformation, or illegal activity such as child exploitation and terrorism. This limitation has led to criticism that closed platforms enable harmful behaviour by prioritising privacy over safety.
Safety on digital platforms requires proactive measures to identify and remove harmful content. Open platforms (e.g., Twitter, Facebook) often use automated tools and human moderators to scan and filter content, but closed platforms face unique hurdles due to encryption and privacy protections. The lack of visibility into user communications on closed platforms can make it challenging to address issues like information disorder, hate speech, harassment, and other illegal activities.
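For contrast, the toy sketch below shows the general shape of the server-side screening open platforms can run precisely because they see content in the clear: matching post text against flagged terms and attachments against a database of known illegal media. The keyword and hash lists here are empty placeholders invented for the example; production systems rely on large curated databases, perceptual hashing such as PhotoDNA, and machine-learning classifiers. On an E2EE platform the server only ever holds ciphertext, so nothing like this can run on its side.

```python
# Toy illustration of server-side screening on an open platform.
# Not any platform's real system; lists are placeholders for the example.
import hashlib

KNOWN_ILLEGAL_HASHES: set[str] = set()   # would hold SHA-256 digests of known illegal media
FLAGGED_KEYWORDS = {"example-slur", "example-scam-phrase"}  # placeholder terms

def screen_post(text: str, attachment: bytes = b"") -> list[str]:
    """Return reasons a post should be queued for human review (empty list = nothing flagged)."""
    reasons = []
    if any(keyword in text.lower() for keyword in FLAGGED_KEYWORDS):
        reasons.append("matched flagged keyword")
    if attachment and hashlib.sha256(attachment).hexdigest() in KNOWN_ILLEGAL_HASHES:
        reasons.append("matched known-illegal media hash")
    return reasons

print(screen_post("an ordinary post"))  # [] -> nothing to send for review
```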
The Balancing Act: Privacy vs Safety in the African Ecosystem
The core dilemma lies in finding a balance between protecting user privacy and ensuring platform safety. Critics argue that weakening encryption or increasing surveillance to enable content moderation undermines the fundamental right to privacy. On the other hand, advocates of content moderation warn that failing to moderate harmful content can lead to significant societal harm, raising questions about platform accountability.
In many African countries, where personal freedoms and democratic rights are often under threat, the question of balancing privacy and safety is even more complicated. One challenge stems from undemocratic governments enacting repressive laws aimed at curtailing freedoms and then mandating that social media companies comply with them.
For example, Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation of 2020 mandates that platforms police content, giving them 24 hours to remove disinformation or hate speech. Similarly, in 2020 Nigerian lawmakers proposed a bill on disinformation that would have placed undue pressure on platforms to police content; the bill has since been abandoned after public protests and outcry.
In addition, social media corporations’ lack of transparency in content moderation choices can worsen already high political tensions, as can their refusal to take responsibility for the harmful effects of their inaction or negligence on their platforms. In 2023, the New Internationalist reported that Meta, Facebook’s parent company, faced legal action in Nairobi, Kenya, over the company’s role in a conflict situation. Petitioners in the case argued that the business amplified posts that resulted in ethnic violence and killings in Ethiopia because of how Facebook’s algorithms recommended content. Thousands of civilians were killed in Ethiopia in different conflicts over the two years preceding the lawsuit, the most prominent of which occurred in the Tigray region.
According to this DAIDAC article by Akintunde Babatunde, Executive Director of the Centre for Journalism Innovation and Development (CJID), at the heart of the conversation is “the question of digital sovereignty. Countries like Nigeria, Brazil, and Kenya are asserting their right to regulate digital platforms within their borders, raising important questions about whether digital platforms should be subject to the laws of individual countries or operate within a more globalised, self-regulated framework”.
In the end, a more nuanced strategy is needed to secure a rights-based model for content moderation in African countries. Several regulatory models have been proposed, including legislation, self-regulation, and co-regulation (effectively “self-regulation with a regulatory backstop”). However, the multistakeholder model has been the most popular, as it seeks to broaden participation in content moderation rules at a more granular level while holding platforms accountable for their role, or for neglect of their responsibilities.