Digital Platforms, Information Disorder Proliferation and the Connecting Nexus

The increased adoption of digital technology and the rise of Artificial Intelligence have flooded the information landscape, in what experts describe as ‘information overload.’ The Global Social Media Statistics report shows 241 million new users in 2024, a 4.7% increase, averaging 7.6 new users every second.
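
The per-second figure follows directly from the annual one; a quick arithmetic check in Python, using only the numbers quoted above:

```python
new_users = 241_000_000            # reported new users in 2024
seconds_per_year = 365 * 24 * 3600

# About 7.6 new users every second, matching the report.
print(round(new_users / seconds_per_year, 1))   # -> 7.6

# A 4.7% increase of 241 million implies a base of roughly 5.1 billion users.
print(round(new_users / 0.047 / 1e9, 1))        # -> 5.1
```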

The increased utilisation and mainstream acceptance of digital platforms can be attributed to several key factors. Firstly, the COVID-19 pandemic acted as a significant catalyst, forcing governments, businesses, and individuals to shift online for communication, transactions, and other daily activities. As people adapted to remote work and social distancing, reliance on digital platforms surged. Additionally, government endorsement has played a crucial role: many governments worldwide have promoted digital transformation initiatives, providing resources, support, and infrastructure to encourage technology adoption.

Lastly, affordable access to technology has contributed significantly to this trend. This democratisation of access has opened digital platforms to a broader audience. Digital platforms enable widespread content creation and sharing: each time users interact with them, they share information about themselves or consume content from others. As a result, these platforms are prone to spreading misinformation, disinformation, and fake news, largely because of their popularity and viral nature.

According to Silas Jonathan, the Digital and Lead Research Manager at the Centre for Journalism, Innovation and Development (CJID), before the broad adoption of these platforms, the spread of misinformation and disinformation was limited to a few people. With digital democratisation, the audience net has widened, and people are now empowered to share disinformation in formats beyond text, including videos, audio, and images.

More concerning is the increase in misleading narratives during political elections and crises. For instance, during the onset and peak of the COVID-19 pandemic, social media emerged as a crucial medium for disseminating information about the virus, particularly during lockdown periods. Alongside vital health messages that educated people on how to avoid catching the virus, these platforms were also used to stoke panic, as there were false narratives about herbal cures, bathing with salt water, and the like.

In like manner, digital platforms serve as essential tools for political campaigns, providing real-time updates from polling units during election periods. However, they have also been misused, with users sharing unverified voting results, misleading claims about political alliances, and other related content during elections.

Little or No Entry Barrier

Because of their low or nonexistent barriers to entry, most digital platforms, including Facebook, X (formerly Twitter), TikTok, Instagram, Telegram, and WhatsApp, are dominated by disinformation actors who run multiple accounts to drive a given false narrative. Scholars have identified Facebook and Twitter as the hubs of fake news. Digital platforms also allow bot accounts: automated profiles designed to mimic human users. These accounts are often programmed to perform specific tasks, such as spreading information, amplifying certain narratives, or engaging with other users. While some bots serve legitimate purposes, many are used maliciously and contribute to information disorder. Their ability to evade detection through sophisticated programming further exacerbates the challenge of mitigating their impact.
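
Platforms and researchers typically flag bots by combining many behavioural signals. As a minimal illustration of one such signal, the sketch below flags accounts whose posting rate is implausibly high for a human; the toy log and the 60-posts-per-hour threshold are illustrative assumptions, not any platform’s real detection criteria.

```python
from datetime import datetime, timedelta

# Toy posting log: (account, timestamp) pairs, invented for illustration.
now = datetime.now()
posts = [("bot_like", now + timedelta(seconds=10 * i)) for i in range(120)]
posts += [("human_like", now + timedelta(minutes=37 * i)) for i in range(8)]

def posts_per_hour(log, account):
    """Average posting rate for one account over its active span."""
    times = sorted(t for a, t in log if a == account)
    if len(times) < 2:
        return 0.0
    span_hours = (times[-1] - times[0]).total_seconds() / 3600
    return len(times) / max(span_hours, 1e-9)

# Flag accounts posting far faster than a plausible human rate.
# The 60-posts-per-hour threshold is an illustrative assumption.
for account in ("bot_like", "human_like"):
    rate = posts_per_hour(posts, account)
    verdict = "suspected bot" if rate > 60 else "looks human"
    print(f"{account}: {rate:.1f} posts/hour -> {verdict}")
```

Real detection also weighs account age, network structure, and content similarity, which is why sophisticated bots that randomise their timing can slip past simple rate checks.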

Silas Jonathan described this as the “ease of opening social media,” in contrast to opening a bank account, which requires several processes and documentation. This ease has created the problem of fake accounts misleading people.

Content Creation Incentive

The creation, duplication, and scheduling of content have been made easier by the democratisation and integration of artificial intelligence and other digital tools. This digital amplification has incentivised the creation of false content in diverse formats, such as fake text, images, and even audio and video deepfakes, to spread false narratives at little or no cost and with profound ease. Examples include generative AI tools like ChatGPT, DeepSeek, and Google Gemini, alongside scheduling tools such as Hootsuite and Buffer that let users queue social media posts in advance. Silas indicated that these content creation features enable anyone, even without prior writing or reading skills, to share misinformation without bottlenecks and in several forms, as the sketch below illustrates. In his words, this “diversity and openness of social media contribute to the spread of misinformation.”
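
As a rough illustration of how little effort this duplication takes, the sketch below queues one message across several accounts on a staggered schedule. `post_to_platform` is a hypothetical stand-in for a platform’s posting endpoint, and the accounts and timings are invented for the example.

```python
import time
from datetime import datetime, timedelta

# Hypothetical stand-in for a platform's posting endpoint; real
# platforms expose similar calls through their official APIs.
def post_to_platform(account, message):
    print(f"[{datetime.now():%H:%M:%S}] {account} posted: {message}")

# One piece of content, duplicated across several accounts and
# queued at staggered times: low-effort, large-reach amplification.
accounts = ["account_a", "account_b", "account_c"]
message = "Example claim, repeated verbatim across accounts..."

schedule = [
    (datetime.now() + timedelta(seconds=2 * i), account)
    for i, account in enumerate(accounts)
]

for post_time, account in schedule:
    # Wait until each scheduled slot, then publish.
    time.sleep(max(0.0, (post_time - datetime.now()).total_seconds()))
    post_to_platform(account, message)
```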

The Role of Algorithms

With the adoption of AI and algorithmic amplification on digital platforms, content reach is effectively unlimited. Disinformation actors who could once reach only a handful of people in person can now reach millions from the comfort of their homes. These algorithms often prioritise content that generates high engagement, such as likes, shares, comments, and quotes, over factual accuracy.

Worse, users are sometimes trapped in a web of information through personalised feeds. Because these personalisations are based on users’ behaviour, users are less likely to encounter diverse viewpoints or fact-checked information. Further, malicious actors can strategically exploit algorithmic weaknesses to promote misinformation, using clickbait titles and emotionally charged imagery to gain visibility.
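
To make the “engagement over accuracy” point concrete, here is a minimal sketch of an engagement-weighted ranking. The posts, weights, and scoring formula are illustrative assumptions, not any platform’s actual algorithm; the key detail is that the `fact_checked` field never enters the score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    fact_checked: bool  # never consulted by the scorer below

def engagement_score(post):
    # Shares and comments weighted above likes because they drive further
    # distribution; the weights are illustrative, not any platform's formula.
    return post.likes + 3 * post.shares + 2 * post.comments

feed = [
    Post("Verified health advisory", 120, 10, 5, fact_checked=True),
    Post("Outrage-bait false cure claim", 90, 400, 250, fact_checked=False),
]

# Ranked purely by engagement, the false but viral post comes first:
# accuracy is simply not part of the objective.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.text}")
```

Real recommender systems are far more complex, but the incentive the paragraph describes is visible even in this toy version.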

Financial Incentive Programs

Content monetisation programs on most digital platforms have also incentivised people to spread false information for financial gain. This DAIDAC research found that the ad-revenue sharing program on X (formerly Twitter) contributes to the escalation of disinformation. The findings indicate a correlation between the ad-revenue program and increased disinformation spread, stressing the need for strategies that engage users and uphold the platform’s authenticity.

Way Forward

Digital platforms have made efforts to reduce the spread of misinformation, but false narratives continue to thrive within these spaces. For instance, in 2023, X (formerly Twitter) introduced the Community Notes feature, which encourages users to collaboratively add context to potentially misleading posts, thereby creating a more informed environment. However, although this feature has demonstrated a high accuracy rate of 97.5%, only 8.3% of proposed notes actually become visible. This limited visibility restricts their effectiveness in combating misinformation, likely due to low user engagement and hesitance to interact with the notes, which may stem from uncertainty about the credibility of contributors. The previously mentioned DAIDAC research found that 80% of a sample of 100 tweets flagged by independent fact-checking organisations between June 2023 and March 1, 2024, remained accessible online.

In response to the issue, Silas Jonathan shared strategies from his experience in digital investigations, emphasising the importance of consistently monitoring social media for misinformation and disinformation. He highlighted that understanding “who is saying what, where, and when” can facilitate quick responses to counter disinformation. Additionally, he called for improved collaboration among digital platform owners, third-party fact-checkers, and the government to establish a system of direct punishment for individuals who repeatedly exploit these platforms for personal gain.
