
Artificial intelligence has been regarded as a transformational tool for media practice. At the same time, the advent of different AI technologies has made spreading misinformation easier, faster, and cheaper. Yet AI technology also provides tools for detecting false information: advanced AI-driven systems can analyse language usage, patterns, and context to support content moderation, verify facts, and detect misinformation. Tijani Mayowa, a disinformation research expert, notes that tools such as Deepware can help detect AI-generated videos, with caveats. “The result is not always 100% accurate. Sometimes it is hit or miss, depending on how good the fake is, what language it is in and a few other factors,” Mayowa explained.
Spot the Deepfake: the image on the left is real, while the one on the right is an AI-generated face. Can you tell the difference?
Sam Reardon, who has written on the impact of deepfakes on journalism, explains that AI technologies can produce convincing fake images, text, and video that are hard to distinguish from real content. When people trust information from different sources without fact-checking it, false information spreads. AI has become inseparable from the media space, bringing both advantages and unique challenges. Previously, only a few people could spread misinformation at scale; now, platforms such as social media and the continuous creation of new AI tools have made it easy for almost anyone.
As the media space continues to be flooded with false information from different perpetrators, artificial intelligence can help separate false news from accurate news, either by learning from claims that have already been fact-checked or by recognising patterns in new content. For instance, as people increasingly get their news and information from social media, SOSA has helped Brazil fight misinformation through strategic innovation and AI-driven tools, fostering greater civic engagement. Through real-time monitoring, bot and fake-profile detection, and language and intent analysis, SOSA has been able to detect misinformation.
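To make the first approach concrete, here is a minimal sketch of matching a new claim against a small store of previously fact-checked claims using sentence embeddings. The model name, similarity threshold, and example claims are illustrative assumptions, not details of SOSA's or any other organisation's actual system.

```python
# A minimal sketch: compare a new claim against previously fact-checked
# claims with sentence embeddings. Model, threshold, and claims are
# illustrative assumptions, not any organisation's real pipeline.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # compact general-purpose embedder

# Hypothetical store of claims a fact-checking team has already rated.
fact_checked = [
    ("Drinking hot water cures viral infections.", "FALSE"),
    ("The flood footage circulating online was filmed in another country.", "FALSE"),
    ("The central bank raised interest rates in May.", "TRUE"),
]

def match_claim(new_claim: str, threshold: float = 0.7):
    """Return (claim, verdict) for the closest stored claim, if similar enough."""
    claim_emb = model.encode(new_claim, convert_to_tensor=True)
    db_embs = model.encode([c for c, _ in fact_checked], convert_to_tensor=True)
    scores = util.cos_sim(claim_emb, db_embs)[0]  # cosine similarity to each stored claim
    best = int(scores.argmax())
    if float(scores[best]) >= threshold:
        return fact_checked[best], float(scores[best])
    return None, float(scores[best])  # no confident match: route to human fact-checkers

match, score = match_claim("Hot water is a proven cure for viruses.")
print(match, round(score, 2))  # likely reuses the first FALSE verdict
```

In practice, systems like this pair retrieval with human review: a confident match reuses the existing verdict, while anything below the threshold is queued for fact-checkers.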
How Large Language Models (LLMs) help in detecting misinformation
The introduction of Large Language Models (LLMs) such as ChatGPT and Llama 2 is making significant waves in combating misinformation. Because LLMs are trained on vast amounts of text, they can flag misinformation more readily, and they show particular promise in making debunking responses more persuasive and effective. Media researchers Canyu Chen and Kai Shu discuss combating misinformation with LLMs from three perspectives: LLMs for misinformation detection, LLMs for misinformation attribution, and LLMs for misinformation intervention.
Several researchers have investigated LLMs for misinformation detection using methods such as prompting GPT-3, ChatGPT, and GPT-4. Because LLMs cannot always supply up-to-date or adequate information on their own, some authors augment them with external knowledge sources or tools. LLMs assist in detecting misinformation by analysing language patterns and usage; this includes using sentiment analysis to identify the emotionally charged language common in manipulative content, as well as cross-referencing content against different sources for validation. AI also plays a crucial role in fact-checking news. For instance, the Dubawa Chat Bot, a chatbot that fact-checks and verifies claims in real time through WhatsApp, one of the biggest channels for spreading misinformation, has been effective in fighting false claims.
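As a rough illustration of the prompting approach described above, the sketch below asks a chat model to label a single claim. The model name, prompt wording, and label scheme are assumptions for demonstration, not the protocol used in the studies cited.

```python
# A rough sketch of prompt-based misinformation detection, in the spirit of
# the prompting studies mentioned above. Model name, prompt wording, and
# label scheme are assumptions for illustration, not a tested methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a fact-checking assistant. Classify the claim below as "
    "LIKELY-TRUE, LIKELY-FALSE, or UNVERIFIABLE, then give a one-sentence "
    "rationale. If the claim needs information newer than your training "
    "data, answer UNVERIFIABLE.\n\nClaim: {claim}"
)

def classify_claim(claim: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
        temperature=0,  # deterministic output keeps labels consistent
    )
    return response.choices[0].message.content

print(classify_claim("A video shows the president announcing a national curfew."))
```

An UNVERIFIABLE label is the natural hand-off point for the augmented approaches mentioned above: the system retrieves fresh evidence from external sources and asks the model again.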
Challenges of using AI to detect misinformation
Despite the significant role artificial intelligence plays in the media space, it also poses challenges to people and society at large, and technological advancement must not come at the cost of essential ethical standards. Some of the challenges faced by AI-driven solutions are:
- Bias: AI can inherit bias from the data it is trained on, which can lead to inappropriate or skewed responses.
- Wrongful blocking of accurate content: AI systems are prone to flagging genuine content as fake, so legitimate material risks being removed.
- Disputes over who decides what content is legal or illegal: there is controversy over whether online platforms, governments, or judicial systems should judge the genuineness of content and order the removal of undesirable material.
Opportunities of using AI to detect misinformation
The role of artificial intelligence in tackling misinformation is significant. According to Facebook, AI tools detect the majority of harmful content on its platform: 99.5% of terrorist-related posts, 98.5% of fake accounts, 96% of content involving adult nudity and sexual activity, and 86% of graphic violence. These tools rely heavily on data provided by the company’s human moderation team for training. At Meta, AI has been effective in protecting people from harmful content, including by adding warnings to content rated by fact-checking organisations. Collaboration with fact-checking organisations has helped Meta take prompt action against harmful posts that could negatively affect people. Studies have shown that technology can enhance human decision-making in the detection of misinformation.
Solutions to AI shortcomings
No single entity can solve AI’s shortcomings; collaboration is needed. Katarina Kertysova shares the following ways out of the AI challenges:
- AI-enabled platforms should be efficient both in verifying false and misleading content and in sharing fact-based counter-messages.
- Platforms should show utmost accountability: biases in AI decision systems can be uncovered by auditing the data used to train a model. For instance, the proposed Algorithmic Accountability Act in the United States would require businesses and companies to review their AI systems and report any discrimination against people in critical decisions involving health care, financial services, housing, and educational opportunities.
Artificial Intelligence (AI) plays an important role in the fight against misinformation: it can flag misleading information before it harms people. To curb the menace of misinformation, stakeholders, governments, LLM developers, and media organisations must collaborate on effective solutions that safeguard ethical standards and public trust.