
One of the most controversial moments of the 2023 Nigerian presidential election was Peter Obi’s alleged phone call with Bishop David Oyedepo, founder of Living Faith Church Worldwide. In the audio clip, Obi was alleged to be canvassing for votes on religious grounds by referring to the election as a “religious war,” although he claimed the statement was taken out of context. While many netizens believed the audio was genuine, others, including ardent supporters of the candidate, strongly maintained that it was AI-generated.
Unlike the contested Obi audio, a viral voice recording that purported to capture former Vice President Atiku Abubakar and his running mate for the 2023 Nigerian presidential election, Governor Ifeanyi Okowa, planning to rig the February 25 election was confirmed by technology experts to be fake and generated with sophisticated AI software.
Peter Obi and Atiku Abubakar are not alone in such situations. Politicians around the world have dismissed potentially damaging evidence, such as leaked audio and video footage, as AI-generated fakes, even as there are genuine, well-documented cases in which AI has been used to spread misinformation.
One global example of authentic media being dismissed as AI manipulation came when Donald Trump rejected an ad aired on Fox News. The ad compiled his well-documented public gaffes, including his struggle to pronounce the word “anonymous” in Montana and his visit to the California town of Paradise, which he inadvertently referred to as “Pleasure.” The Washington Post also documented this case.
These examples demonstrate that the emergence of AI capabilities unavailable only a few years ago, together with growing concern about the potential misuse of deepfake and generative AI technology, has created a grey area in which claims can be disputed without clear evidence either way.
Beyond the political sphere, sophisticated voice-cloning programs can now replicate a person’s voice, even a family member’s, from only a brief sample extracted from online content. According to Colbin, a Media Insider contributor, this advancement has made it increasingly easy for individuals to disown potentially damaging audio or video recordings.
More recently, AI has come under scrutiny for enabling what has been called a “liar’s dividend,” with multiplying effects: as awareness of these incidents grows, political and non-political actors alike may increasingly invoke AI-generated content as a ready-made basis for denial.
The Truth and the Liar’s Dividend
Silas Jonathan, a digital media research expert at the Centre for Journalism, Innovation and Development (CJID), said the popular saying “seeing is believing” is now being challenged. He explained that before now, “what we know as truth is something that when people see, they believe in, but the concept of deepfake challenges this spectrum, as now what we see we do not believe”.
One of the challenges faced is that “it implants doubt in people’s minds that they no longer believe in what used to be the truth; some years back, the factual truth of evidence to prove someone’s actions was a video, but now because of deep fakes, even that can be questioned,” he added.
Secondly, he pointed out that people can hide from the truth by labelling it a deepfake, hence plausible deniability. With deepfakes casting doubt on what we see, individuals such as politicians may capitalise on that doubt, since dismissing genuine evidence as misinformation can protect their reputation or help them win an election.
Permission Structure to Deny Reality
On June 5, 2024, thousands of ultranationalist Israelis paraded through occupied East Jerusalem as part of the annual ‘Jerusalem Day’ celebration. Al Jazeera reported that part of the crowd attacked a Palestinian journalist, Seif Kwasmi, and its report was accompanied by several photos and videos taken from different angles and by different sources. Yet when these images were shared on X, some accounts, in a bid to discredit the incident, tagged them as AI-generated. One account with over 2,000 followers went as far as sharing a screenshot of an AI-detection website’s verdict on the image to back up the claim, in a post viewed more than 20,000 times. The incident did, in fact, occur: multiple media outlets reported both the attack and the events that followed, which saw the journalist detained by Israeli authorities.
This incident raises the question of how much we can trust the AI-detection tools that journalists, researchers and members of the public increasingly rely on to verify online media content and counter AI-driven disinformation on digital platforms. Eliot Higgins, journalist and founder and creative director of Bellingcat, the Netherlands-based investigative journalism group specialising in fact-checking and open-source intelligence and often described as the world’s largest citizen-run intelligence agency, responded to the claim by pointing out: “this is a real image that is being dismissed because a crap AI detection website that doesn’t actually work, because AI gives people a permission structure to deny reality.” In an interview with the US-based tech news platform WIRED, Higgins explained: “When people think about AI, they think it’s going to fool people into believing stuff that’s not true. But what it’s really doing is giving people permission to not believe stuff that is true.”
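To illustrate why a lone detector verdict is weak evidence, here is a minimal, hypothetical sketch in Python of how such a check is typically run with an off-the-shelf classifier. The model identifier and image file name are placeholders chosen for illustration, not a reference to the detector used in the incident above; the point is that the output is a probabilistic score from one model, not forensic proof.

```python
# Hypothetical sketch: querying an off-the-shelf "AI image detector" classifier
# via the Hugging Face transformers pipeline API. Model id and image path are
# placeholders for illustration only.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/ai-image-detector")

# Score the image against the model's labels (e.g. "artificial" vs "human").
scores = detector("parade_photo.jpg")
print(scores)
# Example output: [{'label': 'artificial', 'score': 0.63}, {'label': 'human', 'score': 0.37}]

# The score is one model's statistical guess about one file, not proof of fakery:
# recompression, cropping, screenshots and re-uploads routinely push genuine photos
# past an "artificial" threshold, so a single score should never outweigh
# corroborating sources, metadata and on-the-ground reporting.
```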
What is a Permission Structure?
The term was popularised by former US President Barack Obama’s administration; before then, it had been used primarily in the marketing, PR and advertising industries. According to Model Thinkers, it refers to “an emotional and psychological justification that allows someone to change deeply held beliefs and/or behaviours while importantly retaining their pride and integrity.”
The concept is predicated on the observation that drastically changing a strongly held belief or ingrained behaviour tends to threaten a person’s sense of self and may even make them feel ashamed of their flawed beliefs.
Therefore, permission structures provide a framework that allows people to accept change that they may otherwise oppose. Alternatively, they may provide rational cover for a person to maintain an erroneous belief without feeling ridiculous or unreasonable.
A well-designed permission structure eases a person’s transition to a new viewpoint in a way that makes sense and aligns with their preexisting beliefs, or helps solidify an existing belief, without making them feel silly or deluded. In the case of the Palestinian journalist, the X users claiming the images were AI-generated preferred the spurious result of a questionable “AI detection” website to confronting the fact that the event had actually happened, and by discrediting a news report of a verified event they spread false information widely online.
Factors that Make Deniability Permissible in the Age of AI
- Increased Awareness: As society becomes increasingly informed about the advancements in AI technology and the new possibilities that arise, people are also developing a higher level of scepticism towards all media content, including text, images, videos, and audio. This scepticism has created opportunities for individuals to exploit doubt and avoid being held accountable for their statements or actions.
- Widespread AI Capabilities: Verma and Vynck reckon that one reason plausible deniability has become permissible is that deepfakes regularly go viral on social media platforms such as X, Facebook and Instagram, while the tools and methods for identifying AI-created media are not keeping pace with the rapid advancement of AI’s ability to generate such content. Even some well-known detection tools have been found to fall short or show bias when flagging AI-generated content.
- Information Overload: In a world where digital information reigns supreme, we are bombarded daily with an endless stream of legitimate and questionable information. The chaotic nature of this digital ecosystem sometimes makes it nearly impossible to discern the source of a piece of information, allowing individuals to continue their actions and evade accountability behind a veil of obscurity.
- Biased Supporters: Political and ideological bias is prevalent in political or religious circles. In these groups, supporters often tend to believe denials from their preferred candidates or individuals. This behaviour leads to a selective application of accountability, fostering an environment where biases influence judgment and decision-making.
- Legal or Regulatory Shortcomings: AI-generated content can also be exploited to avoid accountability because of gaps in current laws and regulations, where existing legal and regulatory frameworks do not yet adequately cover or address these new possibilities.
What, then, are the Implications of Awareness of Deepfakes and Other AI Capabilities?
- Threat to World Stability: With AI advancing and its potential to amplify fake news and disinformation growing, AI-generated propaganda and lies pose a real threat to world stability, said Swiss President Viola Amherd at a conference of world leaders and CEOs held in Davos, Switzerland, in January 2024.
- Erosion of Trust and Distortion of Reality: Fabricated content distorts reality and manipulates public perception, said Neil Jacobson on Open Fox. He further explained that fabricated statements or actions leave political, non-political and everyday individuals alike at the mercy of false narratives.
- Obstruction of Justice: Media tendered as evidence in court, if dismissed as an AI fake, can undermine the traditional reliance on visual evidence, introducing scepticism that can obstruct the pursuit of justice and the chain of custody. According to experts Vazquez and McDermott, user-generated evidence, including open- and closed-source information, is increasingly being leveraged to inform judgements central to the work of United Nations human rights investigations.
- Disruption of Democracy: Deepfake audio clips of public figures can disrupt democracy. Worse still, dismissing real audio or video clips of public figures as AI fakes produced by the opposition can deepen that disruption, creating an environment where misinformation and disinformation thrive and ultimately wearing away the foundations of democracy.
The Way Forward
Combating disinformation in the age of generative AI is thus becoming an even more demanding task, as the very tools used to fight it are fast becoming tools for denying reality with an even stronger appearance of credibility.
There needs to be a better balance between the spread of AI knowledge and tools on the one hand and the awareness and use of AI-checker tools on the other: in most instances, only AI experts possess the technology and expertise to analyse a piece of media and determine whether it is real or fake, leaving too few people capable of truth-squadding content that can now be generated with easy-to-use AI tools available to almost anyone.
According to Monsur Hussain, Head of Innovation at the Centre for Journalism, Innovation and Development (CJID), the gap between AI generation tools and AI checkers may not close anytime soon because of the way people are wired: people today want to share information for engagement, but many are not interested in knowing its source or learning how to verify it. Bridging that gap would require intensive media sensitisation about what is real and what is fake. He believes that the more we face the consequences of AI-generated content, the more interest will be directed towards fact-checking and the democratisation of AI-checker tools.