Cole Praise | June 1, 2023
The advent of AI
Artificial Intelligence, commonly called AI, is not independent by itself. These systems have become so sophisticated that they can output answers to almost any request.
The world increasingly relies on this software to perform very complicated tasks, including predicting events many years into the future.
Humans have found this new computerised system a tempting plaything, even to the point of data manipulation. But again, AI systems are not independent by themselves.
The working wonders of AI and its growing impact on information efficacy
To operate, AI systems rely on human data: birth dates, ages, definitions of concepts, cumulative records of real-time events, signs and symptoms of diseases, even the future market price of a commodity.
In other words, human beings feed this software the details of global activities and program it to perform what it is asked, based on the data available to it.
For instance, have you ever wondered how your email account filters out spam messages? It’s thanks to programmers who have given the system rules and characteristics for identifying and blocking unwanted messages. The problem with this approach is that as soon as the spam detection tool becomes smarter, spammers adapt their techniques, adding more sophisticated tricks to bypass the filters.
It’s a constant cat-and-mouse game between the developers and the spammers, each trying to outsmart the other.
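To make the idea concrete, here is a minimal sketch in Python of the kind of rule-based filtering described above; the keyword list and messages are hypothetical, and real filters combine many more signals, including machine learning models.

```python
# A minimal, hypothetical sketch of rule-based spam filtering.
# Real filters combine many signals (sender reputation, learned
# models, etc.); this only illustrates the cat-and-mouse dynamic.

SPAM_KEYWORDS = {"free money", "winner", "claim your prize"}  # assumed rules

def is_spam(message: str) -> bool:
    """Flag a message if it contains any blocked phrase."""
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("Claim your prize now!"))   # True: caught by the filter
print(is_spam("Cla1m y0ur pr1ze now!"))   # False: a small tweak slips past
```

Each time the filter learns a new rule, spammers invent a new disguise, and the race begins again.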
This AI-fuelled technological arms race has the potential to greatly accelerate the growth and sophistication of disinformation campaigns. Already, there has been a surge in the number of tools capable of generating convincing fake content, posing a serious threat to the integrity of online information. Some of these tools include:
Prank Me Not
Prank Me Not allows social media users to clone statuses, upload profile pictures, and fabricate conversations on Facebook, Twitter and Messenger. Users can use the app’s features to create social exchanges that never happened, then share the fabrications with friends and family as though they took place on the cloned platforms.
Deepfakes Web
Deepfakes Web allows users to upload a video to the app through the cloud, where it is digitally manipulated. The uploaded video must be high resolution to retain its image properties, and users essentially have to imitate the gestures of the person in the target video so as not to give the fake away.
Lensa AI
Lensa is not a deepfake AI per se; it differs from the variety that pastes your face onto an entirely different image. But it can craft lifelike pictures of a person from only a few selfies.
It can make realistic portraits of a person in different styles, including fantasy characters such as superheroes.
Dall-E Mini
Adapted from OpenAI’s Dall-E, it creates images from text prompts. The model draws on millions of pictures collected from the internet, so it is little surprise that it can produce a wide variety of images, including unrealistic ones.
Zao
Zao is a deepfake app built around video clips, including scenes from popular movies. It swaps a user’s face into these videos so convincingly that the original and the counterfeit are barely distinguishable.
Wombo
Wombo is a lip-syncing app that transforms anyone’s photo into a singing character. Users can choose from 15 songs, and the character in the image will appear to sing. This feature has given the app an edge over Instagram Reels, YouTube Shorts and TikTok.
Reface
Formerly named Doublicat, Reface makes fun GIFs. A user captures an image of themselves and places it into a GIF of their choice; the app fits the person’s face into the GIF, with results depending on the symmetry of the face and the type of GIF. It works as a personalised GIF maker for creating meme GIFs to impress others.
AI tools: An open sesame to unlimited online fabrications
The rapid evolution of AI tools has made it increasingly challenging to distinguish fact from fiction in news dissemination. Individual biases have long been a significant driver of misinformation and disinformation, but with AI in the mix the situation has reached a critical level.
With ubiquitous internet access and the proliferation of AI-powered social media, users can create and disseminate a deluge of misleading content, exacerbating the problem.
For instance, social and mainstream media erupted when the People’s Gazette reported an alleged phone conversation between Peter Obi, the 2023 presidential candidate of the Labour Party, and Bishop David Oyedepo. Some online users questioned the call’s authenticity, suggesting it was AI-generated, while others dismissed that insinuation. The extent of AI’s influence on human activities and opinions, and its accuracy, are critical issues to consider. Whether the phone conversation was genuine may never be known, given how powerfully algorithm-driven opinion now shapes what people accept as true.
What will never be known, and what we can do
We are no longer simply searching for the truth. Rather, we face a complicated battle of AIs confronting AIs, one that leads us to conflicting conclusions.
That means acts can be committed without anyone ever knowing whether they happened. It also suggests that doubts will be sown in people’s minds regarding the outcomes of forensic findings, defeating the purpose of information and news reporting.
At this juncture, humans are slowly becoming slaves to the outputs of artificial intelligence, with individual sentiment determining whether to believe an outcome or not. This is why it is important to set the ball rolling and implement the following:
- Develop AI-powered fact-checking tools: Based on the issues identified, AI-powered fact-checking tools need to be developed and deployed to combat disinformation campaigns. These tools will help to quickly identify and flag false information and provide users with accurate, verified information. Although tools such as InVid, Deepware and Hugging Face have been very helpful, the constant evolution of AI calls for more investment (a minimal, illustrative sketch of such a triage step appears after this list).
- Increase public awareness: Public awareness campaigns should be launched to educate people about the potential for AI-powered disinformation campaigns and how to identify and avoid false information. Over the past six years, organisations such as DUBAWA have been championing this cause, helping people be more critical when consuming information online.
- Collaborate with tech companies: Tech companies should collaborate with governments, researchers and civil society organisations to develop tools to detect and mitigate disinformation campaigns. This can include developing algorithms to identify fake news and deepfakes, as well as tools to monitor and report false information. Although many such collaborations already exist, more are needed to prepare society for the inevitable AI future.
- Encourage ethical AI development: There is a need for ethical AI development to ensure that AI-powered tools are built and used responsibly. This includes ensuring that AI tools are transparent and accountable and do not discriminate against any group of people, especially since most of these AI fabrication tools are free.
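As a concrete illustration of the first recommendation, below is a minimal sketch of an AI-assisted triage step built on the open-source Hugging Face transformers library mentioned above. The candidate labels and the example claim are illustrative assumptions, not a production fact-checking setup.

```python
# Minimal sketch: pre-screening claims with a zero-shot classifier so
# human fact-checkers can prioritise what to verify. Illustrative only;
# the labels below are assumptions, not a validated taxonomy.
from transformers import pipeline

# facebook/bart-large-mnli is a publicly available model commonly
# used for zero-shot classification tasks.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["factual claim", "opinion", "potentially misleading"]

def screen_claim(text: str) -> str:
    """Return the best-matching label to help reviewers triage claims."""
    result = classifier(text, candidate_labels=CANDIDATE_LABELS)
    return result["labels"][0]  # labels come back sorted by score

print(screen_claim("The audio clip proves the candidate rigged the vote."))
```

A real deployment would route flagged items to human fact-checkers for verification rather than publishing automated verdicts.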