Photo credit: blog.emb.global
Introduction
The world is moving at a fast pace with advances in technology, and artificial intelligence tools are among the sophisticated technologies now used to create deepfakes. According to iProov, deepfakes themselves are not new, but the tools used to create them are multiplying, and increasingly sophisticated AI will continue to shape both how data is manipulated and how that manipulation is detected.
For example, the growth of e-commerce and online stores has been accompanied by a rise in sellers of counterfeit products, and online retailers now use AI tools to distinguish genuine products from fakes in order to retain consumer trust. Worse still, social media contributes enormously to the spread of deepfakes. How people share content matters: according to research, videos and photos are more likely to be shared on Twitter than news articles or online petitions, and researchers found that tweets from Donald Trump and Hillary Clinton containing images and videos received significantly more likes and retweets during the US presidential campaign. Helping the public distinguish real content from deepfakes therefore calls for deliberate campaigns and public enlightenment. Because of the trust people place in information shared on social media, deepfake content (a photo, video, or audio clip skillfully edited and altered to falsely portray someone as having done or said something they did not actually do or say) often causes havoc before it is detected.
In Nigeria, for instance, an alleged conversation between the presidential candidate of the Labour Party, Peter Obi, and the founder of Living Faith Church Worldwide, Bishop David Oyedepo, went viral just weeks before the 2023 general elections. The leaked audio featured a voice purported to be Obi's urging the clergyman to canvass Christian votes for him. The post sharing the audio was viewed over 10.3 million times, according to Twitter statistics. The presidential candidate denied the recording and his party dismissed it as a deepfake, and a forensic review by a Twitter account named Democracy Watchman also found the audio to have been manipulated; however, an opposing analysis by the Foundation for Investigative Journalism suggested otherwise. Whatever the actual truth may be, such viral audio can erode the trust people have in their preferred candidates and influence electoral outcomes.
What are Deepfakes?
Deepfakes are synthetic media, in the form of audio, images, or video, generated through artificial intelligence and machine learning algorithms. They are mostly shared on social media, where they spread with wide virality. According to DeepMedia, about 500,000 video and voice deepfakes were shared on social media around the world in 2023, a figure expected to reach 8 million by 2025 and possibly to double every six months. Studies conducted by iProov show that only about one-third of global consumers know what deepfakes are, leaving the majority unprepared, which means misinformation has the potential to spread rapidly across platforms.
How Deepfakes Influence Misinformation
Deepfakes generate wide virality when they are shared, and because they so closely resemble genuine material, they distort the perceived authenticity of real content. The misuse of deepfakes can pose a serious threat by provoking social unrest, manipulating elections, or inciting violence through falsehoods that blur the line between reality and propaganda. Deepfakes are powerful tools for disseminating misinformation and malinformation, which challenges society (Choraś et al., 2021). In one widely shared example, a fabricated video appeared to show Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons; in reality, the clip had been uploaded through a hack of a Ukrainian news outlet.
Several authors have described ways in which deepfakes contribute to the spread of misinformation: authenticity and trust, amplification of false narratives, virality and speed, targeted manipulation, and difficulty in detection.
1. Authenticity and Trust (Godulla et al., 2021):
Because deepfakes closely resemble real content, they are hard to tell apart from it, which undermines confidence in authentic material. As a result, misinformation gains credibility among people who have no reason to suspect the difference.
2. Amplification of False Narratives (Chesney & Citron, 2019):
Because deepfakes are often created to sway public opinion, people's voices and images can readily be manipulated to portray them as someone they are not. These false narratives play on opinions and emotions, which amplifies misinformation.
3. Virality and Speed (Hobbs, 2020):
Most deepfakes are produced with a high but deceptive quality that attracts attention. Through social media, they can go viral within a short period of time, which accelerates the spread of misinformation. By the time such content is debunked, the damage often cannot be undone.
4. Targeted Manipulation (Hartmann & Giles, 2020):
Some deepfakes are shared specifically to attack public figures or other influential people. They can damage how those figures are perceived and steer audiences toward particular decisions. For example, malicious misinformation about an investment can push people into decisions they would not otherwise make.
5. Difficulty in Detection (Gosse & Burkell, 2020):
The troubling reality is that detecting and debunking deepfakes, especially audio, video, and images, typically requires even more sophisticated artificial intelligence. The close resemblance between a deepfake and genuine content makes authenticity difficult to ascertain, which compounds the challenge of misinformation.
The Perils of Disinformation
Despite the helpful functions of artificial intelligence, its malicious use can cause tremendous damage, and the aftermath of a single piece of disinformation illustrates the peril. In India, a video went viral on WhatsApp (an instant messaging app) showing what appeared to be CCTV footage of children playing cricket in the street; two men on a motorbike snatch one of the children and speed off. The clip created widespread confusion and panic and fuelled roughly eight weeks of mob violence in which at least nine innocent people were killed. In reality, the footage was edited from a public education campaign in Pakistan designed to raise awareness of child abduction. The original video began with the staged kidnapping, but one of the actors then got off the motorbike and held up a sign cautioning viewers to look after their children. In the viral version, that closing scene was cut out, leaving a shockingly realistic clip of a snatched child (BBC News, 2018). This incident is just one example of how disinformation can harm the public.
Mitigating Against the Threat of Deepfakes
According to Deloitte's report "How to safeguard against the menace of deepfake technology: The battle against digital manipulation" (March 2024), the following measures can help mitigate the threat of deepfakes.
1. Emerging technology: Watermarking can embed identifying information in an audio recording, differentiating it from deepfake content and making the audio verifiable (see the sketch after this list).
2. Continuous monitoring: Organisations and individuals should continuously scrutinise the audio, video, and image content they handle.
3. User awareness: Awareness campaigns on the effects of deepfakes should encourage people to be cautious about malicious information.
4. Analysis of stored audio/facial data: Biometric information should be combined with other factors, such as a password or PIN, so that no single factor alone verifies a transaction.
5. Behaviour analysis: Close examination of deepfake content often reveals abnormalities, such as mismatched tone or lip-sync, that make it detectable.
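To make the watermarking measure above more concrete, here is a minimal, illustrative Python sketch of a spread-spectrum-style audio watermark, assuming the audio is available as raw samples in a NumPy array: a low-amplitude pseudo-random signal derived from a secret key is added to the samples, and verification correlates the suspect audio against that same key-derived signal. The function names, key, strength, and threshold values are illustrative assumptions rather than anything specified in Deloitte's report, and production provenance watermarks are considerably more robust than this toy.

```python
import numpy as np

# Toy spread-spectrum audio watermark (illustration only).
# A low-amplitude pseudo-random sequence derived from a secret key is
# added to the samples; verification correlates the audio with the same
# key-derived sequence. All names and parameters here are assumptions.

def make_watermark(key: int, length: int, strength: float = 0.005) -> np.ndarray:
    """Key-derived pseudo-random +/- strength sequence."""
    rng = np.random.default_rng(key)
    return strength * rng.choice([-1.0, 1.0], size=length)

def embed(audio: np.ndarray, key: int) -> np.ndarray:
    """Add the key-derived watermark to the audio samples."""
    return audio + make_watermark(key, len(audio))

def verify(audio: np.ndarray, key: int, threshold: float = 0.5) -> bool:
    """Normalised correlation against the expected watermark."""
    wm = make_watermark(key, len(audio))
    score = float(np.dot(audio, wm)) / float(np.dot(wm, wm))
    return score > threshold

# Example: watermark one second of synthetic "speech" at 16 kHz.
t = np.arange(16000) / 16000
original = 0.1 * np.sin(2 * np.pi * 220 * t)
marked = embed(original, key=42)

print(verify(marked, key=42))    # True  -> carries the expected mark
print(verify(original, key=42))  # False -> unmarked (or stripped) audio
```

Schemes like this trade imperceptibility against robustness: a stronger watermark survives compression or re-recording better but becomes more audible, which is one reason deployed systems typically use more sophisticated embedding and pair watermarks with provenance metadata.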
Conclusion
This article emphasises the importance of public awareness and advanced technology in combating the threats posed by deepfakes. Deepfake content shapes how information is consumed: many people believe what they see online without a second thought about verifying its authenticity. To ensure that deepfakes do not contribute to information disorder through misinformation, disinformation, and malinformation, people need to be informed about how prevalent they have become.
References
Al-Khazraji, S.H., Saleh H.H., Khalil, A.I., & Mishkal, I. A. (2023). Impact of deepfake technology on social media: Detection, misinformation, and societal implications. The Eurasia Proceedings of Science, Technology, Engineering & Mathematics (EPSTEM), 23, 429-441.
BBC News. (2018, June 11). India WhatsApp "child kidnap" rumours claim two more victims. https://www.bbc.co.uk/news/world-asia-india-44435127
Chesney, B., & Citron, D. (2019). Deep fakes: a looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753.
Choraś, M., Demestichas, K., Giełczyk, A., Herrero, Á., Ksieniewicz, P., Remoundou, K., Urda, D., & Woźniak, M. (2021). Advanced machine learning techniques for fake news (online disinformation) detection: A systematic mapping study. Applied Soft Computing, 101.
Godulla, A., Hoffmann, C.P., & Seibert, D. (2021). Dealing with deepfakes–an interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72-96
Goel, S., Anderson, A., Hofman, J., & Watts, D. J. (2015). The structural virality of online diffusion. Management Science, 62(1), 180–196.
Gosse, C., & Burkell, J. (2020). Politics and porn: How news media characterizes problems presented by deepfakes. Critical Studies in Media Communication, 37(5), 497-511.
Hartmann, K., & Giles, K. (2020). The next generation of cyber-enabled information warfare. 12th International Conference on Cyber Conflict (CyCon), 1300, 233-250. IEEE.