
As the world evolves with technological advancement, we awaken every day to new inventions, such as artificial intelligence (AI) and machine learning (ML) tools that aim to simplify our lives. But are these tools making our lives easier, or are they causing more harm? AI has revolutionised the media landscape by enabling unprecedented levels of photo, audio, and video manipulation.
Ever imagined the power of AI? Picture creating a logo for your business without breaking the bank, generating creative ideas on demand, translating a language in an unfamiliar country, summarising large documents in one go, composing a musical piece, or producing a voiceover without using your own voice. AI can do all of this and more. However, these benefits come with risks: over-reliance on AI-generated content, loss of originality, possible copyright infringement, misleading translations, job displacement, and misinterpretation of complex documents. Worse, the same technology can be weaponised by malicious actors to spread manipulation in the form of disinformation, deepfakes, and fake news.
In AI-generated media, manipulation refers to altering information (images, video, and audio) with AI tools for purposes such as disinformation, entertainment, or defamation. Manipulated media predates AI; it can be traced back to the era when Photoshop was the dominant editing tool. Although AI-generated content offers immense creative opportunities, it also raises profound ethical concerns.
In one way or another, we have all come across manipulated video or images without even knowing it. Examples include the viral image of Peter Obi, the Labour Party presidential candidate in Nigeria's 2023 elections, paying a visit to President Tinubu at Aso Rock Villa; the video of Elon Musk announcing plans to build a hotel in Nigeria; and the video of Donald Trump commenting on Tinubu's administration. All three were AI-generated, and each caused widespread misconceptions, underscoring how much AI contributes to the spread of misinformation.
Source: DUBAWA
AI-generated content can seamlessly alter reality, making it increasingly difficult to distinguish fact from fiction. Deepfakes exemplify this concern. Deepfakes are AI-generated videos that mimic real individuals; used maliciously, they can spread disinformation, damage reputations, sway public opinion, and undermine trust in the media. Malicious actors have used deepfakes to interfere with electoral processes, instigate wars and conflict, and spread defamation. They can fabricate footage of real people doing or saying things they never did, whether to spread disinformation and ‘fake news’, to create chaos, or simply for entertainment.
In 2016, there were allegations that social media bots were used to distort the U.S. presidential election. Research revealed that nearly 2.8 million unique users generated over 20 million election-related tweets. Notably, the researchers estimated that approximately 400,000 bots contributed to the political discussion, producing around 3.8 million tweets, roughly one-fifth of the total online conversation.
The growing availability of disinformation and deepfakes profoundly affects how people perceive authority and the information media. As the volume of fake news increases, trust in authorities and official facts erodes. To mitigate this risk, creators must clearly label manipulated content so that audiences are not deceived.
The European Union’s AI Act, passed by the European Parliament in March 2024 and formally approved by the Council in May 2024, states that generative AI such as ChatGPT will not be classified as high-risk but must comply with transparency requirements. One such requirement is that “content that is either generated or modified with the help of AI—images, audio, or video files (for example, deep fakes)—needs to be clearly labelled as AI-generated so that users are aware when they come across such content.”
Consent is another ethical concern in AI-generated content. Creating deepfakes from someone’s personal information (image or voice) without their consent raises privacy and security concerns. Using a person’s face to create a deepfake video without permission is unethical; it can also amount to defamation and invasion of privacy, both of which are punishable under the law.
Furthermore, AI systems pose privacy threats through their design, development, and deployment. They often collect and use personal data without proper consent, enabling unauthorised exposure or profiling and infringing on individuals’ rights to privacy and autonomy. As a result, people may face unwanted influences that hinder their ability to choose and pursue their goals freely.
Deepfakes used for other illegal purposes can already be addressed through existing defamation or copyright infringement legislation. Take the case of Davide Buccheri: he was sentenced to jail and ordered to pay £5,000 in compensation after creating a gallery of deepfake pornographic images of a co-worker.
Despite all the ethical concerns, fears, and issues surrounding artificial intelligence, it’s important to recognise that AI is here to stay. While fact-checking organisations work to combat fake news, effective legislation and regulation are necessary to address the threats AI poses. To this end, the European Union’s AI Act sets a valuable precedent for responsible AI regulation. Other governing institutions should follow suit, establishing and enforcing ethical guidelines to prevent AI misuse and mitigate the risks associated with AI-generated content.