Photo credit: The Wall Street Journal
The emergence of Generative Artificial Intelligence (GenAI) and its widespread adoption for various purposes have renewed old concerns about the potential misuse of technological tools, particularly in this age of swift information sharing across social media platforms and the wider internet. For instance, CNN reported earlier this year a case of a $25 million scam in Hong Kong resulting from a “deep fake video call”. At the same time, this emergence has been a source of enthusiasm and exploration for the seemingly boundless opportunities the technology can offer. This is the reason behind the various National AI Strategies proposed by different African governments, including Nigeria, South Africa, and Kenya.
However, alongside this exploration of opportunities comes the logical concern that the technology can be exploited for disinformation and the potential disruption of democratic norms.
Ethical Concerns and Risks
According to UNESCO, the UN’s primary agency for collaboration on education, science and culture, there are several ethical concerns associated with AI technology in the context of information sharing; the three main ones are:
Bias and discrimination: This refers to AI’s susceptibility to absorb and reproduce existing biases and discrimination, reflecting human inadequacies and furthering discrimination and inequality at a much faster pace and larger scale. These biases can be based on gender, race, region, socio-economic status, etc. A pertinent example is the growing number of deep fake nudes of famous women and, sometimes, even underage girls.
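To see how a model can inherit bias from its training data, consider the deliberately tiny, purely illustrative sketch below. The loan-approval framing, the data, and the numbers are all invented; real systems are far more complex, but the mechanism is the same: a model trained on skewed historical records reproduces the skew.

```python
# A hypothetical illustration of learned bias: the invented "historical"
# records approve group 0 far more often than group 1, and the fitted
# model reproduces exactly that pattern.
from sklearn.linear_model import LogisticRegression

# Single feature: group membership (0 or 1). Label: 1 = loan approved.
X = [[0]] * 8 + [[1]] * 8
y = [1, 1, 1, 1, 1, 1, 1, 0,   # group 0: mostly approved
     0, 0, 0, 0, 0, 0, 1, 0]   # group 1: mostly rejected
model = LogisticRegression().fit(X, y)

# Predicted approval probability for each group mirrors the skewed data.
print(model.predict_proba([[0], [1]])[:, 1])
```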
Transparency and explainability: Because AI tools draw information from many sources across the internet at lightning speed, GenAI tools such as ChatGPT, Meta AI, and Gemini cannot, unlike humans, be transparent about how certain information was arrived at and therefore lack the crucial quality of explainability. Explainability would mean providing the justification for every choice an AI system makes and identifying the important variables that influenced its result. The UK’s Department for Science, Innovation & Technology’s A Pro-Innovation Approach to AI Regulation is perhaps one of the best guides to achieving that.
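As a purely illustrative contrast, explainability is straightforward for simple models: in the hypothetical sketch below, a linear classifier’s per-word weights can be read off directly to justify a prediction, a kind of transparency that large generative models do not offer. All headlines and labels are invented.

```python
# A toy, hypothetical example of an "explainable" prediction: each word's
# learned weight shows how it pushed the classification up or down.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented headlines labelled 1 (misleading) or 0 (reliable).
texts = [
    "Miracle cure doctors do not want you to know",
    "Shocking secret the government is hiding",
    "Central bank publishes quarterly inflation report",
    "Health ministry releases vaccination schedule",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# "Explain" a new headline by listing the weight of each word it contains.
sample = "shocking miracle inflation report"
weights = model.coef_[0]
contributions = {
    word: round(float(weights[idx]), 3)
    for word, idx in vectorizer.vocabulary_.items()
    if word in sample.split()
}
print(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))
```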
Accountability: This refers directly to the question of who will be held responsible when AI generates fake, misleading, manipulative or harmful content such as deep fakes and false expert data. For example, a Federal Court in the US decided a year ago that an AI system cannot be named as an inventor on a patent or assert copyright since it is not a human being. More recently, courts have decided that content produced by AI should belong in the public domain rather than be credited to the people who developed the AI system or the person(s) who wrote the prompts that led to its creation. Because AI systems make decisions with little direct human input, deciding whom to hold accountable is a dilemma. This makes the risks of AI disinformation very high, as there may often be no one to blame directly.
Going further, UNESCO highlights three broad types of risks relating to the increased adoption of various AI technologies:
Technical limitations: These include the tendency to amplify harmful bias and a penchant for inaccuracy while sounding, looking or seeming genuine, also known as “AI hallucinations.”
Harmful AI products initiated by humans: These involve creating tools for deep fakes and shallow fakes and deploying AI tools to conduct hostile information campaigns, such as Twitter bots.
Risks in human-machine interactions: A situation where humans become excessively reliant on AI tools such that simple human tasks are outsourced to AI, resulting in automation bias, over-reliance, etc.
The Council of Europe, however, puts the number of AI-related concerns at six: inconclusive evidence, inscrutable evidence, misguided evidence, unfair outcomes, transformative effects, and traceability. These concerns, in turn, give rise to certain risks or “ethical challenges”, as illustrated in the diagram below:
Common ethical challenges in AI, Council of Europe.
The Way Forward
Researchers at Europol suggest that by 2026, as much as 90% of online content may be synthetically generated, that is, generated by AI. This also means an increasing amount of misleading content will be created using AI technology. Although these ethical concerns and risks cut across many areas of social life, they pose particularly significant challenges and threats to the information ecosystem. This, in turn, disrupts democratic norms, social cohesion, and collaboration, and can impact global stability and safety, as aptly captured in this recent DAIDAC article.
Therefore, to address both the dangers and the potential benefits of AI, governments and institutions have begun exploring mechanisms that enable adequate exploration of, and protection from, its impact. For instance, the Nigerian federal government recently announced plans to create a comprehensive National AI Strategy, an effort carried out by Nigeria’s Ministry of Communications, Innovation and Digital Economy, led by Bosun Tijani. Shortly afterwards, the Kenyan government announced plans to develop its own National AI Strategy in collaboration with Germany’s GIZ, with support from the German Federal Ministry for Economic Cooperation and Development (BMZ) and the EU’s “FAIR” initiative.
The uptick in African countries moving to prepare for the looming disruptions and opportunities that AI technology presents follows significant events in technological advancement and the global policy space. These include the wide-scale adoption of UNESCO’s 2021 Recommendation on the Ethics of AI; the UK’s global AI Safety Summit, held in 2023 and attended by several countries, including China; and the EU’s landmark AI Act, passed late last year.
However, it can take years to determine and adopt comprehensive, specific regulations, laws, and other relevant mechanisms. Hence, it is important to look at existing mechanisms for digital ethics and safety, for fighting different forms of information disorder, and for other needs of the information ecosystem. These mechanisms can be grouped into three broad categories:
International Mechanisms
These refer to international agreements or other commitments that are available on the global stage, which Nigeria is either a signatory to or can adopt in combatting the risk of AI-driven disinformation.
UNESCO’s 2021 Recommendation on the Ethics of AI
The Recommendation proposes a global framework that sets standards UNESCO Member States can adopt for the ethical use of AI. It examines potential ethical issues and how AI regulations might be designed to guarantee that the technology is produced and applied to benefit people, the environment, society, and humanity. It boasts signatories from all 193 member states, including Nigeria.
This is perhaps one of the largest and most broad-ranging partnerships on AI in the world. Members from academia, civil society, and business are included in this multi-stakeholder organisation. It engages its members in conversations about how AI should be used and how it will affect society. A dedicated working group of the organisation develops concepts and tools to identify and reduce the dangers associated with misinformation and disinformation.
Also, there are other international conventions or agreements relating to internet governance, online safety and data protection, the right to a free and pluralistic press, etc.
Local Mechanisms
- Fact-Checking Organizations and Investigative Journalism
In Nigeria, organisations like Dubawa, Africa Check, and others work to verify facts and dispel myths. These groups use AI-assisted methods to track and study information trends on social media, spot false information, and give the public access to verified information (a simplified sketch of one such technique appears below).
Organisations like the CJID train and mentor investigative journalists in different verification, research and data collection techniques to help verify information and fight disinformation. This includes Open Source Intelligence (OSINT) workshops, research workshops, etc.
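As a minimal sketch of one AI-assisted technique used in fact-checking, the hypothetical example below matches a new social media post against claims that have already been verified, using text similarity. The claim database, verdicts, and threshold are all invented; production systems are considerably more sophisticated.

```python
# A simplified, hypothetical sketch of claim matching: comparing a new post
# against previously fact-checked claims. All claims and verdicts below are
# invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical store of previously fact-checked claims and their verdicts.
checked_claims = {
    "Drinking hot water cures malaria": "False",
    "The election commission extended voter registration by one week": "True",
}

def match_claim(post: str, threshold: float = 0.3):
    """Return the closest previously checked claim if it is similar enough."""
    claims = list(checked_claims)
    vectorizer = TfidfVectorizer().fit(claims + [post])
    scores = cosine_similarity(
        vectorizer.transform([post]), vectorizer.transform(claims)
    )[0]
    best = scores.argmax()
    if scores[best] >= threshold:  # threshold is arbitrary, for illustration
        return claims[best], checked_claims[claims[best]], round(float(scores[best]), 2)
    return None  # no close match: route the post to a human fact-checker

print(match_claim("Hot water can cure malaria, share widely!"))
```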
- Legislation and Policy Measures
Cybercrimes Act 2024: This act prohibits the spreading of disinformation, cyberstalking, and cyberbullying. It introduces important upgrades that strengthen Nigeria’s cybersecurity framework by addressing new cyber threats and enhancing regulatory compliance, expanding on its predecessor, the Cybercrimes Act 2015.
Nigeria Data Protection Act 2023: Replacing the previous Nigeria Data Protection Regulation (NDPR), the Nigeria Data Protection Act 2023 offers a more comprehensive legislative framework for safeguarding personal data in the country. By outlining the obligations of data controllers and processors, guaranteeing the rights of data subjects, and creating unambiguous guidelines and principles for data processing, the Act brings Nigeria into line with international data protection standards such as the GDPR.
Collaboration and Education
People need to be effectively informed about the risks posed by false information and the methods for spotting reliable information sources. Initiatives such as UNESCO’s are designed to encourage critical thinking and increase audience awareness. In addition, events, projects and campaigns carried out by media-focused CSOs like the Centre for Journalism Innovation and Development (CJID) are crucial and effective mechanisms for combatting the use of AI in disinformation. Furthermore, tackling disinformation holistically requires cooperation between governments, tech corporations, civil society, and international organisations. Projects like the Digital Public Square initiative and the Global Disinformation Index (GDI) encourage data exchange and international collaboration to combat misinformation.
Conclusions
Mitigating the risks of AI-assisted disinformation therefore involves a broad combination of regulatory frameworks, technological innovations, collaborative efforts, and public education. While significant strides continue to be made, ongoing adaptation and vigilance are necessary to address the evolving nature of disinformation. Existing tools and mechanisms that are already in place and enjoy wide-scale adoption or practice remain the most efficient way of combatting information disorder, protecting individual rights and safety, and safeguarding democracies.