Executive Summary
This report provides an overview of the concept and application of artificial intelligence (AI). It begins by noting how technological advancements have made previously challenging tasks like simple arithmetic calculations seamless. However, these technologies lacked intelligence, leading to the development of AI—machines programmed to simulate human intelligence for automating tasks. AI is broadly defined as algorithms that enable machines to perform tasks traditionally associated with human intelligence, such as learning, problem-solving, and understanding language.
The paper further discusses the mechanisms underlying AI, including using algorithms and data to recognise patterns and make predictions, a process known as machine learning. It dives into specific areas of AI, such as deep learning for recognising complex patterns, natural language processing (NLP) for understanding and generating human language, and computer vision for image recognition and classification. The aim is to provide foundational knowledge about AI’s evolution from a concept to a daily utility, ensuring comprehensibility for technical and non-technical readers. It highlights AI’s significant impact on various fields and addresses concerns about its societal implications.
The document also traces the theoretical foundations of AI, noting that the idea has existed since Ancient Greek philosophers theorised about thinking machines. The mid-20th century marked a turning point when AI became a serious theoretical consideration, with advancements continuing through the 1960s with the development of the first AI programming language, LISP.
Introduction
In the past century, tasks as basic as simple arithmetic were difficult and time-consuming, until technologies capable of complex computation emerged and made calculation across various fields seamless. Although these machines produce accurate output quickly, they lack intelligence (Perumalla, 2023). Hence the concept of AI, in which intelligent machines are used to automate tasks and much more. In recent times, we have not only heard of and seen these AI tools but also interacted with them, as they bring into reality possibilities that could previously only be imagined. It is essential to break down the concept to understand how it came to exist and its proper place in our everyday lives. In its broadest definition, Sheikh et al. (2023) stated that “AI is equated with algorithms.”
However, this definition may be too abstract for people from non-technical backgrounds. More accessibly, AI is described as the simulation of human intelligence in programmed machines that think and act like humans (Duggal, 2024). That is, artificial intelligence refers to machines performing tasks traditionally associated with human intelligence, such as learning, solving problems, comprehending language, identifying objects, and interpreting speech (Glover, 2024).
These machines work by using algorithms and data. Glover (2024) explained that mathematical models or algorithms are trained on big data to recognise patterns and make predictions. Once trained, the models are deployed in their various fields of application, where they continue to learn and adapt to new data as they are used. This process of developing algorithms and models that enable computers to learn from data and make predictions or decisions without explicit programming is known as machine learning. Machine learning is often done using neural networks, a series of algorithms that process data by mimicking the structure of the human brain. Deep learning takes this further, allowing a machine to go “deep” in its learning and recognise increasingly complex patterns, making connections and weighting inputs for the best results.
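The core idea of “learning from data without explicit programming” can be illustrated with a minimal, purely illustrative sketch (a toy, not any system cited in this report): rather than hard-coding the rule y = 2x + 1, a tiny model infers it from example data.

```python
# A minimal sketch of machine learning: the rule y = 2x + 1 is never
# written into the program; the model discovers it from examples.
# (Illustrative only; real systems use libraries such as scikit-learn.)

def train(examples, epochs=1000, lr=0.01):
    """Fit y = w*x + b to (x, y) pairs by simple gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = (w * x + b) - y   # how far the prediction is off
            w -= lr * error * x        # nudge parameters to reduce error
            b -= lr * error
    return w, b

# The training data embodies the pattern; no rule is programmed explicitly.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w * 10 + b))  # prints 21: the learned rule generalises to unseen x=10
```

The same loop, with vastly more parameters and data, is conceptually what deep learning systems do: repeatedly adjust internal weights to shrink the gap between predictions and observed examples.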
Meanwhile, Natural language processing (NLP) teaches AI systems to understand and generate written and spoken language like humans. NLP focuses on speech recognition and natural language generation and is applied in various areas, such as spam detection and virtual assistants. Computer vision is also used for tasks like image recognition, image classification, and object detection. It is also used for facial recognition and detection in self-driving cars and robots (Glover, 2024).
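To make the spam-detection application mentioned above concrete, here is a deliberately simplified keyword-scoring sketch (the word list and threshold are hypothetical; real spam filters use statistical models trained on large corpora, not fixed rules):

```python
# Toy sketch of NLP-style spam detection: score a message by the share of
# its words that appear in a (hypothetical) list of spam-associated words.

SPAM_WORDS = {"free", "winner", "prize", "urgent", "click"}

def spam_score(message):
    """Fraction of words in the message that look spam-like."""
    words = message.lower().split()
    hits = sum(1 for word in words if word.strip(".,!?") in SPAM_WORDS)
    return hits / max(len(words), 1)

def is_spam(message, threshold=0.2):
    """Flag the message if its spam score crosses the threshold."""
    return spam_score(message) >= threshold

print(is_spam("Click now to claim your free prize, winner!"))  # prints True
print(is_spam("Meeting moved to 3pm tomorrow"))                # prints False
```

A production filter would replace the hand-picked word list with probabilities learned from labelled spam and non-spam messages, which is exactly the machine-learning pattern described earlier.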
This paper provides an overview of artificial intelligence, which has progressed from a mere concept to become a part of our daily lives, available and accessible to everyone. It aims to ensure that readers from both technical and non-technical backgrounds, regardless of their field of work, can understand what AI entails, how it came to be, and its revolutionary impact on organisations’ use cases. It also addresses recent concerns about the challenges associated with its impact on society.
Theoretical Foundation of AI
The significance and evolving impact of AI in today’s world is better captured by starting from its origin. According to Marr (2024), there was a time when artificial intelligence existed only in science fiction. Duggal (2024) likewise reckoned that the latest innovations now transformed into reality were previously found solely in science fiction. In those days, AI’s possibilities could only be glimpsed in movies, where robots were portrayed as walking, talking and acting like humans, albeit without emotion (Marr, 2024). Yet even this was not the beginning: the concept of AI predates modern technology, reaching back to the Ancient Greek philosophers, who theorised about thinking machines and perceived the human brain as a complex mechanism that might one day be recreated or simulated.
In the words of Kokkindis (2023), “Ancient Greeks not only created the foundations of modern civilisation, but they also predicted robots and other future technological innovations.” Thus, this era can be described as one upon which the theoretical foundation of AI was built. By the mid-20th century, the idea of AI as a “thinking computer” had become less fantastical and had entered the realm of serious theoretical consideration (Sheikh et al., 2023). At this time, the world saw the emergence of computers with their ability to solve complex problems and perform statistical analysis. These theories were, however, mainly limited to academics and entertainment at this time (Marr, 2024).
As technology continued to advance, AI research gained momentum in the 1960s with the development of the first AI programming language, LISP, by John McCarthy (Press, 2016). Between then and the 1980s, it became apparent that training machines with data to find their own solutions was more effective than the traditional approach of feeding them explicit instructions, which often falls short when faced with complex tasks (Marr, 2023). The following decade therefore saw a shift in focus towards machine learning and data-driven approaches, aided by the increased availability of digital data and advances in computing power. This period saw the rise of neural networks and the development of support vector machines, which allowed AI systems to learn from data, leading to better performance and adaptability (Marr, 2024).
In the 21st century, AI research has expanded into new areas alongside advances in machine learning. The period has seen progress in natural language processing, robotics, and computer vision, moving AI from a futuristic dream to a current reality that we all now see and use (Marr, 2024). The explosion of big data generated by digital activities, together with increased processing power, allowed neural networks and algorithms to become far more sophisticated in the 2010s. The 2020s are witnessing what Marr (2023) describes as AI’s explosion, attributed mainly to the development of deep learning techniques and the emergence of large-scale neural networks, such as the Generative Pre-trained Transformer (GPT) series by OpenAI. GPT-3, released in 2020, is a prime example of how AI has evolved, with generative tools like ChatGPT and DALL-E making AI more accessible and user-friendly for everyday applications.
Classification of AI
Despite the possibilities AI has already opened up, many of its envisioned functions remain to be realised. Here, AI is classified by capability and by type, highlighting both the current realities of AI and its future possibilities in theory.
AI by Capability
In Duggal’s (2024) work, AI is categorised into two broad categories: weak AI and strong AI.
- Weak AI: AI ranging from voice assistants (examples include Cortana, Alexa, Siri, and Google Assistant) to recommendation algorithms on social media and Netflix, chatbots, image recognition systems, and the like is regarded as weak AI. Such systems simulate aspects of human intelligence but operate only within their designed tasks and lack general intelligence (Glover, 2024). When you open YouTube and recommendations pop up, or targeted ads flood your timeline, that is weak AI at work.
- Strong AI: The strong AI group encompasses systems that match or exceed human intelligence. These systems can perform a broad spectrum of activities, including understanding, reasoning, learning, and applying knowledge to tackle complex problems for which they have not been specifically trained. Also known as Artificial General Intelligence, this type of system remains largely theoretical today (Duggal, 2024). Strong AI has been hinted at in Hollywood movies, mostly as intelligent robots capable of thinking and acting faster than human beings.
AI by Types
AI categorisation according to the perceived type of function they perform was covered in Glover’s (2024) article. We have:
- Purely Reactive: These machines carry out specific commands and requests but do not retain memory or past data to work with. They cannot store memories or rely on past experiences to inform real-time decision-making (Glover, 2024). They are limited to specialised duties; examples include Netflix’s recommendation engine and IBM’s Deep Blue (used to play chess). If you have wondered why smartwatch ads keep appearing after you once searched for their price, it is because the system simply “reacts” to that prior input.
- Limited Memory: Unlike reactive machines, AIs in this category can store previous data and predictions when gathering information and making decisions. Although they have enough memory or experience to make proper decisions, their memory is minimal (Duggal, 2024). Examples include ChatGPT and self-driving cars (Glover, 2024).
- Theory of Mind: This kind of AI has only been envisaged theoretically. It involves machines that perceive and understand thoughts and emotions and interact socially. Then, they use that information to predict future actions and make decisions independently.
- Self-Aware: As the name implies, this type of AI has self-awareness, or a sense of self. Regarded as the future generation of these new technologies, it exists only in theory: intelligent, sentient, and conscious. Envisioned examples include advanced cyborgs and projected robots capable of learning and exhibiting emotions.
AI Democratisation in the 2020s
The democratisation of AI aims to enhance the accessibility and usability of AI technology for a broader audience, including individuals without specialised technical knowledge. This involves reducing barriers to entry for AI development and deployment and empowering people and organisations to harness AI for their specific needs (Rao, 2020). It has been made possible through breakthroughs in generative tools like ChatGPT and DALL-E, which have enabled AI to be used across industries, revolutionising healthcare, finance, and manufacturing, and extending to our daily interactions as individuals. AI’s growth has become exponential, permeating every aspect of modern life, from Google and Facebook to online shopping and personalised research and learning (Perumalla, 2023). Through these mediums, it has become more accessible and user-friendly for everyday applications, empowering individuals and businesses to leverage its potential.
There is almost no industry that AI does not impact in the 21st century, including healthcare, finance, automotive, and media (Emmanuel, 2023). While democratising AI technology offers numerous benefits, it also presents challenges: concerns include the potential for misuse of AI, ethical considerations, and the need for responsible development and deployment (Lawton, 2024). Addressing these challenges is crucial to ensure that AI benefits society. Several tech experts have also posited that technological innovations bring both positive and negative consequences, and this holds true for artificial intelligence. There is also a general trepidation among ordinary people that AI could take their jobs, or worse, that an AI “takeover” is underway. Despite AI’s impressive progress, these challenges and ethical concerns must be addressed as the technology evolves (Cain, 2023).
Following a review of the literature, some of the critical moral and regulatory challenges associated with the widespread adoption of AI include:
- Privacy and Security: The issue of privacy as it relates to AI has been widely debated. Specifically, concerns have centred on what is acceptable regarding privacy in the context of AI and how AI handles personal data, for instance, how individuals’ data is collected, stored, and utilised. The aim is to safeguard individuals’ privacy and human rights against data breaches and unauthorised access. Since AI-based technologies feed on large databases, how can users’ data be protected, and what are the limits?
- AI bias and discrimination: Another issue is addressing inherent biases in AI algorithms. Concerns and discussions have focused on how the data used to train AI, or the algorithm itself, can be prejudiced. If the data used to train a model is incomplete or biased, for example containing gender or racial biases, the AI system may learn and perpetuate those biases and produce discriminatory outputs. Hence the call to resolve this, so that AI systems do not perpetuate or amplify bias. Malicious actors can also exploit these algorithmic biases to perpetuate lopsided trends or disinformation that can be extremely detrimental to our societies.
- Transparency and Accountability: There are questions about the transparency of AI systems. According to Adrianne (2023), users will be more likely to trust AI systems to make decisions when they understand how such a system arrives at a particular output or decision. Hence, AI systems need to be explainable and accountable.
- Intellectual property rights: There are also debates around intellectual property rights in the context of AI-generated content and creations. There have been questions about who can commercialise it, who is at risk of infringement, etc. As argued by Capitol Technology University, lawmakers must clarify ownership of rights and provide guidelines to navigate potential infringements. For example, in the context of AI-generated Art, who owns the art: the human user who generated it or the developers of the AI system?
- Job Replacement: There has also been perceived fear surrounding the discussion of replacing humans in the workspace. This concern stems from the ability of AI to automate tasks traditionally performed by humans. While some have argued for this case, conversely, some have argued that AI has the potential to create far more jobs than it destroys.
- Social Manipulation and Misinformation: Recently, we have seen AI tools deployed in news and articles in the context of elections, governance, and beyond, in the form of deepfakes, AI-generated pictures, automated social media posts, and so on. It is common knowledge that fake news, misinformation, and disinformation are widespread in politics, competitive business, and many other areas. There have been instances where AI tools were used to manipulate or influence public opinion, thereby amplifying disinformation and misinformation. When an AI-generated image depicted Trump evading police arrest, most online users who came across it thought it was genuine. Although it is sometimes easy for users to spot AI-generated content, cognitive tendencies such as confirmation bias can intensify the negative ways people engage with it.
Unlike the technological problems of the past, these ethical and regulatory challenges are unlikely to be solved by increasing processing power and data. Instead, the focus should be on developing the right policies, guidelines, and frameworks to ensure AI is used safely and responsibly as it becomes more integrated into our lives. In 2021, the European Union drafted a proposed AI regulatory framework called “The Artificial Intelligence Act (AI Act)”, which can be described as the first major regulatory document of its kind. The document introduces AI principles and a legal framework for member states with specific objectives. Other jurisdictions, such as Singapore, have launched national AI strategies, and companies like Google, Meta, and Microsoft have already adopted the Singaporean framework to confirm their AI governance credentials. Bodies such as the IEEE and OECD, and companies such as IBM and Microsoft, have also contributed to this ethical practice and regulation, as discussed in this research.
Furthermore, there is the African Union (AU) AI policy draft, which seeks, among other things, to increase African states’ representation and influence within global AI governance structures (Onyekachi, 2024). Drafted in February 2024 to guide AI regulation across the 55 AU member states, it aims to help African countries harness the potential of AI for socio-economic development. Countries without existing AI policies are encouraged to use it as a framework, while those with AI regulations already in place are encouraged to review and align their policies to ensure consistency among AU members (Sulaiman & Olen, 2024).
Recommendations
Our recommendations aim to shape AI’s transformative trajectory so that technological breakthroughs continue to deliver positive impact while their negative effects are minimised. The recommendations outlined here were inspired by the literature reviewed and the insights gleaned from the research underpinning this paper.
- Prioritise ethical and regulatory frameworks: Agreeing on ethical guidelines and governance for AI adoption is challenging in itself. Any decision taken today regarding regulatory frameworks and responsibility will have long-lasting consequences for how this transformative technology is used going forward, so care must be taken to ensure the right policies are drafted. Taking cues from previous AI policy drafts can provide insight and guidance for any other body or industry looking to create a regulatory document.
- The concept of Explainable AI (XAI): XAI was used in the early days of AI to ensure transparency and trust in systems. It should also be encouraged today, to help earn users’ trust in modern AI systems and to reduce the potential for algorithmic bias (Cain, 2023).
- Collaboration Between Stakeholders: This recommendation is in line with the requests made to encourage partnerships between policymakers, technologists, and domain experts to create balanced and effective frameworks.
- Incentivise AI Responsibility: To promote responsible AI development and deployment, incentivise AI developers and companies to prioritise ethical considerations in the design and deployment of their systems. This should be paired with third-party mechanisms to audit and test systems in order to identify and mitigate potential harms or biases.
- Invest in AI education and literacy: Develop educational programs and public awareness campaigns to help people understand AI’s capabilities, limitations, and potential impacts. This will also help them make informed choices about using AI in their personal and professional lives.
Conclusion
The development of AI is a testament to human creativity and perseverance. AI has progressed significantly from its initial theoretical frameworks to today’s sophisticated algorithms. As AI research advances, it explores new frontiers and promises even more remarkable transformations in the years to come, as seen in the discussion of AI classification above.
However, we also noticed that, despite its promising potential, there are significant concerns about the impact of the AI revolution, including fears of AI replacing jobs in industries where manual labour tasks can easily be automated. Privacy concerns, security, bias, and the possible misuse of AI are critical issues that require careful consideration and regulation if we are to strike a balance between innovation and ethics to mitigate potential risks (Singh, 2023).
The journey from theory to reality is just the beginning. AI’s ethical implications and responsible development remain paramount as we navigate this powerful technology and its potential impact on our future.