Leading minds in artificial intelligence suggest that we are on the brink of a significant breakthrough. Dario Amodei, CEO of Anthropic, argues that within one to two years, AI could surpass human capabilities at a wide range of tasks. In his essay "The Adolescence of Technology," he warns of the emergence of Artificial General Intelligence (AGI): an AI that can perform at a human level across a broad spectrum of domains.
Since the introduction of generative AI tools like ChatGPT in 2022, advancements in artificial intelligence have accelerated. Experts, including Amodei, predict we are nearing the era of AGI, where machines may outthink humans.
So, what exactly is AGI?
OpenAI describes AGI as AI systems whose intelligence surpasses that of humans. Put more simply, according to David Scott Krueger of Evitable, AGI refers to an AI's capacity to perform any task a human can do. Researchers broadly agree that AGI would mark a break from current AI technologies as we know them.
Current AI models, particularly large language models (LLMs), are trained on vast amounts of text and learn to produce human-like writing. They fall under the umbrella of generative AI: models that create new content, whether text, images, or programming code, based on patterns in their training data. AGI, in contrast, would be a system capable of learning and applying knowledge across many fields, much as humans do.
How is AGI different from existing AI systems?
Melanie Mitchell, a computer scientist at the Santa Fe Institute, notes that the latest generative AI advancements have reignited discussions about the imminent arrival of AGI. However, despite their capabilities, these systems have not yet achieved AGI status. They lack the kind of fluid, adaptable reasoning that characterizes human thought. As Krueger explains, they struggle to maintain coherence over extended periods and can falter in real-world scenarios.
For instance, Tesla's self-driving software encountered issues where it inexplicably braked in certain locations due to misleading images, such as a billboard showing a police officer with a stop sign. Unlike humans, who can understand that a billboard is not a real stop signal, the AI did not grasp the distinction.
The pressing question, says Raphaël Millière, a philosopher and cognitive scientist at the University of Oxford, is "How well do these systems generalize beyond mere memorization?"
What advancements will AI need to achieve AGI?
Millière emphasizes that this is the "trillion-dollar question." One promising avenue is continual learning, which would let AI systems keep improving by learning from new experiences without discarding previous knowledge, much as humans do. Today's systems largely lack this ability: ChatGPT, for instance, does not update its core model based on its interactions with users.
Another crucial factor is data efficiency. Current AI systems often require vast datasets to understand concepts that humans can grasp from just a few examples. Millière highlights a demonstration in which an image-generating AI could depict a pelican riding a bicycle but faltered when asked to reverse the scenario. A young child, by contrast, could draw a bicycle riding a pelican without ever having seen one. Developing algorithms that learn more efficiently from limited data may therefore be essential for achieving true general intelligence.
Major questions about the path to AGI remain unresolved, including which improvements AI systems actually need and where, exactly, the finish line of this race lies.
How will researchers determine when AGI is achieved?
Industry leaders are actively pursuing answers. Companies such as Meta, Anthropic, and OpenAI are dedicated to developing AGI, though they differ on how success should be measured. Mitchell notes that while several tests have been proposed, none has proven satisfactory. Some experts suggest that a machine capable of navigating the internet to earn a million dollars could qualify as AGI, a definition Mitchell believes misses the essence of human-level intelligence.
The challenge in predicting AGI development stems from the difficulty in defining intelligence itself. Without a universally accepted standard to measure human-level intelligence, it becomes complicated to determine when AI reaches this threshold.
While Amodei anticipates AGI in a few years, there are skeptics who argue it could take much longer—or may never happen at all. Millière reflects on the unpredictable nature of AI advancements, citing a history of overly confident predictions that later proved inaccurate.
Moreover, without a consensus definition of AGI, it remains a moving target. In engineering terms, researchers are divided over what it would take for AI to match the versatility and efficiency of human cognition.
Why is the pursuit of AGI significant?
In theory, AGI could enable machines to perform tasks that current AI struggles with. For example, a true AGI system wouldn’t misinterpret a billboard as a stop sign. Mitchell illustrates this by comparing it to a robot designed to load a dishwasher. Although such robots exist, they struggle with unexpected scenarios—like a dog approaching and licking the dishes. An AGI capable of recognizing such changes and adapting accordingly would embody the flexible intelligence associated with human reasoning, rather than the rigid approach of today's AI systems.
AGI has the potential to revolutionize various aspects of life, with Millière noting that these systems could automate significant parts of the research process, thereby accelerating scientific discoveries—an application he sees as incredibly beneficial.
However, concerns regarding AGI persist. In 2023, the Future of Life Institute released an open letter urging AI laboratories to pause the development of powerful models beyond GPT-4. Influential figures like Yoshua Bengio, Elon Musk, and Steve Wozniak were among over 30,000 signatories.
Furthermore, the Center for AI Safety warned of the potential "risk of extinction from AI" in a statement also backed by notable figures in the field. That same year, Geoffrey Hinton left Google in part so he could speak freely about his concern that humans could lose control of AI systems that surpass our abilities.
What fuels this trepidation?
Key issues involve alignment—ensuring advanced systems adhere to human goals—and interpretability, as increasingly complex AI models can behave in unpredictable ways that challenge human understanding. Mitchell warns that overstating the capabilities of these systems may lead humans to relinquish decision-making authority, fostering a dangerous dependency.
The more autonomy we grant to intelligent machines, the less we retain ourselves. Krueger warns that once AGI is achieved, human control could diminish, potentially leading to profound disempowerment for humanity.
While opinions on the feasibility and timeline for AGI differ, the overarching theme remains uncertainty—an inherent element of the ongoing dialogue surrounding this pivotal technology.