Reasons AI has yet to conquer a new video game

Summary
Current AI excels in specific games but struggles with unfamiliar ones due to brittleness.
Reinforcement learning systems overfit to trained scenarios, failing to generalize to new experiences.
True adaptability requires new AI architectures capable of learning games without prior exposure.

For many years, video games have served as a critical platform for testing the capabilities of artificial intelligence. From the early days of checkers programs to advanced systems that have triumphed in chess and Go, it seemed that each achievement brought us closer to machines exhibiting human-like intelligence. However, a recent paper by Julian Togelius and his co-authors challenges this commonly held view. They assert that, despite some significant wins, today's AI still encounters a fundamental obstacle: the ability to play a game it has never encountered before.

Notable advancements in game AI typically revolve around systems specifically optimized for a single game. While these systems can outperform humans under specific conditions, their ability to adapt dwindles with even minor changes in the rules, visuals, or environment.

This inflexibility uncovers a significant drawback. Human intelligence encompasses more than mastering tasks; it includes the capacity to adapt to new situations. Video games, with their wide variety of mechanics and goals, present an excellent environment for testing this adaptability. The authors highlight that games challenge an extensive range of cognitive abilities, such as spatial reasoning, long-term planning, social understanding, and experiential learning. Yet, current AI systems struggle with such comprehensive challenges.

Reinforcement learning, a prevalent method driving many current successes, involves systems that learn through trial and error, refining their skills over millions or even billions of simulations. However, these systems often become overly specialized, excelling only in the precise scenarios they were trained on while failing to generalize effectively. Even slight changes, like color shifts or spatial adjustments, can drastically hinder a trained agent's performance.
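The overfitting described above can be reproduced in miniature. The sketch below is an illustrative toy, not from the paper: the corridor environment, the `step` function, and every parameter are hypothetical. It trains a tabular Q-learning agent to walk a short corridor with the goal at the right end; the learned greedy policy solves exactly that task, yet fails as soon as the goal moves, because all it ever internalized was "walk right".

```python
import random

# Illustrative sketch (not from the paper): tabular Q-learning on a toy
# corridor of N cells. The trained greedy policy masters the exact task it
# practiced on, but breaks as soon as the goal is placed elsewhere.
# The environment, parameters, and names here are all hypothetical.

N = 8                # corridor cells 0 .. N-1
ACTIONS = (-1, +1)   # move left or right

def step(state, action, goal):
    """One environment transition: reward 1.0 only on reaching the goal."""
    nxt = min(max(state + action, 0), N - 1)
    return nxt, (1.0 if nxt == goal else 0.0), nxt == goal

def train(goal, episodes=500, alpha=0.5, gamma=0.9):
    """Off-policy Q-learning driven by a uniform-random behavior policy."""
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS)                    # explore randomly
            s2, r, done = step(s, a, goal)
            target = r + (0.0 if done else gamma * max(q[(s2, b)] for b in ACTIONS))
            q[(s, a)] += alpha * (target - q[(s, a)])  # TD update
            s = s2
    return q

def greedy_reaches(q, goal, start=0, limit=50):
    """Follow the learned greedy policy; report whether it finds the goal."""
    s = start
    for _ in range(limit):
        s, _, done = step(s, max(ACTIONS, key=lambda a: q[(s, a)]), goal)
        if done:
            return True
    return False

q = train(goal=N - 1)                      # train with the goal at the right end
print(greedy_reaches(q, goal=N - 1))       # True: solves the task it trained on
print(greedy_reaches(q, goal=2, start=4))  # False: goal moved behind the agent
```

The second call shows the brittleness in question: the policy still walks confidently to the right end of the corridor, away from the relocated goal, because nothing in its training ever rewarded any other behavior.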

In contrast, planning-based systems, such as those used in chess or Go, offer greater adaptability: they search over potential moves and outcomes rather than relying solely on prior training. However, they depend on fast, accurate simulations, which most modern video games, and real-world scenarios, cannot provide at scale.
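The trade-off can be sketched in the same toy setting. The depth-limited lookahead below (again illustrative, with a hypothetical corridor simulator, not the method of any particular system) handles any goal placement with no retraining at all, but only because it can call a fast, exact `step` simulator many times per decision, which is precisely the capability most real games do not offer.

```python
# Illustrative sketch (not from the paper): a depth-limited forward planner.
# Unlike a trained policy, it needs no prior experience with a task, but it
# must roll candidate moves forward through a fast, accurate simulator.
# The corridor environment and all names here are hypothetical.

def plan(state, step, actions, depth, gamma=0.9):
    """Exhaustive discounted lookahead; returns (best_value, best_action)."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in actions:
        nxt, reward, done = step(state, a)
        future = 0.0 if done else gamma * plan(nxt, step, actions, depth - 1, gamma)[0]
        if reward + future > best_value:
            best_value, best_action = reward + future, a
    return best_value, best_action

def make_corridor(goal, n=8):
    """Build a simulator for a corridor with the goal at the given cell."""
    def step(s, a):
        nxt = min(max(s + a, 0), n - 1)
        return nxt, (1.0 if nxt == goal else 0.0), nxt == goal
    return step

# The same planner handles either goal placement without any retraining:
print(plan(3, make_corridor(goal=7), (-1, +1), depth=6)[1])  # 1  (move right)
print(plan(3, make_corridor(goal=0), (-1, +1), depth=6)[1])  # -1 (move left)
```

The cost is visible in the structure: each decision expands a tree of simulated futures, so the approach only scales when the simulator is cheap and faithful.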

Although large language models—crucial to today’s most visible AI applications—could appear to be a promising fit, their performance in unfamiliar games is surprisingly inadequate. Even in instances where they perform acceptably in well-known games, this is typically due to extensive, game-specific support structures. These systems require additional tools to interpret game states, manage memory, and carry out actions, and without this specialized infrastructure, their effectiveness diminishes significantly.

This discrepancy likely stems from the nature of their training data. Language models are built upon vast quantities of text, lacking experience with game states and actions. Consequently, they miss the embodied understanding and interactive engagement that come with gaming.

According to the authors, achieving genuine general gameplay abilities would necessitate an AI capable of learning a new game from the ground up in a time frame comparable to that of a skilled human—potentially several tens of hours—without leaning on prior knowledge or extensive simulations.

This threshold is far beyond current technology. Contemporary reinforcement learning models require vast amounts of data, while language models lack the means to acquire and refine knowledge through prolonged interaction. Closing this gap will likely require new architectures and new learning paradigms.

The consequences of this research extend beyond gaming. The capacity to adapt to new contexts is a vital aspect of what is envisioned with artificial general intelligence (AGI). If an AI struggles to handle a novel video game—a relatively controlled and simplified environment—it is unlikely to manage the complexities of the real world.

Interestingly, the paper highlights one area where AI has demonstrated effectiveness: computer programming. The authors suggest that coding can be treated as a game with defined rules, specific objectives, and immediate feedback through debugging and testing. Modern AI has become adept at this task thanks to extensive training on large amounts of code and related data.

Outside of such structured settings, however, AI capabilities remain limited.

In conclusion, the researchers advocate keeping games at the center of AI evaluation: not as isolated challenges but as a dynamic, evolving battery of tests for adaptability and creativity. A truly intelligent system would not just learn to play new games efficiently; it could create compelling games of its own.
