Debate continues about the potential for artificial intelligence to reach human-level intelligence. However, one area where it has shown remarkable prowess is mathematics. Recently, A.I. systems developed by Google and OpenAI successfully tackled five out of six advanced problems at the International Math Olympiad, an esteemed competition that features the brightest high school mathematicians from around the globe.
Yet, despite these achievements, A.I. still exhibits considerable gaps in what many would consider basic common sense. Anuradha Weeraman, a software engineer from Sri Lanka, experienced this firsthand when he posed a seemingly simple question to various chatbots: should he walk or drive to a repair shop just 50 meters away? The advice he received was to drive.
Researchers, engineers, and economists have a name for this peculiar phenomenon: "jagged intelligence." The term captures A.I.'s uneven performance, in which the same systems that excel in fields like mathematics and programming can flounder in far more straightforward situations.
This concept is gaining traction among experts who develop and study A.I. systems because it reframes the discussion about their intelligence. Researchers contend that rather than matching human capabilities across the board, A.I. operates differently: it can outperform humans at specific tasks while underperforming at others.