
Peering inside the black box of AI

Five years ago, computer scientist Alan Fern and his colleagues at Oregon State University, in Corvallis, set out to answer one of the most pressing questions in artificial intelligence (AI): Could they understand how AI reasons, makes decisions, and chooses its actions?

The proving ground for their exploration was five classic arcade games, circa 1980, including Pong and Space Invaders. The player was a computer program that didn’t follow predetermined rules but learned by trial and error, playing those games over and over to develop internal rules for winning based on its past mistakes (1). Because these rules are generated by the algorithm itself, they can run counter to human intuition and be difficult, if not impossible, to decipher.

Fern and his colleagues were optimistic, at least at first. They developed methods to reveal where their game-playing AI focused its visual attention. But they found it nearly impossible to confidently decode its winning strategies.
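The trial-and-error learning described above is an instance of reinforcement learning. As a rough illustration only (not the project's actual code, games, or methods), the sketch below shows a minimal Q-learning loop on a hypothetical toy task, where repeated play gradually builds a table of internal "rules" for which move to make in each situation.

```python
import random

# Toy stand-in for a game: the agent starts at position 0 and must reach
# position GOAL by choosing to move left or right. Reward is +1 only at the goal.
GOAL = 5
ACTIONS = [-1, +1]                  # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the learned estimate of how good each action is in each position.
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(200):          # play the "game" over and over
    state, done = 0, False
    while not done:
        # Trial and error: mostly exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Update the internal "rule" (Q-value) based on this experience.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: the preferred move in each position after training.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

Even in this tiny example, the learned values are just numbers in a table; in a deep reinforcement learning system playing Atari games, the analogous "rules" are spread across millions of network weights, which is what makes them so hard to interpret.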

Fern’s group was one of 11 that had joined the Explainable AI, or XAI, program funded by the Defense Advanced Research Projects Agency (DARPA), the research and development arm of the US Department of Defense (2). The program, which ended in 2021, was driven in large part by military applications. In principle, research findings would help members of the military understand, effectively manage, debug, and, importantly, trust an “emerging generation of artificially intelligent machine partners,” according to its stated mission.

Read more at PNAS.