Whether it’s Luigi racing past you and dropping a banana or a guard coming to investigate a noise, you’re likely to find some level of artificial intelligence embedded within each and every modern game. Through AI’s capability to generate exciting, emergent gameplay scenarios, it has become a part of gaming we can’t do without.
In truth, however, in-game AI rarely, if ever, aligns with the ideals of true artificial intelligence. Few video game AIs would pass the Turing test or be considered truly intelligent systems by researchers in artificial intelligence or machine learning.
Instead, game AI refers to a broad set of algorithms that govern character (or system) behavior to enhance or bring a gaming experience to life. These behaviors are often arranged in a scripted decision tree, which gives the AI a range of options to choose from at any moment, with clear parameters and prerequisites governing when, for how long and how many times each behavior or script can run.
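To make the idea concrete, here is a minimal sketch of what such a scripted decision tree might look like in Python. Everything here is illustrative: the `GuardAI` class, its actions and its repetition cap are assumptions for the example, not code from any real engine.

```python
# A minimal sketch of a scripted decision tree for a hypothetical guard NPC.
# All names and parameters here are illustrative assumptions.

class GuardAI:
    def __init__(self):
        self.investigations = 0      # how many times "investigate" has run
        self.max_investigations = 3  # prerequisite: cap on repetitions

    def choose_action(self, noise_heard, player_visible):
        # Each branch fires only when its conditions are met,
        # mirroring the "when and how many times" rules in a script.
        if player_visible:
            return "attack"
        if noise_heard and self.investigations < self.max_investigations:
            self.investigations += 1
            return "investigate"
        return "patrol"

guard = GuardAI()
print(guard.choose_action(noise_heard=True, player_visible=False))  # investigate
print(guard.choose_action(noise_heard=False, player_visible=True))  # attack
```

Because every branch and threshold is fixed by hand, the same inputs always produce the same choice, which is exactly where the predictability described below comes from.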
Because these decision trees are usually handwritten by developers, they often result in “artificial stupidity”: repetitive, predictable behavior in which the AI reacts in seemingly illogical or abnormal ways.
But beyond behaviors, AI is also used for pathfinding—that is, moving non-player characters around the game world while taking terrain and obstacles into consideration.
Despite how far in-game AI has come since its inception, its reliance on handwritten scripts means it is still—at least for now—not quite truly intelligent.
The earliest forms of game AI come from more traditional games. One of the very earliest was a computerized version of the mathematical game Nim, created in 1951 and publicly demonstrated in 1952. Even in this extremely early form, the computer was reportedly able to beat highly skilled Nim players.
After this, AI for checkers and chess quickly followed, culminating in the legendary 1997 defeat of world chess champion Garry Kasparov by IBM’s computer Deep Blue.
Games like Pong featured rudimentary AI that could easily rival player skill, with other 1970s titles such as Speed Race, Qwak (a duck-hunting title) and Pursuit (a dogfighting simulator) showing increasingly complex AI.
Jumping forward to the 1990s, the use of formal AI tools like finite state machines—a more structured computational model than the ad hoc scripts that came before—became commonplace. Real-time strategy games in particular relied on such systems, since a computer opponent had to manage countless units, incomplete information, pathfinding problems, real-time decisions and even economic planning.
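A finite state machine is easier to grasp in code than in prose: the AI is always in exactly one state, and named events move it between states along predefined transitions. The states and events below are illustrative assumptions for an RTS-style unit, not taken from any particular game.

```python
# A minimal finite state machine sketch for a hypothetical RTS unit.
# States and events are illustrative assumptions.

TRANSITIONS = {
    ("idle", "enemy_spotted"):   "attacking",
    ("idle", "move_order"):      "moving",
    ("moving", "arrived"):       "idle",
    ("attacking", "enemy_dead"): "idle",
    ("attacking", "low_health"): "retreating",
    ("retreating", "reached_base"): "idle",
}

class UnitFSM:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Change state only if this (state, event) pair has a defined
        # transition; unknown events leave the unit where it is.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

unit = UnitFSM()
print(unit.handle("enemy_spotted"))  # attacking
print(unit.handle("low_health"))     # retreating
print(unit.handle("reached_base"))   # idle
```

The appeal for 1990s RTS developers was exactly this explicitness: every unit's behavior is a small, inspectable table of states and transitions rather than a tangle of conditionals.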
After these complex computational systems were mastered (forming the basis of the AI we know today), bottom-up AI methods started to be used. These allowed for emergent behavior and a more sophisticated evaluation of player actions, which raised the ceiling for player–computer interaction.
Alongside these bottom-up methods came Monte Carlo tree search (MCTS). This method introduces variation into what would otherwise be ruthlessly efficient, repetitive play: the AI evaluates its options by simulating many possible playouts of each one and choosing among the promising ones, yielding behavior that is somewhat less predictable for the player. This applies both to AI actions and to pathfinding.
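The core idea can be sketched with tic-tac-toe. This is a heavily simplified "flat" Monte Carlo evaluation—full MCTS adds a search tree and a selection policy such as UCB1 on top—but it shows the essential trick: judge each legal move by the average outcome of many random playouts. All function names here are assumptions for the example.

```python
import random

# A simplified sketch of the Monte Carlo idea behind MCTS, on tic-tac-toe.
# Board: a list of 9 cells holding "X", "O", or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, player):
    # Finish the game with purely random moves; return the winner (or None).
    board = board[:]
    while True:
        w = winner(board)
        if w or None not in board:
            return w
        empties = [i for i, v in enumerate(board) if v is None]
        board[random.choice(empties)] = player
        player = "O" if player == "X" else "X"

def monte_carlo_move(board, player, n_playouts=200):
    # Score each legal move by how many random playouts it goes on to win.
    opponent = "O" if player == "X" else "X"
    def score(move):
        trial = board[:]
        trial[move] = player
        return sum(playout(trial, opponent) == player
                   for _ in range(n_playouts))
    moves = [i for i, v in enumerate(board) if v is None]
    return max(moves, key=score)

# X already holds cells 0 and 1; playing cell 2 wins on the spot,
# so the playout statistics single it out.
board = ["X", "X", None, "O", "O", None, None, None, None]
print(monte_carlo_move(board, "X"))  # 2
```

Because the evaluation rests on random sampling rather than an exhaustive handwritten script, two runs against the same position can rank near-equal moves differently, which is precisely the source of the added unpredictability described above.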
Of course, video game AI is a phenomenally complex topic that reaches deep into programming and scripting best explained by seasoned professionals. That said, I hope this article gave you a brief overview of the trajectory of video game AI, and how it developed from an innocent Nim-winning machine into the complex and multifaceted (if still somewhat flawed) AI we see in modern titles.
To outwit the AI in the game you’re currently playing against, consider visiting Eldorado.gg—the trusted marketplace for buying and selling in-game goods: