Single Algorithm Learns to Play Many Computer Games

Researchers at DeepMind, a London-based company owned by Google, have created a single computer algorithm that can not only learn to play 49 different arcade games, but also beat professional human players. This is the first time an artificial intelligence (AI) system has learned many different tasks without being reprogrammed between them.


Space Invaders is one of the many games that DeepMind’s algorithm has learned to master.

Previous AI systems could each master only a single game; IBM's Deep Blue, for example, played only chess. Given the new algorithm's versatility, researchers at DeepMind hope that their AI system will help neuroscientists model human intelligence. The arcade games the algorithm plays represent simplified versions of the kinds of problems the brain processes.

Although the algorithm is far removed from biological neurons and their practical workings, some of the computations and principles that drive machine learning are drawn from systems that underlie human learning. As the technology and the complexity of the tasks increase, AI systems may offer new insights into how the human brain learns and processes information at the macroscopic level.

The DeepMind algorithm combines two aspects of machine learning that are analogous to components of human learning. First, the algorithm uses deep learning, in which layers of artificial neuron-like units learn from experience. It also uses reinforcement learning, in which the algorithm learns to adopt the most rewarding actions through trial and error. This second mechanism resembles the brain's dopamine reward system.
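The trial-and-error mechanism can be sketched with a toy example. The code below is illustrative only, and everything in it is a made-up stand-in: it uses a simple lookup table (tabular Q-learning) on an invented "corridor" game, whereas DeepMind's actual system learns its values with a deep neural network fed raw game screens.

```python
import random

# Illustrative sketch of reinforcement learning by trial and error.
# The "game" is a corridor of 5 cells; reaching the rightmost cell earns
# a reward of 1. A lookup table Q stands in for the learned value function.

N_STATES = 5
ACTIONS = [-1, +1]                       # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q[state][action] estimates the long-run reward of taking `action` in `state`.
    Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit the best-known action, but
            # occasionally try a random one (the "trial and error" part).
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                best = max(Q[s].values())
                a = rng.choice([act for act in ACTIONS if Q[s][act] == best])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward the observed reward
            # plus the discounted value of the best action from the next state.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2].values()) - Q[s][a])
            s = s2
    return Q

if __name__ == "__main__":
    Q = train()
    # After training, the greedy policy moves right from every non-goal state.
    print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
```

In this toy setting the table settles within a few hundred episodes of trial and error; replacing the table with a deep neural network, and the corridor with pixel frames from an arcade game, is roughly the combination described above.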

Given this interrelationship between technology and biology, further neuroscience research will provide the basis for the next steps in improving this and other algorithms. For instance, although DeepMind's algorithm can learn to master various games, it has to start from scratch with each one. Researchers hope to incorporate something akin to brain-based memory so that aspects of learning can transfer from game to game. They also hope to program the algorithm to consider the long-term consequences of its in-game actions.

While neuroscience and game theory help drive the development of these AI systems, companies like Google hope to use them for business more than for gaming. Similar systems are already in use for photograph classification, and more advanced ones could help Google improve translation, ad placement, and news headline arrangement.


