Artificial intelligence can now beat humans at all 57 games in the Atari57 benchmark. Alphabet subsidiary DeepMind has revealed that its Agent57 can outplay humans on titles from the classic 1977 console. This is pretty big news, but not entirely surprising. A.I. has been heading this way for quite a while, especially since DeepMind's AlphaGo program defeated Lee Sedol, one of the best Go players of the past decade, in 2016. According to DeepMind, "Agent57 combines an algorithm for efficient exploration with a meta-controller that adapts the exploration and long vs. short-term behavior of the agent."
In other words, Agent57 uses a form of machine learning called deep reinforcement learning, which allows it to learn from mistakes and keep improving over time. There's footage of Agent57 playing the Atari game Alien, and it's pretty remarkable to watch the computer nearly dominate the video game throughout the 30-minute video. DeepMind released a research paper explaining why video games are such a good way to test A.I. You can read a portion of it below.
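To make the "learn from mistakes" idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning methods. The five-cell corridor environment, reward scheme, and all parameter values below are invented for illustration; Agent57's deep reinforcement learning is vastly more sophisticated, but the core trial-and-error loop is the same in spirit.

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn to walk right along a 5-cell corridor; the only reward is at the end."""
    rng = random.Random(seed)
    n_states, moves = 5, [-1, +1]                  # action 0 = left, action 1 = right
    q = [[0.0, 0.0] for _ in range(n_states)]      # value estimate per (state, action)
    for _ in range(episodes):
        state = 0
        while state < n_states - 1:
            # Explore occasionally; otherwise exploit the best-known action
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            nxt = min(max(state + moves[action], 0), n_states - 1)
            reward = 1.0 if nxt == n_states - 1 else 0.0
            # Update the estimate from the observed outcome: this is the
            # "learning from mistakes" step — bad moves earn no reward,
            # so their value estimates stay low over time.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
```

After training, the learned table prefers moving right in every non-terminal state, even though the agent was never told the rules; it discovered them purely by trying actions and observing scores, which is exactly the property that makes scored video games such a convenient testbed.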
"Games are an excellent testing ground for building adaptive algorithms: they provide a rich suite of tasks which players must develop sophisticated behavioral strategies to master, but they also provide an easy progress metric - game score - to optimize against. The ultimate goal is not to develop systems that excel at games, but rather to use games as a stepping stone for developing systems that learn to excel at a broad set of challenges."
DeepMind uses the same kind of machine learning for Agent57 that the aforementioned AlphaGo utilized to master Go. While other A.I. systems have been tackling Atari games for a while, Agent57 is outdoing them all and is well on its way to beating humans at all of the games without any trouble. Montezuma's Revenge, Pitfall, Solaris, and Skiing are games that A.I. systems have historically struggled with because of the long-term strategy they demand.
While most A.I. systems have struggled with the more strategy-based games, DeepMind's Agent57 was able to keep learning from its mistakes. According to the research paper, the longer the system was able to run, the better the results, much like human learning. There are some drawbacks to this type of learning, though, which the research paper goes over. An excerpt continues below.
"With Agent57, we have succeeded in building a more generally intelligent agent that has above-human performance on all tasks in the Atari57 benchmark. Agent57 was able to scale with increasing amounts of computation: the longer it trained, the higher its score got. While this enabled Agent57 to achieve strong general performance, it takes a lot of computation and time; the data efficiency can certainly be improved."
Computation and time will more than likely be what DeepMind works on next. Whatever the case may be, this is a pretty big breakthrough that will only improve with time, which could be a little scary at the same time. Do we really need these computers to be thinking for themselves and beating humans at every single Atari game, including E.T.? You can check out Agent57 dominating the Alien game below, thanks to the DeepMind YouTube channel.