Artificial intelligence learns when taught
Doom, a computer game popular in the 1990s, is experiencing a second coming: the international VizDoom competition set out to find out whether artificial intelligence (AI) can play the game based on visual input alone. Anssi Kanervisto, an MSc student from the UEF School of Computing, won third place with his aptly named agent TUHO, Finnish for doom.
The competition was tough: first prize went to a team of IT professionals from Intel Labs, and second place was claimed by a team of postgraduate students from the renowned Carnegie Mellon University.
“The idea of the competition is to use artificial intelligence in a video game – in other words to code a program that can play the game and learn new things,” says Anssi Kanervisto, better known in gaming circles as “Miffyli”.
“We had to create a program that could play the game based on visual input alone, without any pre-existing background information. That turned out to be surprisingly complicated. Each player was given the same visual input, but no maps were provided. The agent needed to be able to play the game under all circumstances.”
“Earlier, games were viewed two-dimensionally from a bird’s-eye perspective, but now the 3D perspective was added to the requirements. Normally, AI uses data directly from the game, not visual images as we humans do. This 2D image of a 3D world is the new challenge in VizDoom.”
According to Kanervisto’s supervisor, Senior Researcher Ville Hautamäki, the idea was to create an AI agent that seeks to learn the game better than any human.
“Earlier, the agent was programmed to turn in the direction of the enemy, but now we’ve taught it to do the same upon seeing the enemy. In other words, we are talking about machine learning and pattern recognition,” Hautamäki explains.
“AI agents used to be programmed to utilise the game’s internal data structures, meaning that they had access to more information than humans playing the game. Moreover, these agents weren’t really taught anything, as programmers would just come up with a set of rules for the agent to follow. Now, however, we’ve taught the agent by making it play the game in a similar way to humans.”
Kanervisto taught his TUHO agent to run in the field as fast as possible, while avoiding bumping into walls, and to shoot enemies on sight.
“The AI is coded in the Python programming language. A neural network, built on statistics and algorithms, teaches the agent to solve problems and learn new tricks,” Kanervisto explains.
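The idea Kanervisto describes can be sketched in a few lines of Python. The sketch below is illustrative only (the network size, action names and weights are invented assumptions, not the competition code): a tiny neural network maps a downscaled game frame, pixels alone, to one of a handful of actions. Training, which is omitted here, would adjust the weights so that actions leading to high reward score higher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action set -- not TUHO's actual actions.
ACTIONS = ["move_forward", "turn_left", "turn_right", "attack"]

# Randomly initialised weights for a toy two-layer network.
# Learning would tune these; here they stay random.
W1 = rng.normal(0.0, 0.1, (32, 12 * 16))       # hidden layer
W2 = rng.normal(0.0, 0.1, (len(ACTIONS), 32))  # output layer

def choose_action(frame: np.ndarray) -> str:
    """Pick an action from a 12x16 grayscale frame (visual input only)."""
    x = frame.reshape(-1) / 255.0        # flatten and scale the pixels
    h = np.maximum(0.0, W1 @ x)          # ReLU hidden activations
    scores = W2 @ h                      # one score per action
    return ACTIONS[int(np.argmax(scores))]

# An untrained network still produces *some* action for any frame.
frame = rng.integers(0, 256, (12, 16)).astype(np.float64)
print(choose_action(frame))
```

The point of the sketch is that nothing game-specific is hard-coded: the same pixels-in, action-out loop works whatever map the agent is dropped into, which is exactly what the competition demanded.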
Carnegie Mellon University's team used a neural network not only to teach their agent how to navigate uncharted terrains, but also to shoot things. The system developed by Intel Labs, on the other hand, is very different. It predicts the future and chooses actions that in its mind lead to a “good” future.
“The idea is the same as in my TUHO. They were just smarter about the execution, taught the agent better and got all kinds of fine-tunings right.”
Kanervisto says that developing an agent for the game is a small step in the direction of human-like artificial intelligence.
“A bigger goal is to develop an agent that can outperform us humans. At this point, however, agents’ activities don’t look very intelligent on the outside. We humans have a head start of millions of years, thanks to evolution,” Kanervisto says.
“In robotics, devices can be fully remote controlled. The idea of the VizDoom competition, however, was to fully replace remote control. If we want a robotic car to drive itself in environments that don’t have a clear structure as roads do, for example, we need artificial intelligence that is able to learn new things,” Hautamäki says.
According to Kanervisto, sensors that analyse the environment could utilise images and raw data in the future, and AI could also learn from this data.
“In reality though, we could be witnessing cleaning robots that hide dust under the carpet simply because the carpet is closer than the dust bin.”
Hautamäki and Kanervisto say that artificial intelligence still has a long way to go in video games, let alone in real life. However, AI can play several games simultaneously and it can make days’ worth of progress in a short time.
“It would make sense to test an independently moving shopping robot in a game rather than to put it out in the real world where it could easily get broken,” Kanervisto says.
One of the next big steps in the development of artificial intelligence is natural conversation between humans and AI. This development rests on long-term research in speech technology, dating back nearly 15 years first at the University of Joensuu and later at the University of Eastern Finland.
“Google’s DeepMind is already developing these further with the help of video games,” Kanervisto says.
Scientists believe that artificial intelligence will find significant new uses in the near future.
Could AI make better diagnoses than a real doctor, for example? After all, artificial intelligence could access large amounts of data and come up with the statistically most probable diagnosis.
“Artificial intelligence could be a handy assistant at the very least,” Hautamäki says.
“Artificial intelligence will both create and take jobs away. One change could be the extinction of human-manned help desks. Artificial intelligence could also be used in environmental monitoring and space exploration,” Kanervisto envisions.
“Moreover, a cleaning robot can learn that it’s not a good idea to hide dust under the carpet. Having said that, the robot won’t understand why it is a bad idea, and it’s not going to understand it any time soon.”
Scientists agree that we will be witnessing robots operating in our society during this lifetime. However, they may not develop in the direction of having a consciousness very quickly, as many ethical issues need to be resolved first.
Compared to humans, artificial intelligence is low maintenance in that it doesn’t need any rewards to strengthen its motivation or commitment.
In fact, artificial intelligence can be rewarded with numbers: the bigger the number, the more satisfied the AI.
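This numeric "satisfaction" is the reward signal of reinforcement learning, and it can be sketched concretely. The weights and behaviours below are invented for illustration (the article mentions shooting enemies and avoiding walls, but not these exact values): the agent's only motivation is to make this one number as big as possible.

```python
def reward(enemies_hit: int, wall_bumps: int, distance_moved: float) -> float:
    """Score one step of play: a bigger number means a 'happier' agent.

    Coefficients are illustrative assumptions, not TUHO's actual values.
    """
    return 10.0 * enemies_hit - 1.0 * wall_bumps + 0.1 * distance_moved

# A step where the agent hits an enemy scores higher than one where it
# only bumps into walls, so learning pushes behaviour toward the former.
good_step = reward(enemies_hit=1, wall_bumps=0, distance_moved=5.0)
bad_step = reward(enemies_hit=0, wall_bumps=2, distance_moved=0.0)
print(good_step, bad_step)
```

Under this scoring, shooting an enemy while moving is worth 10.5 while bumping into walls twice is worth -2.0, so an agent maximising the number learns the behaviours Kanervisto describes without being told a single explicit rule.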