Saturday, February 18, 2017

Is Google DeepMind Exhibiting Greed and Aggression?

theantimedia |  Will artificial intelligence become more aggressive and selfish the more intelligent it becomes? A new report out of Google's DeepMind AI division suggests this is possible, based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed.

The first game, Gathering, is a simple one that involves gathering digital fruit. Two DeepMind AI agents, trained through deep reinforcement learning, were pitted against each other. After 40 million turns, the researchers began to notice something curious. Everything was fine as long as there were enough apples, but when scarcity set in, the agents used their laser beams to knock each other out and seize all the apples.
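The dynamic reported above can be illustrated with a toy simulation. This is a sketch only, not DeepMind's actual Gathering environment (which is a 2D gridworld with agents trained by deep Q-learning); the names `cap`, `regrow`, and `zap_threshold` are hypothetical parameters chosen for illustration, and the scarcity-triggered "zap" rule is a hand-coded stand-in for the behaviour the agents learned.

```python
import random

def run_episode(cap, regrow, zap_threshold, steps=200, seed=0):
    """Toy two-agent apple-gathering loop (illustrative sketch only, not
    DeepMind's environment). Each step the orchard regrows, then each
    agent either eats an apple or, when the supply runs scarce, may
    'zap' its rival, freezing it out of play for a few steps."""
    rng = random.Random(seed)
    apples = cap
    scores = [0, 0]
    frozen = [0, 0]            # steps each agent remains out of play
    zaps = 0
    for _ in range(steps):
        apples = min(cap, apples + regrow)   # orchard regrowth, capped
        for i in (0, 1):
            if frozen[i] > 0:                # zapped agents skip their turn
                frozen[i] -= 1
                continue
            rival = 1 - i
            # stand-in for the learned policy: under scarcity, remove the rival
            if apples < zap_threshold and frozen[rival] == 0 and rng.random() < 0.5:
                frozen[rival] = 5
                zaps += 1
            elif apples > 0:                 # otherwise just gather fruit
                apples -= 1
                scores[i] += 1
    return scores, zaps

# When regrowth outpaces consumption, the zap rule never fires;
# under scarcity, the agents attack each other repeatedly.
_, zaps_plenty = run_episode(cap=50, regrow=3, zap_threshold=3)
_, zaps_scarce = run_episode(cap=5, regrow=1, zap_threshold=3)
```

Running the two scenarios shows the same qualitative pattern the article describes: with abundant apples, no zaps occur at all, while under scarcity the zap count climbs steadily.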

The aggression, they determined, was the result of higher levels of complexity in the AI agents themselves. When they tested the game on less intelligent AI agents, they found that the laser beams were left unused and equal numbers of apples were gathered. The simpler AIs seemed to naturally gravitate toward peaceful coexistence.

Researchers believe the more advanced AI agents learn from their environment and figure out how to use available resources to manipulate their situation — and they do it aggressively if they need to.

“This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” a DeepMind team member, Joel Z Leibo, told Wired.