Google's DeepMind goes undercover to take on Starcraft II players




Image copyright: Blizzard

Image caption: AlphaStar can play as any one of Starcraft II's three races

European players are invited to take on a bot developed by some of the world's leading artificial intelligence researchers.

But there is a catch: the players will not be told when they are facing the bot.

The tests are being run by DeepMind, the London-based artificial intelligence company that previously created a program that beat the world's best Go players.

In this case, the challenge is the science fiction video game Starcraft II.

This task is considered more complex because players get only a partial view of what their opponent is doing, unlike the Chinese board game Go, in which all the pieces are in plain view.

In addition, both Starcraft players move their armies simultaneously rather than taking turns.

DeepMind, which is owned by Google's parent company Alphabet, said its AlphaStar bot was playing anonymously in order to get as close as possible to normal match conditions: if people knew they were playing against a computer, they might play differently.

But players will only be matched against the algorithm-controlled system if they have first opted in to the experiment.

Image copyright: Blizzard

Image caption: DeepMind has not said when or how often it will deploy its AlphaStar agent against human players

If they lose, they risk losing Matchmaking Rating (MMR) points, which would lower their ranking relative to other players and hurt their chances of being promoted to the top leagues.

One of the UK's leading players said the Starcraft community was very interested in how AlphaStar would perform.

"It's a game of hidden information and decision-making with very limited knowledge," said Raza Sekha of Kent.

"People are very curious about whether DeepMind will innovate and propose new strategic ideas.

"It would be a great success, but I do not think many people expect it to happen."

AlphaStar's predecessors, however, developed creative strategies in chess, Go and Shogi, which in turn prompted some of the world's best human players to change their tactics.

Reinforcement learning

This is not the first time that artificial intelligence researchers have sought to advance the field via video games.

Last year, San Francisco-based OpenAI announced a breakthrough when it created a "curious" agent that achieved high scores in Montezuma's Revenge.

Image copyright: OpenAI

Image caption: Despite Montezuma's Revenge being an old video game, researchers long struggled to teach AI agents to explore it

A range of machine learning experiments has also been carried out within Minecraft, thanks to Microsoft's development of a special version of its block-building game.

And DeepMind itself made its name developing agents that learned to play dozens of Atari games, including Breakout and Space Invaders. More recently, it has created software that plays alongside human teammates in Quake III Arena.

These virtual, ready-made environments are one way to carry out a process called reinforcement learning, in which agents discover for themselves how to improve their performance through trial and error, receiving "rewards" for success rather than being told what to do.
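The idea can be sketched at toy scale. Below is a minimal tabular Q-learning loop (one standard reinforcement learning algorithm, not anything specific to AlphaStar) in which an agent in a five-cell corridor learns to walk right to a goal. It is only ever told "reward = 1" on success, never which action to take; the environment, hyperparameters and reward scheme are illustrative assumptions.

```python
import random

N_STATES = 5            # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]      # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise exploit the current estimate.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward
            # reward + discounted value of the best next action.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # The learned greedy policy should step right from every non-goal cell.
    policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
    print(policy)  # expected: [1, 1, 1, 1]
```

The agent starts with no idea which way the goal lies; the reward only arrives at the end, and the discounted update gradually propagates that signal back along the corridor.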

In some cases, agents learn from scratch. AlphaStar, however, was first trained to imitate human play from replays of previous matches, before being pitted against other versions of itself to improve its performance further.
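That two-phase recipe can be sketched at toy scale on the take-away game Nim (take 1 or 2 stones; whoever takes the last stone wins). Phase one imitates a handful of "human replays"; phase two plays the policy against a copy of itself and reinforces every move the eventual winner made. The game, the hand-written replay data and the update rule are all illustrative assumptions, not DeepMind's published method.

```python
import random

START = 10
rng = random.Random(0)

# weights[(stones_left, move)] -> preference score for taking `move` stones
weights = {(s, m): 1.0 for s in range(1, START + 1) for m in (1, 2) if m <= s}

def pick(stones, greedy=False):
    """Sample a move in proportion to its weight (or take the best greedily)."""
    moves = [m for m in (1, 2) if m <= stones]
    if greedy:
        return max(moves, key=lambda m: weights[(stones, m)])
    total = sum(weights[(stones, m)] for m in moves)
    r = rng.random() * total
    for m in moves:
        r -= weights[(stones, m)]
        if r <= 0:
            return m
    return moves[-1]

# Phase 1: imitation -- boost moves seen in a few hand-written "human" games.
human_replays = [(4, 1), (7, 1), (5, 2), (8, 2), (2, 2), (3, 2)]
for stones, move in human_replays:
    weights[(stones, move)] += 2.0

# Phase 2: self-play -- two copies of the policy play each other,
# and every move made by the eventual winner gets reinforced.
for _ in range(5000):
    stones, history, player = START, {0: [], 1: []}, 0
    while stones > 0:
        m = pick(stones)
        history[player].append((stones, m))
        stones -= m
        winner = player          # whoever moved last took the last stone
        player = 1 - player
    for s, m in history[winner]:
        weights[(s, m)] += 0.5

if __name__ == "__main__":
    # With two stones left, taking both wins immediately.
    print(pick(2, greedy=True))  # prints 2
```

The imitation phase gives the policy a sensible starting point; self-play then sharpens it, because winning lines keep getting reinforced while losing ones do not.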

Restricting the AI

The progress of AlphaStar has not been without controversy.

Some players felt it had an unfair advantage in previous matches because it could view the whole map at any one time, taking in more detail than a human could.

"As a human being, one of the most difficult parts of the game is multitasking," said Sekha.

"It's really hard to divide your attention between two places.

Image copyright: DeepMind

Image caption: DeepMind intends to publish replays of its games against humans when it publishes its research

"So an AI has a crucial advantage if it can see everything at once, as that lets it attack and defend almost simultaneously, whereas a human has to choose to do one or the other."

To address this, the agent has been modified to use the in-game camera more like a human does. It must now focus on one section of the map to determine what happens there, and can only move units to locations it can see.

DeepMind has also capped the number of actions AlphaStar can take per minute, in response to other criticisms.
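One simple way such a cap could work is a sliding-window limiter that refuses any action once N actions have been issued in the last 60 seconds. The cap value and mechanism below are illustrative assumptions; DeepMind has not published how its limit is implemented.

```python
from collections import deque

class ApmLimiter:
    """Sliding-window actions-per-minute limiter (illustrative sketch)."""

    def __init__(self, max_actions_per_minute):
        self.cap = max_actions_per_minute
        self.times = deque()  # timestamps (seconds) of recent actions

    def try_act(self, now):
        """Return True if an action at time `now` is allowed, else False."""
        # Drop timestamps that have fallen out of the 60-second window.
        while self.times and now - self.times[0] >= 60.0:
            self.times.popleft()
        if len(self.times) >= self.cap:
            return False
        self.times.append(now)
        return True

if __name__ == "__main__":
    limiter = ApmLimiter(max_actions_per_minute=3)
    # Three rapid actions pass; the fourth is refused until the window slides.
    print([limiter.try_act(t) for t in (0.0, 0.1, 0.2, 0.3)])  # [True, True, True, False]
    print(limiter.try_act(61.0))  # True -- the early actions have expired
```

A limiter like this forces an agent to spend its action budget carefully, much as a human's physical speed does.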

But Mr. Sekha said that questions remained unanswered.

"If it's possible to switch the camera view very quickly, much faster than a human can, that would still be a little unfair," he said.

"So it will be really interesting to see what steps they have taken to level the playing field, because last time the community felt things were tilted a bit too far in the AI's favour."

DeepMind intends to share more details about the project as part of a scientific research paper, but has not yet determined when it will be released.
