Released in 1999, this game is one of the most respected FPS titles and has been hugely influential on the genre.
It is a three-dimensional shooter in which players must team up to accomplish various objectives (eliminating the opposing team, capturing a flag and bringing it back to base, etc.).
Quake III Arena also grew considerably thanks to its competitive scene, now marginal, which delivered very high-level play for many years.
Unique and complex challenges
This type of game forces artificial intelligence to master concepts that are difficult for machines, such as spatial orientation, teamwork, and adaptation to unforeseen situations.
For its experiment, DeepMind had its "FTW agents" (as it calls its algorithms) compete in the capture-the-flag game mode, in arenas randomized before each match.
The algorithms were also programmed to rely on their "sense of sight" to orient themselves and identify other players and flags, rather than having access to everyone's exact coordinates at all times.
DeepMind did not tell its FTW agents what to do; instead, it used reinforcement learning to teach them how the game works.
This type of machine learning rewards an autonomous algorithm when it adopts the desired behaviors. Here, the FTW agents were rewarded for winning a match, forcing them to figure out the rules of the game and learn the best strategies on their own.
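To give a sense of how a win-only reward can drive learning, here is a minimal sketch using tabular Q-learning on a toy task. This is not DeepMind's actual FTW architecture (which trains deep networks with population-based methods); it is a generic reinforcement-learning loop, and all names and parameters below are illustrative. The agent walks a short corridor and receives a reward of 1 only when it reaches the "flag" at the far end, mirroring the sparse win-only signal described in the article.

```python
import random

# Toy setup (hypothetical): a 1-D corridor of N cells; the "flag"
# sits at cell N-1. The only reward is +1 for reaching it.
N = 6
ACTIONS = (-1, +1)  # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma = 0.5, 0.9  # learning rate and discount factor
random.seed(0)

for episode in range(500):
    s = 0
    for _ in range(50):
        # Random behavior policy: Q-learning is off-policy, so the
        # agent can learn the greedy policy from random exploration.
        a = random.choice(ACTIONS)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0  # reward only for "winning"
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update toward the reward signal.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if r == 1.0:
            break  # episode won

# The learned greedy policy should head straight for the flag,
# i.e. choose +1 (move right) in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)
```

The agent is never told "go right"; the only feedback is the reward for winning, yet the greedy policy extracted from Q converges to always moving toward the flag. The same principle, scaled up enormously, underlies the FTW agents' training.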
Humans Cannot Win
After training its algorithms over 450,000 matches, DeepMind invited 40 people to a tournament where they played alongside or against the FTW agents. The conclusion of the experiment: the bots consistently outperformed the humans.
Moreover, participants reported that the algorithms collaborated better than humans did, a significant advantage in this team-based game.
With this research, DeepMind aims to better understand how multiple independent agents can work together to achieve a common goal.
"Billions of people live on our planet, each with their own goals and actions, yet we still manage to come together through teams, organizations, and companies in impressive displays of collective intelligence," the researchers write. "We call this multi-agent learning: many individual agents must act independently while learning to interact and cooperate with other agents."
DeepMind, a subsidiary of Alphabet (the parent company of Google), is best known for its artificial intelligence AlphaGo, which defeated the world Go champion in 2016. A later version of the system, called AlphaZero, is also considered the world's best at Go, chess, and shogi (Japanese chess).
DeepMind's breakthrough comes a week after another research laboratory, OpenAI, announced that its algorithms had reached the level of professional players in the multiplayer game Dota 2.