DeepMind AI can play Quake III Arena capture the flag better than a human




DeepMind, part of the Alphabet group since its acquisition by Google in 2014, focuses on artificial intelligence. On Tuesday, the company announced that it had trained AI programs to team up effectively, at a human level, with other AI programs as well as with humans in a video game. Company researchers taught an AI agent to play a customised version of Quake III Arena like a human. Interestingly, the trained AI is now better than most human players at the game, even when teamed with a human. The trained AI agents were found to have a higher win rate (measured by Elo rating) and to be more collaborative than humans. Research like this expands what is possible in the video game space, so that NPCs, or non-player characters, can perform better and cooperate more effectively.
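The Elo rating mentioned above comes from chess: a player's rating rises or falls after each game based on how surprising the result was. A minimal sketch of the standard Elo update (illustrative only, not DeepMind's actual evaluation code):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings after one game.

    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    k controls how quickly ratings move.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# A 1600-rated agent beats a 1500-rated human: the agent gains a few
# points, the human loses the same amount (ratings are zero-sum here).
a, b = update_elo(1600.0, 1500.0, 1.0)
```

Under this scheme, "better than most human players" means the agents' ratings climbed well above the human players' ratings over many games.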

According to the research results and experiments shared by DeepMind, several AI systems were trained to play 'Capture the Flag' in Quake III Arena, a first-person multiplayer shooter. An AI agent based on the For the Win (FTW) architecture played about 450,000 rounds of the game before it could finally dominate most human players, and in the process it learned how to work effectively with other machines and with humans. DeepMind calls this kind of teamwork training multi-agent learning.

Mastering the strategies, tactical understanding, and team play involved in multiplayer video games is an important strand of AI research. In its blog post, DeepMind says, "We train agents that learn and act as individuals, but which must be able to play in teams with and against any other agents, artificial or human." It adds, "From a multi-agent perspective, CTF requires players to successfully cooperate with their teammates and compete with the opposing team, while remaining robust to any playing style they might encounter."

According to DeepMind, Quake III Arena was a natural testbed: the game laid the foundations for many first-person video games and has drawn a long-standing competitive e-sports scene.

Such experiments, says DeepMind, rest on three general ideas for reinforcement learning. First, instead of training a single agent, the researchers trained a population of agents. Second, each agent learns its own internal reward signal, which lets it generate its own internal goals, such as capturing a flag. Finally, the agents operate at two timescales, fast and slow, which improves their ability to use memory and to generate consistent sequences of actions.
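The first two ideas can be sketched as a toy population-based training loop: a population of agents, each carrying its own internal reward weights over game events, where losers of matches copy and mutate the winners' weights. Everything here is illustrative (the class, event names, and match rule are assumptions), not DeepMind's actual training code:

```python
import random

class Agent:
    def __init__(self):
        # Each agent weights game events differently, defining its own
        # internal goals (e.g. valuing flag pickups vs. flag captures).
        self.reward_weights = {
            "flag_pickup": random.uniform(0.0, 1.0),
            "flag_capture": random.uniform(0.0, 1.0),
        }

    def internal_reward(self, events):
        """Shaped reward: sparse game events weighted by internal goals."""
        return sum(self.reward_weights[e] for e in events)

def evolve(population, n_generations=50):
    """Toy population-based training: play matches, copy and mutate winners."""
    for _ in range(n_generations):
        a, b = random.sample(population, 2)
        # Stand-in match outcome: the agent that values captures more
        # "wins" (a real system would play actual games here).
        if a.reward_weights["flag_capture"] >= b.reward_weights["flag_capture"]:
            winner, loser = a, b
        else:
            winner, loser = b, a
        # The loser inherits the winner's weights with a small mutation,
        # keeping each weight clipped to [0, 1].
        loser.reward_weights = {
            k: min(1.0, max(0.0, v + random.gauss(0.0, 0.05)))
            for k, v in winner.reward_weights.items()
        }
    return population

pop = evolve([Agent() for _ in range(8)])
```

The third idea, fast and slow timescales, is omitted here; in the published system it corresponds to a hierarchical recurrent policy rather than anything this sketch models.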

In a tournament with and against 40 human players, FTW teams won against exclusively human teams. Interestingly, they also had a very good chance of winning against mixed human-machine teams. A survey of the human participants revealed that the FTW agent was more collaborative than their human teammates. What is even more interesting is that the machines were not given the rules of the game beforehand; over time, FTW learned most of the basic strategies "at a very high level."

As before, each new game offers new challenges for AI to solve. As a reminder, last year DeepMind made headlines with AlphaGo Zero, an AI system that defeated the Go world champion. Recently, OpenAI announced that it will field a team against pro Dota 2 players at The International, one of the most popular video game tournaments in the world. In addition, last year DeepMind released a set of tools with Blizzard Entertainment to speed up AI research using the real-time strategy game StarCraft.

