OpenAI's Dota AI beats pro team OG, the first AI to defeat world champions




Professional players, beware: for the very first time, a team of e-sports world champions has been beaten by an artificial intelligence.

In a series of live games between reigning world champion team OG and the five bots of OpenAI Five, the AI won two straight games, taking the best-of-three match. With the equivalent of 45,000 years of Dota 2 practice to its credit, the system looked unstoppable: it steered its strategic decisions skilfully and pressed its advantages with surprisingly good judgment.

This is a milestone the world's biggest AI labs have been racing toward in recent years. Online games give them an arena to showcase their creations' skills in strategic decision-making, coordination, and long-term planning. With powerful new techniques, AI can now do things that were considered nearly impossible less than a decade ago.

Combat strategy games are a great way to showcase what AI can do.

Dota 2 is a multiplayer online battle arena game, a style of strategy game in which players coordinate to achieve strategic goals – destroying or capturing enemy towers, ambushing enemy units, and upgrading and strengthening their own defenses.

In Dota 2, each team has five players, each controlling their own "hero" with unique abilities. It's a complex game, with computer-controlled units on both sides, more than 100 potential heroes with different abilities, computer-controlled neutral units, an in-game economy, and a large map on which players fight to destroy the enemy base (its "Ancient") while protecting their own.

OpenAI Five plays a simplified version of the game, with a limited subset of heroes and without a few features such as summons (where a player creates and controls additional units) and illusions (where a player can create copies of their own hero). The OpenAI researchers I spoke to pointed out that excluding summons and illusions actually helps the humans: controlling the detailed movements of a large number of units is exactly the kind of thing AIs are very good at.

Within the simplified boundaries of the game, OpenAI Five was a staggering triumph. To evaluate the performance of an AI system in a strategy game, you need to know whether it is winning purely on what players call "micro" – the split-second positioning and attack skills where a computer's reflexes are a considerable advantage.

OpenAI Five had good micro, but it also did well in ways that human players, now that they've seen them, might choose to imitate – suggesting it didn't succeed only thanks to superior reflexes. Commentators watching the game, for example, criticized OpenAI Five's eagerness to buy its heroes back into the game immediately when they died, but the tactic was vindicated – perhaps suggesting that the pros should be a little more willing to pay to rejoin the fight.

And OpenAI Five had a deeper strategic understanding of the board than the human commentators. When the commentators said the game looked even, OpenAI Five declared it had a 90 percent chance of winning. (It turns out that soberly announced probability estimates make excellent trash talk, and these statements often rattled its opponents.) To us, the game may have looked open, but to the computer, it was obviously almost over.

Of course, this is still an example of a computer exploiting the skills computers are good at – like making accurate forecasts and tracking a lot of information about the state of the world. But those are skills with much broader applicability than fast reflexes and good attack timing, so it is definitely more impressive to see them demonstrated.

AIs are picking up new capabilities at a breathtaking pace

In 2016, when the brand-new nonprofit OpenAI, co-founded by Elon Musk, announced that it was going to teach a computer to play Dota, it promised something no one had done before. Artificial intelligence systems at the time could do some interesting things: speech recognition was progressing rapidly, AlphaGo had won 4 out of 5 matches against one of the best Go players, and companies were optimistic they could make progress on difficult problems such as autonomous vehicles and translation.

But playing complex strategy games as well as the best professionals was beyond them. OpenAI had to start with heavily simplified versions of the game – 1v1 instead of 5v5, only a handful of heroes available, major elements of the game removed for simplicity.

And the AI still wasn't very good. As recently as last year, it lost its exhibition games at Dota's The International tournament.

But AI has kept advancing at a fast pace, and our understanding of what it can do is constantly shifting. OpenAI's win against a professional Dota 2 team isn't even the first event of its kind this year; in January, competitor DeepMind unveiled a bot that competes with the pros at StarCraft and won its matches 10-1.

Anyone prone to cynicism about these advances still has reasons to be unimpressed. OpenAI Five plays with only 17 of the game's 115 heroes and with some of the game's mechanics restricted. Skeptics of DeepMind's AlphaStar have observed that the computer, even with its actions per minute limited, still wins with micro that a human couldn't match. And OpenAI poured 45,000 years of Dota 2 play into reaching its current skill level – so humans are still much faster learners.

But it's impossible to deny that AIs are doing things that would have been unimaginable just a few years ago.

OpenAI wants to tell us that AI is our ally, not our enemy

Competitive games are an excellent environment to show off what AI can achieve. But there's a downside to showing the world what artificial intelligence can do only through exhibition matches in which it crushes humans – it gives the impression that AI is a steadily advancing enemy. OpenAI argues that, far from it, AI should be thought of as a resource for humans.

To that end, the team invited me to try a demo of a new OpenAI Five feature – one where human players play the game alongside the AI bots, named "Friend 1," "Friend 2," "Friend 3," and "Friend 4." As I awkwardly steered my dragon around the screen – I am very far from a professional Dota player – my teammates rushed to my rescue during ambushes. (The public will be able to try this in a few weeks via OpenAI Arena.)

A little later, during the public cooperative match, the humans were sometimes impressed and sometimes frustrated by their AI allies. As promised, it was a different way to experience the potential of AI.

That's the outcome OpenAI's researchers want. The team hopes that as artificial intelligence becomes more capable, it will be used to aid human decision-making – its probability estimates helping us interpret medical scans, its modeling capabilities helping us understand protein folding in order to develop new drugs.

Some people might wonder whether, if we want AI to be a friendly ally for the betterment of the world, it is wise to teach it to conquer and kill its enemies in war strategy games.

This is not as misguided as it may seem. Reinforcement learning gives these AIs a "reward function" – a picture of which states of the world are rewarded – and they learn, through practice, to maximize it. The AIs don't learn general concepts of "conquering" or "killing," only which actions increase their chances of winning.
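OpenAI Five itself is trained with large-scale reinforcement learning through self-play; as a much smaller illustration of the same idea – an agent that sees only a scalar reward and learns which actions raise it – here is a minimal tabular Q-learning sketch on a made-up toy game. This is not OpenAI's code, and every name and number in it is invented for illustration.

```python
# Minimal sketch of reward-driven learning (NOT OpenAI Five's training code).
# The agent never sees concepts like "conquer" or "kill" -- only a scalar
# reward -- and learns which actions make that reward larger.
import random

# Hypothetical toy game: the agent walks along positions 0..4 and is
# rewarded (+1) only for reaching position 4; every other step costs -0.01.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else -0.01
    done = next_state == GOAL
    return next_state, reward, done

# Tabular Q-values: Q[state][action_index] estimates future reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Nudge the estimate toward reward plus discounted best future value.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][a] += alpha * (target - Q[state][a])
        state = next_state

# The learned greedy policy simply steps toward the reward.
print([max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```

The agent in this sketch never "wants" anything beyond a larger number; the same is true, at vastly greater scale, of the game-playing bots described above.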

The techniques used to train systems like OpenAI Five and AlphaStar are powerful and generalizable, but the reward functions themselves are very specific and won't be serving as inspiration for Skynet. There is really nothing to fear from OpenAI Five, except perhaps what it augurs about the pace of AI progress.

When it comes to AI progress in general, however, there is plenty to give researchers pause. Many experts believe that as artificial intelligence systems become more powerful, we open ourselves up to potentially catastrophic errors. We could design AI systems whose objectives don't exactly reflect what we want, or systems that are vulnerable to outside attackers. If these errors occur with moderately powerful systems, they could cause stock market crashes, grid failures, and costly accidents. If they happen with extremely powerful systems, the effects could be much worse.

These are, in all likelihood, problems we can solve with time. But artificial intelligence is progressing so rapidly that some policy analysts worry we won't have spent enough time on safety and strategic planning by the time powerful systems are deployed.

In an interview last week, Greg Brockman, CTO of OpenAI, compared the ways AI will transform society to the internet – and honestly, he said, that change has arguably come too fast. "You look at recent events and – it would be nice if we had spent more time understanding how it would affect us."

But AI, he notes, will transform the world much more quickly. It has been only eight months since OpenAI Five struggled at Dota's The International tournament. Now it's almost unbeatable.

"It hurts, we are doomed," Olympic Games player Johan Sundstein said after the second defeat. he added"I just hope that they remember how nice and mannered we were once they own the planet."


