One could argue that Waymo, Alphabet's autonomous-driving subsidiary, has the safest self-driving cars of the moment; it has certainly driven the farthest. But in recent years, serious accidents involving early Uber and Tesla systems have eroded public confidence in the emerging technology. Rebuilding that confidence will take more than racking up kilometers on real roads.
So today Waymo announced not only that its vehicles have driven more than 10 million kilometers on real roads since 2009, but also that its software now covers the same distance every 24 hours in a simulated version of the real world, the equivalent of 25,000 cars driving around the clock. In total, Waymo has logged more than 6 billion virtual miles.
This virtual test track is central to Waymo's effort to show that its cars are safe, says Dmitri Dolgov, the company's CTO. It lets engineers test the latest software updates against a wide range of new scenarios, including situations the cars have never encountered. It also makes it possible to test scenarios that would be too dangerous to stage in reality, such as other vehicles driving recklessly at high speed.
"Suppose you are testing a scenario in which a jaywalker jumps out in front of a vehicle," says Dolgov. "At some point it becomes dangerous to test that in the real world. That's where the simulator is incredibly powerful."
Unlike human drivers, autonomous cars rely on training data rather than actual knowledge of the world. They can therefore easily be confused by unfamiliar scenarios.
But it is not easy to test and validate complex machine-learning systems whose behavior can be difficult to predict (see "The Dark Secret at the Heart of Artificial Intelligence"). Letting the cars collect large amounts of data in a virtual world provides the training material these systems need.
"The question is whether simulation-based tests actually contain all the difficult cases that make driving hard," says Ramanarayan Vasudevan, an assistant professor at the University of Michigan who specializes in autonomous-vehicle simulation.
To surface as many of these rare cases as possible, the Waymo team uses an approach called "fuzzing," a term borrowed from computer security. Fuzzing involves running the same simulation over and over while adding random variations each time, to see whether any of those perturbations causes an accident or exposes brittle behavior. Waymo has also developed software to check that its vehicles do not deviate too far from comfortable driving behavior in simulation, for example by braking too hard.
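The idea described above can be sketched in a few lines. This is a toy illustration, not Waymo's actual software: a hypothetical base scenario is replayed many times with random perturbations, and each run is flagged if it ends in a collision or requires braking harder than a comfort threshold. All scenario names, parameters, and numbers are invented for the example.

```python
import random

BASE_SCENARIO = {
    "ego_speed_mps": 15.0,     # the autonomous car's speed
    "pedestrian_gap_m": 40.0,  # distance at which a jaywalker appears
    "road_friction": 0.8,      # dry asphalt
}

COMFORT_DECEL_MPS2 = 3.0  # braking harder than this counts as "uncomfortable"
MAX_DECEL_MPS2 = 8.0      # physical braking limit at friction 1.0

def perturb(scenario, rng):
    """Return a copy of the scenario with small random variations."""
    return {
        "ego_speed_mps": scenario["ego_speed_mps"] * rng.uniform(0.8, 1.3),
        "pedestrian_gap_m": scenario["pedestrian_gap_m"] * rng.uniform(0.5, 1.2),
        "road_friction": scenario["road_friction"] * rng.uniform(0.7, 1.0),
    }

def simulate(scenario):
    """Toy physics: what deceleration is needed to stop within the gap?"""
    v = scenario["ego_speed_mps"]
    gap = scenario["pedestrian_gap_m"]
    needed_decel = v * v / (2 * gap)  # from v^2 = 2 * a * d
    max_decel = MAX_DECEL_MPS2 * scenario["road_friction"]
    if needed_decel > max_decel:
        return "collision"
    if needed_decel > COMFORT_DECEL_MPS2:
        return "uncomfortable"
    return "ok"

def fuzz(base, runs=1000, seed=42):
    """Run many perturbed copies of one scenario and tally the outcomes."""
    rng = random.Random(seed)
    results = {"ok": 0, "uncomfortable": 0, "collision": 0}
    for _ in range(runs):
        results[simulate(perturb(base, rng))] += 1
    return results

print(fuzz(BASE_SCENARIO))
```

A scenario that looks safe in its nominal form can still produce collisions under perturbation, which is exactly the kind of rare case fuzzing is meant to expose.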
In addition to analyzing real and simulated driving data, Waymo tries to trip up its cars by dreaming up challenging driving scenarios. On a test track at Castle Air Force Base in central California, testers throw all sorts of things at the cars to confuse them: people crossing the road in Halloween costumes, objects falling from the backs of trucks. Its engineers have also cut power to the main control system to make sure the fallback system takes over properly.
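A fallback test like the power-cut described above typically relies on a secondary controller watching heartbeats from the primary and taking over when they stop. The sketch below is a generic illustration of that pattern, with invented names and timings; it is not Waymo's actual design.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.2  # missing heartbeats for this long triggers failover

class FallbackMonitor:
    """Secondary controller that engages when the primary stops responding."""

    def __init__(self, now=time.monotonic):
        self._now = now                  # injectable clock, eases testing
        self._last_heartbeat = now()
        self.active = False              # True once the fallback has taken over

    def heartbeat(self):
        """Called by the primary controller on every control cycle."""
        self._last_heartbeat = self._now()

    def check(self):
        """Called periodically by the secondary controller."""
        if self._now() - self._last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.active = True           # primary presumed dead: engage safe stop
        return self.active

# Simulate a power cut with a fake clock: heartbeats simply stop arriving.
t = [0.0]
monitor = FallbackMonitor(now=lambda: t[0])
monitor.heartbeat()
t[0] = 0.1
print(monitor.check())  # primary still alive
t[0] = 0.5
print(monitor.check())  # no heartbeat for 0.5 s: fallback engages
```

Injecting the clock makes the failover path testable without actually cutting power, which mirrors how such behavior can be exercised in simulation before being tried on a track.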
Waymo is making progress. Last October it became the first company to remove safety drivers from some of its vehicles. About 400 people in Phoenix, Arizona, now use these truly driverless robo-taxis for their daily commutes.
Phoenix is a fairly easy environment for autonomous vehicles. Deploying in less temperate and more chaotic places, such as downtown Boston in a snowstorm, would be a significant advance for the technology.
"I would say that Waymo's deployment in Phoenix looks more like Sputnik, whereas driving autonomously in Michigan or San Francisco would be closer to an Apollo mission," says Vasudevan.
The situation facing Waymo and other carmakers is a stark reminder of the big gap that still exists between real intelligence and artificial intelligence. Without billions of kilometers of real and virtual testing, or a deeper kind of intelligence, autonomous cars will still stumble when they encounter something unexpected. And companies like Waymo cannot afford that kind of uncertainty.