Tesla Autonomy Event: Impressive Progress in an Unrealistic Timeline



Getty / Aurich Lawson / Tesla

There is an old joke in the world of software engineering, sometimes attributed to Bell Labs' Tom Cargill: "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."

On Monday, Tesla hosted a major event to showcase the company's impressive progress in self-driving technology. The company introduced a new neural network computer that appears competitive with chips from industry leader Nvidia. And Tesla explained how it is leveraging its vast fleet of customer-owned vehicles to collect data that helps the company train its neural networks.

The big message from Elon Musk was that Tesla is on the verge of achieving the holy grail of fully self-driving cars. Musk predicted that by the end of the year, Tesla's cars will be able to navigate both surface streets and highways, allowing them to drive between any two points without human intervention.

At that point, the cars will be "feature complete," in Musk's terminology, but they will still need a human driver to monitor the vehicle and intervene if something goes wrong. But Musk predicted it will take only about six more months for the software to become reliable enough to no longer require human supervision. Musk expects Tesla to have thousands of vehicles providing driverless rides to customers in an Uber-style taxi service.

In other words, Musk seems to believe that once Tesla's cars are "feature complete" later this year, they will be 90 percent of the way to full self-driving. The big question is whether that is true in the ordinary sense, or only in the Cargill sense.

Two stages of self-driving car development

Waymo engineers represent situations on the road using complex diagrams like this one.

You can think of self-driving car development as happening in two stages. The first stage is to develop a static understanding of the world. Where is the road? Where are the other cars? Are there pedestrians or cyclists nearby? What are the traffic laws in this area?

Once the software has mastered this part of the self-driving task, it should be able to drive flawlessly between any two points on empty roads, and to avoid hitting objects even on congested ones. This is the level of capability that Musk has dubbed "feature complete." Waymo reached this level of autonomy back in 2015, while Tesla hopes to get there later this year.

However, building a car that can be used as a driverless taxi requires a second stage of development, one focused on mastering complex interactions with other drivers, pedestrians, and other road users. Without that mastery, a self-driving car will frequently be paralyzed by indecision. It will struggle to merge onto crowded freeways, navigate roundabouts, and make unprotected left turns. It might find it impossible to make progress in areas with heavy foot traffic, for fear that someone will jump out in front of the car. It will have no idea what to do around construction sites or busy parking lots.

A car like that might eventually get you to your destination, but the ride could be so slow and erratic that no one would want to use it. And its clumsy driving could infuriate other road users and sour the public on self-driving technology.

In this second stage, a company must also handle a "long tail" of increasingly unusual situations: a car driving the wrong way down a one-way street; a truck losing traction on an icy road and sliding back toward your vehicle; a forest fire, flood, or tornado making a road impassable. Some events are rare enough that a company could test its software for years without ever encountering them.

Waymo has spent the last three years working on this second phase of self-driving development. Elon Musk, by contrast, seems to regard it as trivial. He appears to believe that once Tesla's cars can recognize lane markings and other objects on the road, they will be nearly ready for driverless operation.

Tesla's new self-driving chip

A Tesla self-driving prototype using Nvidia Drive PX 2 AI technology.

Nvidia

Over the past decade, researchers have found that neural network performance keeps improving through a combination of deeper networks, more data, and more computing power. The first deep learning experiments ran on the parallel processing power of consumer GPUs. More recently, companies such as Google and Nvidia have begun designing custom chips built specifically for deep learning workloads.

Since 2016, Autopilot has been powered by Nvidia's Drive PX hardware. But last year, we learned that Tesla was ditching Nvidia in favor of a chip of its own design. Monday's event served as a coming-out party for that chip, officially known as the Full Self-Driving Computer.

Musk invited Pete Bannon, a chip designer Tesla hired away from Apple in 2016, to explain his work. Bannon said the new system was designed as a drop-in replacement for the previous Nvidia-based system.

"These are two independent computers that start up and run their own operating systems," Bannon said. Each computer will have an independent power source. If one of the computers goes down, the car will continue to drive.

According to Bannon, each self-driving chip contains 6 billion transistors, and the system is designed to perform a handful of operations used by neural networks in a massively parallel way. Each chip has two compute engines, each capable of performing 9,216 multiply-add operations (the heart of neural network computation) per clock cycle. Each Full Self-Driving system will have two of these chips, for a total computing capacity of 144 trillion operations per second (TOPS).
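Those throughput numbers multiply out as a quick back-of-the-envelope check. The Python snippet below assumes the roughly 2 GHz clock Tesla has cited for the chip's neural network accelerators; treat that figure, and the convention of counting one multiply-add as two operations, as assumptions rather than details from this article.

    # Back-of-the-envelope check of Tesla's 144 TOPS claim.
    # Assumptions: ~2 GHz accelerator clock, and the usual convention
    # that one multiply-add (MAC) counts as two operations.

    MACS_PER_CYCLE = 9_216      # multiply-adds per compute engine per cycle
    OPS_PER_MAC = 2             # one multiply + one add
    CLOCK_HZ = 2.0e9            # assumed ~2 GHz clock
    ENGINES_PER_CHIP = 2        # two compute engines per chip
    CHIPS_PER_SYSTEM = 2        # two chips per Full Self-Driving computer

    ops_per_engine = MACS_PER_CYCLE * OPS_PER_MAC * CLOCK_HZ
    ops_per_system = ops_per_engine * ENGINES_PER_CHIP * CHIPS_PER_SYSTEM

    print(f"Per engine: {ops_per_engine / 1e12:.1f} TOPS")  # ~36.9 TOPS
    print(f"Per system: {ops_per_system / 1e12:.1f} TOPS")  # ~147.5 TOPS

The raw product lands slightly above the quoted figure, which is consistent with Tesla rounding each engine's throughput down to 36 TOPS.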

Tesla says this represents a 21-fold improvement over the Nvidia hardware the company used previously. Of course, Nvidia hasn't stood still since 2016, but Tesla says its system is still more powerful than Nvidia's current Drive Xavier chip: 144 TOPS versus 21 TOPS.

But Nvidia argues that this isn't an apples-to-apples comparison. The company says its Xavier chip delivers 30 TOPS, not 21. More importantly, Nvidia says it typically bundles Xavier with a powerful GPU chip, yielding 160 TOPS of computing power. And like Tesla, Nvidia pairs these systems for redundancy, producing an overall system with 320 TOPS of computing power.
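To make the dueling claims concrete, this snippet tabulates each company's own numbers (as reported above, not independently verified) and the ratio of Tesla's 144 TOPS to each:

    # Dueling TOPS claims, as stated by each company.
    claims = {
        "Tesla FSD computer (2 chips)":   144,
        "Drive Xavier, per Tesla":         21,
        "Drive Xavier, per Nvidia":        30,
        "Xavier + GPU, per Nvidia":       160,
        "Paired Xavier + GPU, per Nvidia": 320,
    }

    tesla = claims["Tesla FSD computer (2 chips)"]
    for name, tops in claims.items():
        print(f"{name:33s} {tops:4d} TOPS  (Tesla = {tesla / tops:.2f}x)")

By Tesla's framing, its computer beats a single Xavier by nearly 7x; by Nvidia's framing of a full redundant system, Tesla's 144 TOPS is less than half of Nvidia's 320.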

Of course, what really matters is not how many theoretical operations a system can perform but how it fares on real-world workloads. Tesla says its chips are designed specifically for high performance and low power consumption on self-driving workloads, which could give them an edge over Nvidia's more general-purpose chips. Either way, both companies are working on next-generation designs, so any advantage either one gains is likely to be fleeting.
