Dr. AutoPilot and Mr. Autopilot: Why We Should Want Waymo's Slow Approach to Win




If you've listened to Waymo CEO John Krafcik's comments at the Frankfurt Auto Show, you may have noticed some subtle digs at Tesla and other big names in the automated-driving space. Between highlighting the depth of the Alphabet company's unparalleled experience and explaining why its goal remains mastering the challenges of Level 4 autonomy, it's easy to feel you've heard comments like Krafcik's before. But with the benefit of historical context, including conclusions I drew while researching my book LUDICROUS: The Unvarnished Story of Tesla Motors, we can take important lessons from Krafcik's speech.

At the time, almost no one knew how close Tesla had come to becoming an Alphabet company. It was only after Ashlee Vance published his authoritative biography of Elon Musk in 2015 that the public learned Tesla's CEO had negotiated a deal with his friends, Google co-founders Larry Page and Sergey Brin, that would have seen Tesla bought by the search giant at a healthy premium, with Musk staying in charge. That generous deal was ultimately rejected by Tesla after Musk staged a sales miracle, borrowed money to repay the company's government loans, and launched a promotional campaign that sent Tesla's shares into Ludicrous mode.

Part of Musk's promotional blitz, which began in the second quarter of 2013, was talking about automated driving for the first time. Musk began by saying that Tesla might use Google's technology to make its cars self-driving, but by the second half of the year he was distancing Tesla's camera-based system from the search giant's effort. Musk said Tesla's system would offer automated driving on "90% of miles driven" within three years, while claiming that full autonomy was "a bridge too far."

In hindsight, it's clear that Musk was at the very least inspired (or frightened) by his look behind the curtain at Google's surprisingly advanced autonomous technology. But based on Krafcik's latest revelations, Musk seems to have been more than inspired: Google had already thoroughly tested a highway-only, driver-in-the-loop system called... "AutoPilot." According to the book by Larry Burns, a Google/Waymo consultant, "AutoPilot" [Burns doesn't use this name] was developed in 2011 and tested in 2012, and by the end of that year Google had decided not to pursue the product.

In short, it seems that Musk got a look at (or perhaps even a demonstration of) AutoPilot and decided that if Google wouldn't bring it to market, he would, right down to the product's internal name. Though not everyone would make the same decision about a friend's company's product, especially after that company had extended an attractive rescue offer for one's own, it's not hard to understand why Musk did what he did. In trend-obsessed Silicon Valley, automated driving threatened to make Tesla's electric-vehicle technology yesterday's news; here was a fully scoped and tested product that could put Tesla back in the game rather than on the road to abandonware.

The problem, of course, was that Google had abandoned AutoPilot for good reason. Video of test drivers using AutoPilot, shown publicly for the first time in Frankfurt, shows the drivers becoming increasingly inattentive: putting on makeup, fiddling with their phones, even falling asleep. Google's executives rightly understood that partial automation created a thorny human-machine interaction problem that was arguably harder to manage than Level 4 autonomous-driving technology itself. Without an enormous amount of work on driver monitoring, operational design domain limitations, and other HMI tasks, AutoPilot was an irresponsible product to unleash on the public... and it didn't even offer the main benefits of autonomy.

It's hard to imagine that Musk heard about AutoPilot in the first quarter of 2013 without also learning why Google abandoned the product. If he did become aware of those risks, however, he has played dumb ever since. He did pick up on Google's new talking points, telling the media about the "incredible" challenges presented by "the last percent" of miles driven and claiming that Google's lidar technology was "too expensive." Ever since, Musk has used lidar as a foil, drawing public attention to the challenge of Level 4 autonomy and away from the more immediate problems with Tesla's Autopilot approach.

Since 2013, Waymo has quietly and iteratively progressed on its Level 4 technology without ever bringing it to the consumer mass market. Tesla, for its part, has racked up billions of dollars in market valuation and established itself as a mainstream consumer brand, powered in part by an Autopilot system implicated in numerous crashes and deaths. The very scenario Google's executives feared, a fatal crash involving an unprepared AutoPilot user, has now played out several times... and yet, rather than destroying trust in the technology broadly, it hasn't even dented Tesla's perceived leadership in autonomous driving.

On the one hand, this seems like a validation of Musk's notoriously ruthless, risk-tolerant entrepreneurial approach (at PayPal, he once gave credit cards to anyone who wanted one). On the other hand, Musk's decision to ignore or dismiss Google's concerns, despite its unprecedented research and domain knowledge, casts a troubling light on the subsequent Autopilot deaths. After all, Tesla's own engineers shared those concerns and asked Musk to adopt a driver-monitoring system, which Musk rejected over its cost or doubts that the technology would work.

At some point, it becomes impossible to deny that Musk could have foreseen the deaths of Gao Yaning, Josh Brown, Walter Huang, Jeremy Banner, and possibly others (not to mention countless non-fatal Autopilot collisions). We are forced to conclude that he risked these crashes because the benefits outweighed them; without a doubt, the hype, headlines, and share value they generated for Tesla and Musk were worth billions. The public is outraged by the possibility that automakers make recall decisions by weighing a per-part cost of a few cents against the inevitability of some number of human deaths, a trope popularized in Fight Club and in scandals like the Ford Pinto, GM's ignition switches, and Takata's defective airbags, yet Musk's cold calculus has not yet entered public consciousness in the same way.

This is another example, alongside that of Anthony Levandowski, of a certain amorality that is rewarded and surprisingly well tolerated in Silicon Valley. Waymo is continually mocked or criticized for its failure to widely deploy its Level 4 robotaxis as a viable business, yet criticizing Tesla's decision to deploy Autopilot without the safeguards that Google's testing had proven necessary, a decision that has contributed to multiple deaths, is itself ridiculed as the domain of anti-Tesla "haters" and kooks. Surely we can now see, as the NTSB racks up case after case of "foreseeable misuse" of Autopilot, that rewarding Musk's willingness to sacrifice human lives for his own advancement and enrichment creates a set of incentives that leads directly to dystopia.

Of course, there are reasons why Musk's amoral gambit hasn't been seen for what it is. Despite the years of academic research underpinning Google's decision, the human-in-the-loop nature of AutoPilot (and Autopilot) makes it possible to blame the humans, even though all that research shows such systems will inevitably lull drivers into inattention (especially when there are one or two easily discredited studies from major institutions appearing to show the opposite). The US safety regulator, NHTSA, is not equipped to establish anything like "foreseeable misuse" (which is very different from the kinds of defects it is used to), leaving the NTSB to slowly muster evidence before anyone acts. And Tesla's opaque handling of vehicle data makes it difficult for Tesla owners, their loved ones, the media, and regulators to establish that the problems identified by Google and countless academic researchers are actually killing people.

Because many participants in the public debate over Tesla's Autopilot safety record have a financial interest in the company's stock, or simply enjoy using the system (or just like other aspects of the Tesla brand), there will always be someone to defend Tesla. But the most important question goes beyond Tesla itself: if one major manufacturer determines that a particular system is not safe and another deploys it anyway, would anyone call the second company a brave innovator, even as people die as a result of its decision? What if they were aircraft makers?

Whatever we think of Elon Musk or Waymo or any particular person, company, or sector, what they do and how it is received creates incentives we all have to live with. Letting Musk's Autopilot decision slide sets a deeply troubling precedent that will in turn justify someone else's decision to put your life at risk for the sake of their own greater glory. Ignoring the evidence compiled by academic researchers, Waymo, and the NTSB, in turn, contributes to the erosion of evidence-based, scientifically grounded discourse.

Even if you believe that Tesla's deceased drivers made an informed choice (and Tesla certainly has not disclosed the research showing that "foreseeable misuse" of "Level 2+" systems is all but inevitable), their cars endanger plenty of other people on the road who made no such choice. Elon Musk, for his part, deliberately chose to deploy a system he knew, or should have known, to be life-threatening, and did not disable it or pull it from the market even after people began to die.

Against all that, it's time to celebrate Waymo's slow (sometimes seemingly agonizing!) march toward truly driverless technology. It may not meet the toxic expectations of Silicon Valley's hype culture, but it respects the most basic standards of a humane society. If that means we have to wait a little longer to feel like we're living in an epic future, so be it. At least when that future arrives, it will be more utopian than dystopian.
