Elon Musk does not mince his words when he rejects technologies that other companies rely on to control their driverless cars.
"The two main crutches that should not be used – and, in retrospect, will obviously be fake and stupid – are the Lidar and HD cards. Mark my words, "said Tesla's CEO recently.
Going against conventional wisdom and abandoning things that most rivals regard as essential sounds risky. But Mr Musk has never been one to follow the flock, nor to under-promise.
The electric car maker claims that its latest cars already carry all the sensors and computing power they need to drive themselves, and that an over-the-air software update before the end of the year will complete the picture (though it may be some time before insurers and regulators are willing to allow the cars to be used in fully autonomous mode).
The "crutches" against which Mr Musk complains involve two of the most common ways for autonomous vehicles to understand the world around them.
Lidar sensors, which emit laser pulses and measure the time a reflection takes to return, are currently one of the best ways to gauge the shape and distance of other objects. But they are expensive: today's mechanical models cost several thousand dollars.
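The time-of-flight principle behind lidar is simple enough to sketch. This is an illustrative calculation only (the constant and example delay are not from the article): the pulse travels out and back, so the one-way distance is half the path light covers during the measured delay.

```python
# Illustrative lidar time-of-flight calculation: convert the measured
# delay between emitting a laser pulse and receiving its reflection
# into a distance to the reflecting object.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance in metres to the object that reflected the pulse.

    The pulse travels out and back, so the one-way distance is half
    the total path covered during the round trip.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A reflection arriving 200 nanoseconds after the pulse was emitted
# corresponds to an object roughly 30 metres away.
print(round(distance_from_echo(200e-9), 1))  # ~30.0
```

The nanosecond-scale timing this demands is part of why lidar hardware has been costly, and why shrinking it onto a chip is attractive.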
Squeezing their workings onto a silicon chip could help. The world has become accustomed to the cost of silicon components shrinking rapidly over time. But those cost reductions depend on high-volume production.
Smartphone component prices fell sharply because of the scale of the market: 1.5bn handsets were sold in 2018. By contrast, about 82m cars were sold worldwide last year, and it will probably be several years before a significant fraction of new vehicles are equipped to operate without a driver.
High-definition maps, meanwhile, help driverless cars understand their environment by reducing the amount of raw data they need to collect and process. But this "geofences" the cars, only allowing them to travel in areas that have been mapped in precise detail.
The problem, according to Mr Musk, is that when the real world changes in a way the map does not reflect, the system may fail; and if the map cannot be relied on with 100 per cent accuracy, it loses its value.
Instead of such techniques, Tesla's self-driving technology relies almost entirely on teaching cars to "see" with an array of cameras. As a backup, its cars also use a forward-facing radar, along with a dozen ultrasonic sensors around the vehicle to help detect nearby objects.
Improvements in computer image recognition are already among the most significant recent advances in artificial intelligence. Yet, as is often the case in AI, tasks that seem deceptively easy for humans can stump even the best computers.
Tesla has developed a combination of hardware and software to handle the task. Two weeks ago it unveiled a computer chip, designed in-house, to process the huge volumes of image data its cars need to interpret their surroundings. Even Nvidia, whose chips are widely used in other companies' driverless-car AI systems, credited Tesla with "raising the bar" for the sector.
To make sense of all the data it processes, Tesla relies on an artificial intelligence technique called deep learning. This uses artificial neural networks, systems originally modeled on the visual cortex of animals.
This is where things get complicated. Neural networks require vast amounts of data to train: only after being fed many different examples of the same thing, each carefully labeled by humans, can they learn enough to identify the same object in the real world.
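A toy example can show what "learning from labeled examples" means in practice. The sketch below is not Tesla's system: it trains a single artificial neuron (a perceptron, the simplest building block of a neural network) on a handful of hand-labeled examples, using made-up features, and only after repeated passes over that labeled data can it classify inputs it has never seen.

```python
# Toy supervised learning: a single perceptron learns a rule only
# from human-labeled examples. Features are hypothetical, e.g. the
# normalised (width, height) of an object seen by a camera.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred            # zero when already correct
            w1 += lr * err * x1           # nudge weights toward the label
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Human-labeled data: small objects are class 0, large objects class 1.
labeled = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
w = train_perceptron(labeled)
print(predict(w, 0.15, 0.15), predict(w, 0.85, 0.85))  # prints: 0 1
```

Real image-recognition networks work on the same principle, but with millions of weights and millions of labeled images rather than four, which is why the size of a company's labeled dataset matters so much.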
For this, Tesla expects to have a real-world dataset superior to any other company's. It can draw on imagery from roughly 400,000 cars already on the road (it hopes to reach 1m by the middle of next year). With software called "shadow mode" running in the background, it can track how human-driven cars behave and use that data to feed its learning systems.
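The article does not describe how shadow mode works internally, but the basic idea can be sketched hypothetically: the self-driving model runs passively alongside the human driver, never controlling the car, and the frames where its suggestion disagrees with what the human actually did are the most informative candidates to label and feed back into training. All names and data below are invented for illustration.

```python
# Hypothetical sketch of a "shadow mode" loop: the model observes but
# never acts; disagreements with the human driver are logged as
# candidate training examples.

def shadow_mode(frames, model):
    """frames: list of (sensor_data, human_action) pairs."""
    disagreements = []
    for sensor_data, human_action in frames:
        predicted = model(sensor_data)   # model runs passively in the background
        if predicted != human_action:
            # Mismatches are the most valuable frames to re-label and
            # use to retrain the network.
            disagreements.append((sensor_data, human_action, predicted))
    return disagreements

# Toy stand-in for the driving model: it always proposes "keep_lane".
toy_model = lambda data: "keep_lane"
log = shadow_mode(
    [("frame1", "keep_lane"), ("frame2", "brake"), ("frame3", "keep_lane")],
    toy_model,
)
print(len(log))  # prints: 1 (the frame where the human braked but the model would not)
```

At fleet scale, a scheme like this turns every mile driven by a human into potential training signal without the model ever having to control the car.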
But even that may not be enough. Neural networks are notoriously "brittle": they can fail unexpectedly when, say, the image of an object does not match any of the variations they were shown before. And they are always at risk of encountering a real-world situation they have not been trained for.
Despite this, Tesla is taking a purist and provocative approach: give the network enough real-world data about all the situations it might face, and the machine can match and then surpass human drivers.
Take driving in the snow, a skill notoriously hard for driverless cars to master. Humans are surprisingly good at guessing where lane markings lie on snow-covered roads, says Andrej Karpathy, Tesla's head of computer vision. Feed enough human-labeled images of snowy roads into an AI system, and the computer may learn to interpret a similar scene, he says.
Others in the AI world wonder whether it is as simple as it sounds. According to one expert who recently examined Tesla's AI closely, the technology is still far from delivering on Mr Musk's end-of-year promises, though the expert adds that Tesla could well reach its goal within the next two to three years.
Judging by the frequent delays to other Tesla products, Mr Musk often treats deadlines as movable commitments. But if Tesla achieves full autonomy before rivals relying on different technology, the delays will not matter.