Tesla just bought an AI startup to improve Autopilot. Here's what it does.

DeepScale CEO Forrest Iandola, shown here in a photo from 2013, is Tesla's latest machine learning guru.

Tesla has acquired machine learning startup DeepScale, CNBC, TechCrunch, and other news sites have reported. The company's CEO, Forrest Iandola, announced on Monday that he had joined the Tesla Autopilot team.

Iandola explained the company's mission to Ars in a telephone interview shortly after the company raised $15 million from venture capitalists in April 2018. DeepScale built image recognition software based on deep neural networks.

A key step for any self-driving software system is perception: identifying the cars, pedestrians, bicycles, and other objects around the car. Identifying objects accurately is crucial because it allows the software to make informed predictions about where those objects might move next. Most companies working on the problem tackle it with a technique called convolutional neural networks (CNNs). You can check out our deep dive on CNNs for a full explanation of how they work.
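The core operation inside a CNN can be illustrated with a minimal 2D convolution in plain NumPy. This is a generic sketch of the technique, not DeepScale's or Tesla's actual code; a real perception network stacks many such learned filters with nonlinearities between them:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and take a dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny image with a vertical edge down the middle...
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# ...and a vertical-edge-detector kernel.
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)

response = conv2d(image, edge_kernel)
# The response is strongest exactly where pixel values change left to right.
```

In a trained network, kernels like `edge_kernel` are not hand-written; they are learned from data, with early layers picking up edges and textures and later layers combining them into object-level features.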

DeepScale focuses on improving the speed and efficiency of deep neural networks, drawing on Iandola's earlier work as a computer science graduate student. The company's techniques should be especially useful for Tesla, which relies on machine learning to pursue full self-driving capability without the lidar sensors or HD maps used by most of its competitors.

Making neural networks a lot smaller

A famous 2012 paper known as AlexNet, after lead author Alex Krizhevsky, first demonstrated the power of neural networks for image recognition. The AlexNet authors figured out how to harness the parallel computing power of GPUs to train much larger deep neural networks than had been used before. This let them deliver far better performance on a standard image recognition benchmark than any previous algorithm.

However, one notable disadvantage of the AlexNet model was its sheer size: 60 million trainable parameters. Prior to founding DeepScale, Iandola was a doctoral candidate at the University of California, Berkeley, where he developed techniques to shrink neural networks like AlexNet.
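To see how parameter counts balloon, here is some back-of-the-envelope arithmetic. AlexNet's first fully connected layer alone maps a flattened 6x6x256 feature map (9,216 values) to 4,096 units, and at 32 bits per parameter, 60 million parameters works out to roughly the 240 MB figure cited below:

```python
def dense_params(n_in, n_out):
    """Weights plus one bias per output unit for a fully connected layer."""
    return n_in * n_out + n_out

# AlexNet's first fully connected layer: 6*6*256 = 9,216 inputs -> 4,096 units.
fc6 = dense_params(6 * 6 * 256, 4096)
print(fc6)  # 37752832 -- a large share of AlexNet's 60M parameters in one layer

# At 32 bits (4 bytes) per parameter, 60 million parameters is ~240 MB.
model_mb = 60_000_000 * 4 / 1e6
print(model_mb)  # 240.0
```

This is why the fully connected layers were a prime target for the pruning and compression work described next.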

Using a number of optimizations, Iandola and his co-authors showed that they could achieve AlexNet-level performance while cutting the number of parameters by a factor of 50. That reduced the on-disk size of a trained AlexNet network from 240 MB to less than 5 MB. Using additional compression techniques developed by other researchers, including switching from 32-bit to 8-bit parameters, they shrank the model by another factor of 10, producing deep neural networks with AlexNet-level performance that were less than half a megabyte.
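The 32-bit to 8-bit step can be sketched with simple linear quantization. This is a generic illustration of the idea, not the exact scheme from the research described above: each float32 weight is mapped to an int8 code plus one shared scale factor, quartering the storage at the cost of a small rounding error:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights onto int8 codes with a single linear scale factor."""
    scale = np.abs(weights).max() / 127.0
    codes = np.round(weights / scale).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float32 weights from int8 codes."""
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=100_000).astype(np.float32)

codes, scale = quantize_int8(weights)
print(weights.nbytes // codes.nbytes)  # 4 -- int8 storage is a quarter the size

# Worst-case rounding error is about half a quantization step.
max_err = np.abs(dequantize(codes, scale) - weights).max()
```

Techniques like Deep Compression combine quantization of this kind with pruning and entropy coding, which is how the additional factor-of-10 reduction mentioned above was achieved.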

In his 2018 interview with Ars, Iandola argued that this kind of optimization matters for companies trying to bring image recognition technology to market. Companies like Tesla regularly push new versions of their neural networks out to customer vehicles, which often have limited bandwidth. Pushing out half a megabyte of data is much easier than pushing out 240 megabytes.

Smaller models will become especially significant as companies build custom silicon for machine learning applications. Iandola pointed out this advantage in a 2016 paper: "When deploying CNNs on application-specific integrated circuits (ASICs), a sufficiently small model can be stored directly on the chip, and smaller models can allow the ASIC to fit on a smaller die." This has obvious cost advantages, and it can also improve performance, since the chip does not need to fetch model parameters from external memory.

DeepScale sought to commercialize Iandola's research

When Iandola wrapped up his research at Berkeley around 2015, he was looking for a way to commercialize the technology. He quickly realized that the self-driving car boom gave him an opportunity to apply his research to a practical problem.

"My research was particularly focused on building some of the most efficient neural networks: energy efficient and very fast to run," Iandola told Ars in 2018. "The autonomous driving market was just taking off, and we saw a great opportunity there."

"What our solutions do is identify things on the road," Iandola said. "We can tell you what kinds of objects we see and how far away they are. In object recognition, we have had an order of magnitude improvement in the error rate."

Iandola acknowledged that market leader Waymo had impressive technology, but "there is a lot of custom hardware in there that is expensive." DeepScale's role, he said, "is less about being first than about getting things down to a cost and reliability point where they can be mass produced."

"We do not build any hardware," he added. Instead, the company uses "commodity processors and sensors." He added: "Our superpower is cutting the computation by a factor of 100."

DeepScale seems like a good fit for Tesla

Three years ago, Elon Musk promised that customers would be able to get full self-driving capability with the hardware the company was shipping at the time. Earlier this year, Tesla tacitly admitted that this wasn't true when it rolled out a new custom chip for machine learning applications.

Tesla is still under a lot of pressure to achieve outstanding machine learning performance with a limited computational budget. This is a particularly difficult problem because Tesla is trying to do it without lidar sensors or HD maps – two resources that most other self-driving car companies consider crucial to getting the technology to work properly.

On the other hand, Tesla's large fleet provides the company with a huge amount of data for training neural networks. With hundreds of thousands of vehicles on the road and the ability to query the fleet for "interesting" incidents, Tesla's engineers can draw on billions of miles of real-world data to help train the neural networks that run Autopilot.

Tesla also has a constant need to replenish its Autopilot talent pool, because the company has seen a steady exodus of top talent over the past three years. For more than three years, Elon Musk has claimed that full autonomy is less than two years away. In 2015, he declared that full autonomy was "a much easier problem than people think it is."

That attitude created friction with engineers on the Autopilot team who saw Musk's aggressive schedules as unrealistic. Tesla Autopilot chief Sterling Anderson quit in late 2016, shortly after Musk promised that the new hardware would be capable of full autonomy. Two more Autopilot bosses have left the company since then, along with many engineers.
