If you listened to Waymo CEO John Krafcik's comments at the Frankfurt Auto Show, you may have caught some subtle shade thrown at Tesla and other big names in automated driving. Given how he highlighted the Alphabet company's depth of experience and explained why it remains focused on the challenges of Level 4 autonomy, it would be easy to feel that you've heard comments like Krafcik's before. But with the benefit of historical context, some of it drawn from the research that went into my book LUDICROUS: The Unvarnished Story of Tesla Motors, we can glean some important lessons from Krafcik's speech.
At the time, no one knew how close Tesla had come to becoming an Alphabet company itself. It wasn't until Ashlee Vance's largely authorized biography of Elon Musk came out in 2015 that the public learned how, in early 2013, Musk had negotiated a deal that would have seen Google acquire Tesla.
Part of Musk's promotional blitz, starting in the second quarter of 2013, involved talking about automated driving for the first time. Musk started by saying that Tesla might be able to use Google's technology to make its cars driverless, but during the second half of the year he began talking up Tesla's own system as independent of the search giant's efforts, inspiring headlines that had his company "[moving] ahead of Google." To that end, Musk said Tesla's system would offer automated driving for "90% of miles traveled" within three years, while calling full autonomy "a bridge too far."
With the benefit of hindsight, it is clear that Musk was, at the very least, either inspired or spooked in this direction by his look behind the curtain at Google's surprisingly advanced autonomous technology. But based on the latest information from Krafcik, Musk seems to have been more than just inspired: before that point, Google had extensively tested a highway-only, driver-in-the-loop system called "AutoPilot." According to Google/Waymo consultant Larry Burns's book Autonomy (Burns doesn't use the AutoPilot name), the system was developed through 2011 and tested in 2012, and by the end of that year Google had decided not to pursue the product.
In short, Musk must have seen (or possibly even demonstrated) AutoPilot and decided that if Google wouldn't take it to market, he would, right down to Google's internal name for the product. While not everyone would make that call with a friendly company's shelved product, especially after that company had made an attractive bailout offer for his own, it is not difficult to understand why Musk did what he did. In trend-obsessed Silicon Valley, automated driving was about to turn Tesla's electric vehicle technology into old news, and here was a fully scoped and demonstrated product that could put Tesla back in the game and that would otherwise become "abandonware." The problem, of course, was that Google had abandoned AutoPilot for good reasons. The video of test "drivers" using AutoPilot, which Krafcik showed publicly for the first time in Frankfurt, shows drivers becoming deeply inattentive: applying makeup, fiddling with their phones, even falling asleep. The leaders of Google's self-driving operation rightly realized that partial automation created a thorny human-machine interaction problem that was, in some ways, even harder to fully solve than Level 4 self-driving technology itself. Without an enormous amount of work on driver monitoring, operational design domain limits, and other HMI issues, AutoPilot was an indefensible product to release to the public … and one that didn't even deliver the main benefits of autonomy.
It's hard to imagine that Musk learned about AutoPilot in the first quarter of 2013 without also learning Google's reasons for abandoning the product, but if he did learn about these risks, he has remained mute about them ever since. Instead, he played down Google's new direction, telling the media about the "incredible" challenges presented by "the last few percent" of miles traveled and claiming that Google's lidar technology was "too expensive." Ever since, Musk has regularly used Level 4 autonomy as a whipping post, focusing public attention on its challenges and away from the fundamental issues with Tesla's Autopilot approach.
In the years since 2013, Waymo has quietly and confidently made steady, iterative progress on Level 4 technology without breaking into the consumer mass market. Tesla, on the other hand, has racked up billions in market valuation and established itself as a household consumer brand on the strength of an Autopilot system that has now been implicated in numerous crashes and deaths. The very scenario that Google's leadership feared, a fatal crash involving an inattentive AutoPilot user, has now happened several times … and yet, rather than destroying trust in the broader technology, it has somehow not even dented Tesla's perceived position as the leader in automated driving.
On the one hand, this seems a validation of Musk's notoriously reckless and risk-tolerant approach to entrepreneurship (at PayPal he once gave away credit cards to basically anyone who wanted one). On the other hand, Musk's decision to either ignore or reject Google's concerns, despite the wealth of research and expert knowledge behind them, casts the subsequent Autopilot deaths, which occurred under precisely the circumstances Google worried about, in a disturbing light. After all, Tesla's own engineers shared those concerns and pushed Musk to adopt driver monitoring, which Musk dismissed, citing either the cost or the inability to make the technology work.
At some point, it becomes impossible to deny that Musk could have foreseen the deaths of Gao Yaning, Josh Brown, Walter Huang, Jeremy Banner and possibly others (not to mention the countless non-fatal Autopilot crashes). One is forced to conclude that he risked these crashes because he judged that the benefits outweighed them, and no doubt the hype, headlines and share value that subsequently accrued to Tesla and Musk were worth billions. The public recoils at the idea that automakers make recall decisions by weighing a cost of a few cents per car against the inevitability of a certain number of human deaths, a trope that was popularized in Fight Club and proven in scandals such as the Ford Pinto, the GM ignition switch and the Takata airbag inflator, and yet Musk's cold-blooded calculation has not yet become a public moral narrative.
This is another example, along with Anthony Levandowski, of a certain amorality and self-enriching attitude that is astonishingly well tolerated in Silicon Valley. Waymo is constantly ridiculed or stung for its inability, even now, to deploy its Level 4 robotaxis in a viable business, while criticism of Tesla's decision to deploy Autopilot without the safeguards Google's testing proved were necessary, a decision that has resulted in multiple deaths, is dismissed as the domain of anti-Tesla "haters" and cranks. Surely we can now see, as the NTSB stacks up case after case of Autopilot's "foreseeable misuse," that rewarding Musk's willingness to sacrifice human life for his own aggrandizement and enrichment creates a set of incentives that leads directly to dystopia.

Of course, there are reasons why Musk's amoral gambit has not been seen for what it is. Despite the years of academic research that support Google's findings, the human-in-the-loop nature of Autopilot (and AutoPilot) makes it possible to blame the very humans whom all that research shows these systems lull into inattention (especially when one or two since-discredited studies from major institutions appear to show the opposite). Meanwhile, the US safety regulator, NHTSA, is not equipped to establish anything like "foreseeable misuse" (which is very different from the kind of defects it is used to hunting), leaving the NTSB to build up the evidence case by case before anyone acts. And Tesla's opaque handling of crash data makes it all the harder for Tesla owners, their loved ones, the media and regulators to establish that the problems identified by Google and countless academic researchers are really killing people.
Because so many participants in the public "debate" about the safety issues with Tesla's Autopilot have a financial interest in the company's stock, or simply enjoy using the system (or just like other aspects of the Tesla brand), there is always someone defending Tesla. But the more important discussion here goes beyond Tesla itself: if one major automaker decided that a particular system was not safe and another deployed it anyway, would anyone call the latter company a brave innovator even as people died because of its decision? What if they were aircraft manufacturers?
Whatever one might think about Elon Musk or Waymo specifically, what individuals, businesses and sectors do, and how it is received, creates incentives that the rest of us must live with. Letting Musk's Autopilot decision stand unchallenged sets a deeply worrying precedent that will in turn justify someone else's decision to put your life at risk in service of their own glory. And those who ignore the facts established by academic researchers, Waymo and the NTSB contribute, for their part, to the erosion of fact-based and science-based discourse.
Even if you believe that the Tesla drivers who died made a conscious choice (and Tesla certainly never disclosed the research showing that "foreseeable misuse" of "Level 2+" systems is all but inevitable), they share the road with a great many people who made no such choice. Elon Musk, for his part, made a conscious choice to deploy a system he knew had life-or-death problems, and did not deactivate it or withdraw it from the market even after people began to die.
This is why Waymo's slow (sometimes seemingly unbearably slow!) march toward truly driverless technology deserves to be celebrated. Waymo may not live up to the toxic expectations of Silicon Valley's hype culture, but it does live up to the most basic shared norms of human society. If that means we have to wait a little longer to feel like we're living in an epic future, so be it. At least when that future arrives, it will have a shot at being more utopian than dystopian.