Caution: Years of Work Ahead

Within two years, Tesla’s self-driving technology will be reliable enough to let “drivers” nap on their way to work, or so Elon Musk claimed in a recent TED talk. Musk may have a reputation for achieving the seemingly impossible, but even so, this is one goal that seems unlikely to be met. And not for the reasons one might think.

At PingWest’s recent SYNC 2017 conference in Silicon Valley, Amy Gu (a partner at Hemi Ventures), Jason Hartung (CEO at self-driving development company PolySync), David Liu (CEO at PlusAI), and Peter Pyun (solutions architect at NVIDIA) came together for an incisive discussion. What emerged was a view of a technology that is not quite like any other on the horizon, and one whose greatest barriers to success are something more than an engineering challenge.

An Industry That Isn’t

Self-driving technology faces the challenge of integrating with the “hardware” of the existing car industry, rather than trying to leap over or bypass it. An ever-growing number of software professionals have thus been pouring into the field, so much so that the center of the auto industry seems to be shifting from hardware to software. Even just ten years ago, the idea that the likes of Google, Apple, and other tech companies might have a role to play in the car industry would have seemed bizarre; now, they may be indispensable to its future. According to Gu, whose company Hemi Ventures has already placed a number of bets on self-driving companies, hardware and software companies are proving eager to establish partnerships, self-driving software startups are being bought up on all sides, and OEMs are under pressure to adopt the newest technology.

And what are those self-driving systems to look like? Pyun believes the key is end-to-end deep learning, working through “data collection, training, inference, feedback, and calibration” with iterative algorithms. This is something that, Pyun says, NVIDIA has already begun doing with more than 500 partners, including Toyota, Audi, and Tesla. Still, the engineering challenges are enormous. As Hartung explained, in the traditional car industry, a single company can produce a quality car. Thus far, however, the self-driving tech field has required an “army” to address software problems, and for traditional manufacturers, outsourcing has been the natural recourse.
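
To make the shape of that loop concrete, here is a minimal, purely illustrative sketch in Python. The stage names follow Pyun’s description, but every function, model, and threshold below is a hypothetical stand-in, not NVIDIA’s actual pipeline.

    # A toy version of the iterative loop Pyun describes: collect data,
    # train, run inference, gather feedback, recalibrate, repeat.
    # Everything here is a hypothetical stand-in for illustration only.
    import random

    def collect_data(n=1000):
        """Stand-in for fleet sensor logs: (input, correct output) pairs."""
        return [(x, 2 * x) for x in (random.random() for _ in range(n))]

    def train(weight, data, lr=0.001):
        """Stand-in for training: gradient steps on a squared-error loss."""
        for x, y in data:
            weight -= lr * 2 * (weight * x - y) * x
        return weight

    def mean_error(weight, data):
        """Stand-in for feedback: average disagreement with ground truth."""
        return sum(abs(weight * x - y) for x, y in data) / len(data)

    weight = 0.0  # the "model" is a single parameter in this toy
    for iteration in range(10):
        data = collect_data()             # data collection
        weight = train(weight, data)      # training
        error = mean_error(weight, data)  # inference + feedback
        print(f"iteration {iteration}: mean error {error:.4f}")
        if error < 0.01:                  # calibration target (assumed)
            break

In a real system the model would be a deep network and the feedback would come from fleet sensor logs rather than synthetic pairs, but the structure of the loop is the same.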

But as Hartung pointed out, there is something strange in talking about the self-driving “industry,” because it doesn’t exist. Thus far, there isn’t really much of anything for consumers to buy. Instead, “it’s a really big, investment-backed science project.” In a way, each self-driving tech startup is its own experiment, testing different algorithms, different strategies, different configurations of sensors and hardware. As Liu noted, self-driving tech projects can expect to spend anywhere from $50 million to $100 million developing and testing their systems, in a process likely to require five years of work or more. And while Google, predictably, seems to hold the advantage for the time being, no one yet knows which of all these experiments will finally succeed.

A Psychological Finish Line

Ultimately, however, the dividing line that separates self-driving tech as a large-scale “science project” from practical application isn’t one of engineering or software sophistication. It’s safety.

In a strict sense, self-driving cars are already here. Google, Baidu, and any of dozens of startups have demonstrated driving systems that allow cars to navigate and steer themselves on most highways and streets, at least to some degree. The question is whether they can do so reliably enough to avoid hurting anyone. The target that most startups now aim for is called “level 4” autonomous driving in SAE International’s standards: a high, if still not complete, level of driving autonomy that generally does not require any input from the human driver. Yet even that, everyone agreed, was still at least some years away. Hartung observed that self-driving cars are “a new compute paradigm,” one in which different measures of success must apply. “They’re as connected as cellphones, processing supercomputer amounts of sensor data, but they need to be as safety critical as the space shuttle.”

When riding in what is essentially a cage of steel and glass hurtling down a highway at speed, there are questions of safety that never even arise with things such as smartphones (explosive batteries notwithstanding). As Pyun noted: “You don’t want to ride in an autonomous car with your family that’s 99.99% safe.” Even the minuscule remaining error in that case is too large when the price of failure can be death.
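
The arithmetic behind Pyun’s point is worth spelling out: per-trip risk compounds over repeated trips. A short sketch (the failure rate and trip counts are assumptions for illustration, not measured figures):

    # How a "99.99% safe" trip compounds over a driving lifetime.
    # The per-trip failure rate and trip counts are illustrative only.
    p_fail = 0.0001  # one failure per 10,000 trips

    for trips in (1, 100, 1_000, 10_000):
        p_any = 1 - (1 - p_fail) ** trips
        print(f"{trips:>6} trips: {p_any:.2%} chance of at least one failure")

At two commutes a day, 10,000 trips is roughly fourteen years of driving, and the chance of at least one failure climbs past 60%.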

Yet getting the risk down to zero is not realistic, either. No matter how good self-driving systems may become, it is too much to expect that they could prevent or avoid every danger on the road. Gu thus believes that safety is relative, not absolute. She pointed to Siri and Alexa for comparison. Humans are surprisingly poor at speech recognition, with word error rates of around 5.9%; some of the more advanced speech recognition AIs can now do better. Gu suggests applying a similar standard to self-driving cars, arguing that if they can achieve an accident rate lower than that of human drivers, then they may be considered safe.
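
Gu’s relative standard reduces to a simple comparison of rates. A sketch with openly hypothetical numbers (real figures would have to come from crash statistics and fleet testing):

    # Gu's relative standard: "safe" means an accident rate below the
    # human baseline. Both rates here are hypothetical placeholders.
    human_rate = 2.0    # accidents per million miles (assumed)
    system_rate = 1.5   # accidents per million miles (assumed)

    if system_rate < human_rate:
        gain = 1 - system_rate / human_rate
        print(f"Relative standard met: {gain:.0%} fewer accidents than humans")
    else:
        print("Relative standard not met")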

But to be pragmatic, Hartung suggests that the question of safety has to connect with one of trust. If a self-driving car is in an accident, consumers are not likely to view it as “just” an accident (an isolated and circumstantial event) but as an indication of a flaw in the car model, or even in self-driving technology itself. Safety therefore takes on an importance beyond the obvious, and becomes a matter of keeping public trust in the technology.

And that, perhaps, speaks to why self-driving tech startups face years of testing and refining their systems. The first level 4 driving system may already be out there, and Musk could be right: perhaps Tesla’s self-driving system will reach level 4, or even level 5, autonomy within the next few years. But only time, and rigorous testing, will prove it. Because a car that crashes 0.01% of the time will likely seem safe enough on the first drive, or even the thousandth. But it will still be too dangerous to trust on the road.