Self-driving vehicles require relentless scrutiny. However, some fundamental misunderstandings cause industry observers to make broad definitive statements with very little to back up their conclusions. To respond each time would be one long game of Whack-A-Mole and there’s only so much time in the day.
But I’m motivated to push back after seeing the highly influential Washington Post devote a recent front page of their Outlook Section to an article titled “Companies Are Still Racing To Make Self-Driving Cars: But Why,” with the subtitle “They may not be safer than human drivers. And they’ll make gridlock worse.” The piece was written by David Zipper, who has extensive experience in Smart Cities and Mobility; his writings examine the interplay between urban policy and new mobility technologies.
Hey, That’s Cool!
Mr. Zipper stated that “… autonomous vehicles were born not from public need but from technological opportunity,” quoting Carnegie Mellon University professor Dr. Phil Koopman as saying that early work in automated driving in the 1980s happened because the grad students thought it was “cool.” Intensive tech development does not happen on university campuses just for fun; it relies on funding from entities who have a need. In the 1980s, it was the U.S. Army seeking to develop robot scout vehicles that could monitor dangerous areas without putting soldiers at risk. During the 1990s, Congress passed the ISTEA Highway Bill, which specifically directed USDOT to “develop and demonstrate an ‘Automated Highway System’ by 1997.” USDOT ramped up a major program working with the National Automated Highway Systems Consortium led by General Motors. The major focus of this program, which I led, was to improve highway safety and alleviate traffic congestion. The program culminated with Demo ’97, in which thousands of people experienced automated driving on I-15 in San Diego. Universities involved in this initiative, including CMU, were the fountainhead for teams in the DARPA Challenges and, later, Google and other automated driving startups.
Artificial Intelligence Cries Out for Intelligent Discourse
Mr. Zipper’s assertions on Artificial Intelligence were concerning. There’s a fundamental misunderstanding in the blogosphere about AI which he appears to hold. AI is not “one thing.” Each AI engine is unique to its developers and their use case. No one would point to dysfunctional software and conclude all software is bad, but the media repeatedly does this with AI. Yikes!
I am therefore highly skeptical of any statement about what “AI” does or doesn’t do. When Mr. Zipper states that “limitations of machine learning will inevitably lead to mistakes that human drivers wouldn’t make,” the key evidence provided comes from an AI system in development several years ago that “struggled to identify the color yellow,” thus failing to read the road scene correctly. He seems to further support this point by citing the tragic Uber fatality in Arizona in 2018, which involved a self-driving car undergoing testing in which a human safety driver was responsible for safe operation.
The performance of developmental systems such as this, from years ago, tells us nothing about the capability of today’s automated driving systems. Most are still pre-commercial technology, in various stages of development and testing. Exceptional progress has been and is being made at a rapid pace.
Driverless Is Happening
Some companies have crossed the threshold to offering commercial services. Alphabet’s Waymo and GM’s Cruise are now offering commercial ride hailing services with their driverless vehicles. Anyone can download the app and take a ride. Ask these companies if their AI system can “see the color yellow” or more to the point, whether it can safely and efficiently negotiate traffic.
The task of thousands of engineers and programmers working at companies developing self-driving cars is to get it right, and they are applying highly sophisticated and broadly accepted safety validation practices in taking this major step to offer services to the public. According to Automotive News, Waymo has driven “hundreds of thousands” of miles with paying customers and no human safety drivers aboard during operations in Chandler, Arizona.
Mr. Zipper’s comments were directed at the passenger car side. In addition, it is useful to highlight driverless operations of trucking players Gatik and TuSimple. Working with their freight customers, who have a lot to lose if a branded truck has a mishap, they too have been through the gauntlet of safety validation sufficient to vacate the driver’s seat.
For these and many other companies, the evaluation process continues well after driverless operations commence. This is natural from an engineering perspective and the responsible path forward for the fleet operators.
Robo-Taxis Might Actually Improve Traffic
In terms of non-safety factors, such as reducing CO2 emissions, Mr. Zipper conflated automated vehicles with today’s ride hailing. What could happen (or not) with robo-taxis is already happening (or not) with ride hailing, such as individuals sharing a ride versus preferring to travel alone. That is a weak rationale for his assertion that self-driving cars will “make gridlock worse.” In fact, given the high technical sophistication of robo-taxis, they could actively cooperate with one another to improve traffic flow, based on established inter-vehicle communications protocols and data accessed from the cloud.
Oh, The Exemptions!
Many pundits and politicians have opted for fear-mongering when it comes to Exemptions. In yet another version of this, Mr. Zipper noted “Last year the Senate considered a measure which would have allowed the carmakers to request that up to 80,000 self-driving cars per year be exempt from established automotive safety rules.” This could result in the average citizen freaking out: “self-driving cars will totally ignore safety!” His statement is technically accurate, but entirely without critical context: safety regulations require that the non-compliant approach be as safe or safer than what the regulation requires, and any requests must be approved by the National Highway Traffic Safety Administration (NHTSA). Furthermore, this exemption process is nothing new. Under current law, companies can seek an exemption from federal motor vehicle safety standards for up to 2,500 vehicles for up to two years, once again as long as the requester can establish that the vehicles will be as safe or safer than what the regulation requires. The proposal in Congress was aimed at raising the cap so that driverless commercial services could scale up.
Exemptions or not, since last summer NHTSA has required notification of any crashes involving advanced driver assistance systems and self-driving vehicles. This allows a more solid basis for safety assessment.
Count On Me To Be Boring
We are in a learning and evaluation phase involving responsible actors. Policy-makers and the broader community should hold them to a very high standard. Let the debate continue as to what is beneficial or harmful about automated driving but it is just not possible to make definitive and sweeping statements at this early point in commercial deployment.
But, OK, I get it. I understand that to get published in top-tier news outlets, a writer has to crystallize a vast array of factors into conclusive statements that are crisp and maybe jarring. Provocations win if you want to get published. I’ve had more than one mainstream journalist end a call quickly when they get the “maybe this, maybe that, it depends” Bishop response.
Regardless, anyone who aims to say what “will” happen as automated vehicle deployment scales up is jumping the gun.
Source: https://www.forbes.com/sites/richardbishop1/2022/02/21/self-driving-cars-an-epidemic-of-questionable-assertions/