People often ask, “Where’s my self-driving car? Why don’t I have one, and when will it come?” A lot of people feel they were promised a car by the late 20-teens, and that it’s late, and perhaps isn’t coming at all, like the flying cars talked about decades ago.
In this two-article series (with accompanying videos) let’s look at the core reasons you probably aren’t riding in a robocar today, and when that might change. What are the core technological, legal and social issues standing in the way, and which issues actually aren’t blockers?
For most of us, these cars can’t get here soon enough. They have the promise of avoiding a decent fraction of today’s car accidents, which kill over a million people each year around the world. They will make our lives easier and rewrite the principles of transportation. In doing that, they will rewrite where we live and the very nature of the city, as well as dozens of other industries from energy to retailing. Every day we delay getting these things out on the road in volume, thousands will die at the hands of people who shouldn’t have been driving. Every day we delay.
Of course, it is hard
To be clear, the biggest reason that “it’s taking so long” is that it’s hard. It is one of the grandest software research projects ever undertaken. It has required not just breakthrough software but also tons of detailed work down in the weeds, dealing with vast numbers of special cases and mapping the world and all its wrinkles. Anybody who thought, or thinks, it could be delivered on a schedule is wrong, and has never worked in software. When car companies threw out dates like 2020, those were hopes, not predictions, and that some tech companies actually pulled it off was amazing. Multi-year projects requiring breakthroughs are never predicted accurately.
Nobody with a software background would be at all shocked if predictions for such a grand project made many years ago aren’t accurate. So things are not “behind schedule,” even if they did not meet optimistic hopes. This also means things are being done in smaller steps.
The biggest blocker, though, is not actually doing it (i.e. making it safe) but knowing that you’ve done it.
Proving that you’ve really made it safe
The first technological goal was to just make it happen. To make a car that can drive itself safely. That’s a massive achievement, but at least in a few cities, a few companies have already pulled that off. Driving more safely than the average human has been done by companies like Waymo on the easy streets of Phoenix. That was “the hard part” – but an even harder part is defining what safety is, measuring it, and proving you’ve done it. You need to prove it to yourself, to your board, to your lawyers, to the public, and maybe even the government. Consider the Moderna Covid vaccine: it was ready in February 2020, before the first lockdown, yet the world waited 10 months – while a million people died without it – before letting the first people get a shot. We waited for them to prove they had done it.
Measuring safety is pretty hard. We know how often human drivers have crashes of all types, from minor dings up to fatalities. Fatalities happen about every 80 million miles in the USA, or about every 2 million hours of driving. We can’t test every software version by saying, “Let’s have it drive a billion miles and see if it kills fewer than the dozen people that would die if humans drove that far.” It’s an impossible distance to drive on real roads even once, let alone with every new version. We might drive much less and count dings and minor crashes – in fact this is the best approach we’ve come up with so far, because it’s at least possible – but we’re not sure whether those minor events relate to injuries and fatalities the same way for robots as they do for people.
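To make the scale of that problem concrete, here is a minimal back-of-the-envelope sketch in Python, using only the rough numbers above (one fatality per 80 million human-driven miles) and the standard assumption that fatalities arrive like a Poisson process. It shows how many fatality-free miles a fleet would have to log before it could even statistically claim to match the human rate – before any new software version resets the clock.

```python
import math

# Assumed round number from above: roughly one fatality
# per 80 million miles of human driving in the USA.
HUMAN_FATALITY_RATE = 1 / 80e6  # fatalities per mile

def miles_needed(confidence=0.95, target_rate=HUMAN_FATALITY_RATE):
    """Miles of fatality-free driving needed to claim, with the given
    confidence, that the true fatality rate is no worse than target_rate.

    Assumes fatalities are a Poisson process: observing zero events over
    m miles rules out (at this confidence) rates above -ln(1 - c) / m.
    """
    return -math.log(1 - confidence) / target_rate

print(f"{miles_needed():,.0f} fatality-free miles for 95% confidence")   # ~240 million
print(f"{miles_needed(0.99):,.0f} fatality-free miles for 99% confidence")  # ~368 million
```

Even this toy calculation lands in the hundreds of millions of miles, which is why teams lean on more frequent proxies like minor crashes and on simulation rather than waiting for fatality statistics.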
Many start the traditional auto industry way: they test each component of their vehicles to make sure it’s reliable and up to specification. They try to do the same with systems of components, but that methodology becomes difficult as things get more complex. This is called functional safety: are the components and systems free of defects, and will they handle known potential failures?
More recently there’s been more effort to bump this up to a systems level and test the “Safety of the Intended Functionality.” With SOTIF, teams work to assure whole systems will still function in the face of problems, component failures and anticipated misuse. This often involves simulation of the whole system or parts of it, or “hardware-in-the-loop” simulation that is easier and safer than live testing on the roads.
Simulation testing offers the ability to test a system in millions of different scenarios. Anything anybody has ever seen or heard of or dreamt of – with hundreds of slight variations of all those things.
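To illustrate what “hundreds of slight variations” means in practice, here is a simplified sketch of a scenario sweep. The scenario fields and the specific values are hypothetical, not any team’s actual format; the point is how one base situation fans out into a large test matrix.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Scenario:
    """A hypothetical, highly simplified scenario description."""
    pedestrian_speed_mps: float       # how fast the pedestrian walks
    pedestrian_offset_m: float        # starting distance from the crosswalk
    ego_speed_mps: float              # our vehicle's approach speed
    weather: str

# One base situation ("pedestrian near a crosswalk") fanned out
# across slight variations of each parameter.
variants = [
    Scenario(*combo)
    for combo in itertools.product(
        (0.5, 1.0, 1.5, 2.0),        # pedestrian speeds
        (0.0, 0.5, 1.0, 2.0, 4.0),   # offsets from the crosswalk
        (8, 11, 14, 17),             # ego speeds
        ("clear", "rain", "night"),  # conditions
    )
]
print(len(variants))  # 4 * 5 * 4 * 3 = 240 variants from a single seed scenario
```

Multiply a few hundred variants by every scenario anybody has ever logged or imagined, and you get the millions of simulated runs teams talk about.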
Perhaps the hardest thing to test, but the thing you most want to know, is how well a system responds to never-before-seen situations. While you can create simulation testing to show the vehicle does well in almost all expected situations, a great magic ability of human minds is the capacity to handle never-before-seen problems. AIs can do this, but they’re not quite as good. Eventually, we would hope for a way to get new, realistic, dangerous scenarios every day. It’s good that today’s cars have been programmed to handle everything anybody’s ever thought of, but the real gold standard may be to throw 20 situations it has never seen before at it, every day, and find that it handles most of them. Even humans don’t handle all of them. That’s one thing I hope to see happen through the Safety Pool project, which I helped initiate with the World Economic Forum, Deepen.AI and the University of Warwick.
Even with all the simulation, you also need to test live on the road. Nobody is going to deploy a car that hasn’t shown it handles the real world very well. While expensive, the system of using human safety drivers to oversee robocar operations actually has a superb track record, and does not endanger the public compared to ordinary human driving.
In the industry, every company falls over itself to describe how devoted it is to safety. It is their job to make a safe vehicle, but they make these declarations to please officials and the public. Ironically, the public interest is not to make the safest robocars, but rather the safest roads. Robocars are a tool that can bring safer roads, and the sooner they get here the sooner and better they will do that. Officials, if they took their duty to improve overall road safety seriously, would actually encourage companies not to go too far on safety, and instead to focus on the quickest deployment of safer technology – even if doing less to prove it’s safe, while deployment is still small, makes that deployment happen faster. But they never will, because of the way society reacts to errors and risk.
A second component of safety is cybersecurity. We do need these cars to be robust against attempts to take them over. Some people don’t like to talk about cybersecurity, but the auto industry’s history here has not been great. Doing this involves not just secure practices and tools, but also what’s called “red teaming,” where a team of expert white-hat hackers hunts from the outside to find vulnerabilities until they can’t find any more. One other important tool is minimizing connectivity, or what security people call “attack surfaces.” Many in the industry are obsessed with what they imagine is the “connected car” and mistake connectivity for as big a revolution as self-driving. It isn’t, not remotely. Some connectivity is needed, but it should be used sparingly so the real revolution can stay secure.
One of the biggest challenges for testing is the wide use of machine learning by all robocar teams. Machine learning is a hugely powerful AI tool, and most feel it’s an essential one, but it tends to produce “black box” tools which make decisions that nobody fully understands. If you don’t know how a system works, or why it fails or does the right thing, it’s hard to test and certify it. In Europe, they have been making laws demanding that all AI be “explainable” at some level, but many machine learning networks are very hard to explain. That’s scary, but they are so powerful that we won’t give them up. We may be faced with a black box that’s twice as safe in testing as an explainable system, and there are compelling arguments in favor of either choice.
Predicting the Future
A robocar is covered with sensors, such as cameras, radars, LIDAR lasers and more. Sensors are probably the most discussed aspect of the hardware, but in fact sensors don’t tell you what you want to know at all. That’s because sensors tell you where things are right now, and you don’t care so much about that. You care where things are going to be in the future. The information from the sensors is just a clue towards the real goal of predicting the future. Knowing where something is and how fast it’s moving is a good start, but knowing what it is matters just as much for figuring out where it will be. Most of the objects on or near the road are not ballistic – a human is in charge and can change course. That’s why one of the key areas of research today is getting better at predicting what the others on the road, in particular the humans, are going to do. This can range from knowing driving behavior to figuring out if a pedestrian standing on the corner is about to enter the crosswalk or is just standing there surfing the web.
While several teams have made great progress, it turns out that people are still better than today’s robots at predicting other people. Getting better at that is one of the key problems on the todo list, particularly in more complex environments like busy cities. Predicting the future also involves predicting how others will react to your own movements and to the predicted movements of others. A lane merge or an unprotected left turn can be a dance with give and take, and robocars will constantly be trying to improve how they perform it.
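To see why prediction is more than extrapolating current motion, here is a toy sketch. The numbers, field names and the “facing the crosswalk” cue are purely illustrative assumptions, not any production prediction model; the contrast is between a ballistic (constant-velocity) forecast and one that conditions on a simple intent cue.

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position along the curb (m)
    y: float   # distance from the crosswalk edge (m)
    vx: float  # measured velocity (m/s)
    vy: float
    facing_crosswalk: bool  # toy "intent" cue a perception system might supply

def constant_velocity(track: Track, horizon_s: float):
    """Ballistic extrapolation: fine for a rolling ball, poor for a person."""
    return track.x + track.vx * horizon_s, track.y + track.vy * horizon_s

def intent_aware(track: Track, horizon_s: float, walk_speed_mps: float = 1.3):
    """If the cue says 'about to cross', predict motion into the crosswalk
    even though the measured velocity right now is zero."""
    if track.facing_crosswalk and abs(track.vx) < 0.1 and abs(track.vy) < 0.1:
        return track.x, track.y - walk_speed_mps * horizon_s
    return constant_velocity(track, horizon_s)

standing_pedestrian = Track(x=0.0, y=2.0, vx=0.0, vy=0.0, facing_crosswalk=True)
print(constant_velocity(standing_pedestrian, 3.0))  # (0.0, 2.0): "they'll stay put"
print(intent_aware(standing_pedestrian, 3.0))       # (0.0, -1.9): "they'll be in the road"
```

The hard part, of course, is producing that intent signal reliably – distinguishing the pedestrian poised to cross from the one absorbed in their phone – which is exactly where today’s systems still trail humans.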
Sensing faster
Sensors may only be a means to the real goal, but the better they do, the better you can predict that future. Teams are still looking to make sensors faster, to make perception and prediction faster. One thing that’s important is knowing the speed of moving objects. Radar tells you that, but cameras and older LIDARs don’t, unless you look at multiple frames. Some newer LIDARs can tell you speed as well as distance. Looking at multiple frames takes at least as long as capturing those frames, and usually longer.
One situation that can be a problem is moving on the highway behind a bigger vehicle. Imagine that ahead of that vehicle is a truck stalled on the shoulder, sticking into the lane. That happens a lot with accidents and emergency vehicles. Suddenly the big vehicle in front of you veers right to avoid the obstacle, and you see the stalled truck for the first time. You really don’t have much time to brake or veer, and you may not even have anywhere to go. If you have to look at 3 frames of video to see that it’s indeed not moving, that’s probably 1/10th of a second wasted, and this is a situation where that can matter. So a lot of teams are looking for ways to get that edge, and they have found it mostly in LIDARs that can measure “Doppler” to know the speed of everything they hit with the laser. Radars know speed too, but the world is full of stopped objects reflecting radar, and it’s hard to tell the stopped vehicle from the stopped guardrail next to it.
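A rough calculation, using assumed round numbers, shows what that tenth of a second costs at highway speed compared with a Doppler-capable sensor that reports velocity on the very first return:

```python
# Assumed round numbers for illustration only.
speed_mps = 29.0          # roughly 65 mph
frames_needed = 3         # frames to confirm the truck isn't moving
frame_rate_hz = 30        # typical camera video
multi_frame_delay_s = frames_needed / frame_rate_hz   # ~0.1 s, as described above
doppler_delay_s = 0.0     # velocity arrives with the first laser return

extra_travel_m = speed_mps * (multi_frame_delay_s - doppler_delay_s)
print(f"Extra distance covered before you know it's stopped: {extra_travel_m:.1f} m")
# ~2.9 m of closing distance lost, in a situation with little room to spare
```

Roughly three meters of closing distance is the difference the faster sensor buys you, which is why Doppler LIDAR is attractive even though cameras and older LIDARs can get the same answer a few frames later.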
Taking the Long Way
I will briefly mention that a reason one famous team – Tesla
That’s part one. Part two looks at things like being a good citizen of the roads, why robocars are being deployed one town at a time instead of everywhere at once, and the problems of dealing with more mundane logistics like pulling over to pick up riders, business models, apps, and worrying too much about safety while getting governments and the public to accept you. I will also list a few factors that are being worked on but are not real blockers to deployment. Look for part two in the days to come.
Some feel the fact they don’t have or ride in a robocar in 2022 means development is way behind schedule. In reality, there never was a serious schedule, only hopes. In fact, this list of problems is cause for optimism, because the remaining problems seem generally tractable. Hard work and money, not breakthroughs, are needed to deal with most of them.
Stay tuned for part two, in video and text form.
Source: https://www.forbes.com/sites/bradtempleton/2022/09/26/why-dont-you-have-a-self-driving-car-yet–this-2-part-series-explains-the-big-remaining-problems/