The Argument Over Whether Tesla FSD Will Run Over A Child Or Dummy Child Misses The Point

A recent video from Tesla-critic Dan O’Dowd of the “Dawn Project” shows a test track with a road made of cones. There, a driver using Tesla’s FSD prototype drives towards a child crash-dummy and hits it. O’Dowd argues that Tesla FSD does not reliably brake for children. His critics alleged that the video was faked or badly made, and some did their own demos, including, to the distress of all, doing the test with actual live “volunteer” children. Few considered the question of whether a prototype like Tesla FSD is even supposed to reliably brake for children, which is the truly interesting issue.

O’Dowd has written frequently that he wants Tesla FSD banned. I covered his campaign previously, and he recently ran for the California Senate nomination solely to run anti-Tesla FSD campaign ads. He has naturally raised the ire of Elon Musk and the vocal pro-Tesla community, so there was immediate sparring. The original video linked above did not show FSD activated, though later releases showed it was. Some suggested that hard-to-read screen images indicated the driver was holding down the pedal. Other data show the car slowing down or issuing warnings, and arguments have gone back and forth about how real the demonstration is.

The demonstration was chosen to be provocative, since nothing is more scary than cars hitting children. Three children die in car crashes every day in the USA, and each year around 170 child pedestrians are killed by automobiles. For many, the reaction is that no technology that would ever run over a child is acceptable. NHTSA did begin an investigation into the deployment of Tesla Autopilot (which is a released product) and is starting to look at the FSD prototype. Others, such as Ralph Nader, have also called for taking the FSD prototype off the road. After a very small number of people repeated the test with real children, NHTSA issued a warning not to do this, and YouTube pulled videos of people doing it. Tesla has reportedly asked that the videos of their car hitting the test dummy also be removed. The Washington Post reports seeing a cease-and-desist letter from Tesla.

This issue is very complex, and as is not too unusual, nobody gets it exactly right. The FSD system, though called a beta, is more accurately called a prototype. Self-driving prototypes (and betas) do need large amounts of on-road testing in the view of most developers, and every team does this, with human “safety drivers” monitoring the system and regularly intervening when it makes mistakes to prevent incidents. Tesla is unusual in that it allows ordinary customers to do this testing, while all other companies use employees with some level of training for the task, and commonly put two of them in each vehicle.

Prototypes, by their nature, fail, including on important things like stopping for pedestrians in the road. Every team, from the best like Waymo down to the worst, has put vehicles on the road that would regularly do something seriously bad were it not for intervention, and most feel that testing such early-stage vehicles was and still is necessary to make progress and eventually deploy the cars. Once deployed, the cars will be of much higher quality and save lives — lots of them — so everybody wants that deployment to happen as soon as possible, but there are many issues to discuss about how we get to that point and when we have reached it.

Driver Assist vs. Self Driving

In addition to the question of whether Tesla’s use of customers to test its prototype is a good idea, many issues revolve around the difference between driver-assist systems, which involve a human fully engaged in the drive, supervising the system rather than physically moving the controls, and self-driving systems, where no human supervision is needed (and indeed, the vehicle can run with nobody in it).

Many industry insiders feel that these are two fairly different things, and that it was an error for NHTSA to declare them as just two different “levels” of automation technology. Sterling Anderson, co-founder of Aurora self-driving, thinks that moving from driver assist (or ADAS, for advanced driver assist) to self-driving is like trying to get to the moon by building taller ladders.

The first big advanced driver-assist system that let a person take their feet off the pedals was cruise control, in particular adaptive cruise control. Later, lane-keeping arrived, which let you take your hands off the wheel, and soon the two were combined in products like Tesla “Autopilot.” As ADAS (driver-assist) tools, these are meant to be used with full attention.

While not everybody believed it at first, the general conclusion today is that these systems work, and do not create a danger on the road. People were doubtful of even basic cruise control at first, but it soon became a very common feature on cars.

Tesla Autopilot raised new questions for a variety of reasons, but the most interesting issue revolves around the fact that it is clearly superior in functionality to earlier products, including simpler cruise controls — yet that superiority could actually make it more dangerous, and thus inferior. An odd paradox arises: the better the system is, the worse the result might be, because the superior system induces an “automation complacency,” where the supervising driver pays less attention to the road. (Some also believe that Tesla’s “Autopilot” and FSD names encourage this complacency, and that the public messaging around these products does as well.)

This is not a good paradox to have. We want to develop better systems, but if a system gets worse overall as you make it better, it’s much harder to reach really good systems as you travel through a valley of danger where things get worse before they get better. Today, we would never fault a standard cruise control for the fact that it will hit a child or blow through a red light — those systems were much simpler and had no capability at all to avoid such mistakes. But many want to fault the much superior system, which stops for most obstacles, because it doesn’t stop for 100% of them — even though no system will ever be perfect enough to avoid 100% of errors, and even though humans will not be perfect either.

I participated in the drafting of the first self-driving testing laws in the world, in Nevada and California. The players agreed there should be some basic rules about how to do testing on the roads (previous rules did not prohibit it, since of course nobody had ever thought to prohibit such testing). At the same time, the automakers, who sell ADAS tools, did not want their ADAS cars to be subject to self-driving test regulations, so ADAS operation was carved out as not covered by those regulations.

In California, in particular, this created an open question. All self-driving testing (with a few recent exceptions) is done with a supervising safety driver. In that way it’s very much like driving an ADAS car. The law was not clear on the distinction, which has created some interesting situations. Anthony Levandowski, who also participated in the drafting of the regulations, was later head of Uber ATG. He declared that since all their cars at the time operated only with a safety driver, this was ADAS, and Uber did not need to follow the self-driving testing regulations. The California DMV said no, that under this logic nobody was testing self-driving, and that was not the intent of the law. They told Uber that it needed to register its cars as self-driving test vehicles, or the DMV would pull the license plates of the Uber cars using its special powers. Uber complied.

The DMV took the approach that if you were trying to make a self-driving system, even though it was an early, incomplete one that needed supervision and was always supervised, you should be considered a self-driving testing company governed by the rules.

In spite of this, Tesla continues the same approach. Since Tesla definitely does have an ADAS product, they never report doing any self-driving testing to the state. So far the DMV has let this slide — even the testing of Tesla FSD by Tesla employees, which is very hard to distinguish from what they stopped Uber from doing. The DMV might well regulate this. But what about customer use, where it’s officially (if you read the fine print and ignore the name) a driver-assist tool?

But it seems to work

The main solution for this has been ways to ensure supervisory attention remains high. There are various techniques for this, including monitoring drivers in various ways and nagging them if they are not paying attention, or, as noted, having trained professional drivers or even a team of them with multiple sets of eyes on the road. For many decades we have trained teenage drivers in driving schools by having an instructor who can grab the wheel and has their own brake to stop the car, and that system has worked very well.

The strong conclusion so far is that this works. The safety record of major self-driving companies like Waymo is exemplary. Waymo reported over 20 million miles of testing with safety drivers in 2021, and in that time had perhaps 2 at-fault accidents. Human beings, on average, would have closer to 40 accidents over that many miles. Just as driving students with instructors do much better than freshly licensed drivers, the monitoring system clearly works, and these vehicles actually create less risk to the public than similar driving by ordinary people would.
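
For a rough sense of where the “closer to 40” figure comes from, here is a back-of-envelope sketch. It assumes roughly one police-reported crash per 500,000 miles for ordinary US drivers — a commonly cited ballpark, used here as an assumption rather than an official statistic.

```python
# Back-of-envelope sketch, assuming ~1 police-reported crash per 500,000 miles
# for ordinary drivers (a ballpark assumption, not an official statistic).
waymo_miles = 20_000_000          # reported supervised test miles
waymo_at_fault = 2                # rough count of at-fault accidents cited above
human_miles_per_crash = 500_000   # assumed average for ordinary drivers

expected_human_crashes = waymo_miles / human_miles_per_crash
print(f"Expected crashes for average humans over {waymo_miles:,} miles: {expected_human_crashes:.0f}")  # ~40
print(f"Reported at-fault crashes with safety drivers: {waymo_at_fault}")                               # 2
```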

In the USA at least, if nobody is getting hurt — or at least if the level of risk is less than that of ordinary driving — an activity would generally not be regulated. The USA approach is much more permissive — you don’t have to prove what you are doing is safe, but if it turns out to be unsafe you may get stopped and found liable for any harm you caused. Some other countries would require regulators to decide in advance if something is safe, a much higher bar which is much less conducive to innovation.

Of course, in one famous case, the safety driver approach had a fatality, when Uber ATG’s vehicle killed a pedestrian in Arizona. The NTSB investigation and later court cases found the safety driver negligent (she was watching TV instead of doing her job), though Uber was also faulted for having a poor culture of managing its safety drivers, which contributed to the error. The important conclusion, however, is that the safety driver system works and does not itself put the public at undue risk, though it is obviously possible for human safety drivers to be negligent and create high risk.

That’s the system with trained paid drivers. Tesla goes further, and has ordinary customers do the job. There have been a number of incidents where Tesla drivers have clearly been negligent in supervising the Autopilot product, and crashes, including fatal ones, have taken place. However, Tesla cars drive vastly more miles with Autopilot than any self-driving team does, so the presence of negative and even tragic events is not necessarily evidence that the system is exposing the public to greater risk.

Each quarter, Tesla publishes misleading statistics claiming that drivers using Autopilot have better safety records than those who don’t, even though some Autopilot users are negligent. While these numbers are tantamount to a lie, I and others have attempted to reverse engineer the real numbers, and they are not that bad: they suggest Autopilot users have a safety record similar to non-users. While it is not superior, neither is it exposing people to additional risk. The result is close enough that NHTSA is conducting an investigation into Tesla crashes with emergency vehicles. It is unknown whether it will take the precautionary approach or look at the overall safety record.
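
One reason the raw quarterly numbers mislead is road mix: Autopilot is used mostly on highways, where crash rates per mile are lower for everyone. The sketch below uses entirely made-up rates and mileage splits purely to illustrate that confounding effect; it is not a reconstruction of the actual figures.

```python
# Illustration of the road-mix confound with made-up numbers, not real data.
# Hypothetical crash rates per million miles for an average human driver:
HIGHWAY_RATE = 0.5
CITY_RATE = 2.0

def blended_rate(highway_share: float) -> float:
    """Crash rate per million miles for a given highway/city mileage mix."""
    return highway_share * HIGHWAY_RATE + (1 - highway_share) * CITY_RATE

# Suppose Autopilot-style miles are 90% highway, while overall driving is 50% highway.
autopilot_style_mix = blended_rate(0.9)   # ~0.65 crashes per million miles
all_roads_average = blended_rate(0.5)     # ~1.25 crashes per million miles

# A system exactly as safe as a human looks nearly twice as safe on raw numbers,
# simply because of where it is used.
print(autopilot_style_mix, all_roads_average)
```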

Many have suggested that Tesla could improve their record with better driver monitoring. The standard monitoring simply demands the driver regularly apply force to the wheel. Other companies have cameras which watch the eyes of the driver to make sure they are watching the road — had Uber ATG done this, they would have prevented their fatality. Tesla has recently started using driver gaze monitoring as well.

It should be noted that the calculation showing Tesla Autopilot safety as similar to regular driving is a utilitarian one. It is actually the combination of a higher accident rate among a small cadre of negligent Autopilot users who ignore the road or fall into automation complacency, and a better safety record from those who are diligent. We have a philosophically hard time with this — we don’t like making some people bear higher risk even though more people get lower risk. We dislike it so much that we might deprive the diligent of a tool that makes them safer in order to protect those who are negligent, even though that is not our goal.
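
To make that utilitarian point concrete, here is a toy weighted-average calculation. The shares and risk multipliers are hypothetical numbers chosen for illustration, not measured values.

```python
# Toy weighted average with hypothetical numbers: a small negligent group with
# elevated risk plus a larger diligent group with reduced risk can net out to
# roughly the same overall crash risk as ordinary driving.
baseline_risk = 1.0        # normalized risk of ordinary, unassisted driving

negligent_share = 0.10     # assumed fraction of users who stop paying attention
negligent_risk = 3.0       # assumed risk multiplier for that group
diligent_risk = 0.80       # assumed risk multiplier for attentive users

overall = negligent_share * negligent_risk + (1 - negligent_share) * diligent_risk
print(f"Overall relative risk: {overall:.2f} vs. baseline {baseline_risk:.2f}")  # ~1.02
```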

What about FSD?

The data above concern Autopilot, which is a shipping product. Tesla FSD is not a shipping product. They call it a “beta” (which is an almost-finalized product in its last testing phase before release), but it’s not that at all. They now have over 100,000 Tesla owners trying it out. To use it, an owner has to pay a fee (now rising to $15,000) to pre-order the eventual FSD system, and pass a sort of safe-driving test to be admitted to the testing cohort. The safe-driving test is largely bogus and doesn’t test what you would actually want for this task, but it does mean not everybody who pays gets it.

They warn drivers that the system has many bugs and needs constant supervision like ADAS, saying, “It will do the wrong thing at the worst time.” That’s a fairly clear statement, but at the same time the name “full self driving” obviously conjures up an image different from ADAS. I have driven with this system and judged its performance as a self-driving system to be quite poor.

Even so, in spite of intuitions to the contrary, Tesla’s approach seems to be working. With 100,000 drivers, the system is undergoing millions of miles of operation (though there is no public data on how much each driver uses it). We also know there is a great deal of public scrutiny on any accidents involving testing of the FSD system, and only a small number of minor incidents have become public. While some Tesla FSD testers are such big Tesla fans that they might hide an accident they had, it’s extremely unlikely that significant numbers of major accidents are happening with none being revealed. Any serious police-reported accidents, especially injury accidents, would be very likely to get attention, as they do for Autopilot.

And yet we don’t see them. While this is not a true scientific analysis, it seems likely that Tesla’s FSD program is not currently putting the public at risk. This might be attributed to how bad the system is: most drivers know it would be really stupid not to pay attention to it in its current state. It is possible that as it gets better, this will change. Consider basic cruise control, which is much more primitive; it works because nobody would dare take their attention off the road while using it.

Tesla’s Naming

Tesla doesn’t help itself with its product names. Many have been critical of Autopilot as a name, because the (false) public perception is that aircraft autopilots fly the plane on their own. In fact, an aircraft autopilot is a vastly, vastly simpler system than Tesla’s car Autopilot, and it only works because, up in the air, you’re very far from anything you might hit. That doesn’t stop the false public perception, though. With FSD, it’s even worse. It’s not at the level of self-driving yet, not even close, and it’s certainly not “full.” This confusion led Waymo to give up on using the term “self-driving” to describe its product, out of fear that Tesla had mis-defined the term in the public mind.

Tesla doesn’t really need more hype for their products. They sell all they can make. It is perplexing that they deliberately go with these names rather than understating and overdelivering. They would save themselves a lot of grief — other than that of the “all publicity is good publicity” kind.

So is it OK if a Tesla will hit a child-sized dummy?

The surprising answer is “probably.” All prototype test systems will do things like this, and short of forbidding all testing of prototypes, there is no way to demand perfection even on an issue as evocative as crashes with children. It is difficult to craft a regulatory regime that would ban Tesla FSD without also banning or slowing the development of very important technology that will, in time, save millions of lives. Millions of lives is no small thing; it would be one of the greatest advances in safety in human history.

At most, we could try to define criteria that those who supervise such test systems must meet. We might insist on better monitoring (which Tesla is doing). We might ask that they be professionals or pass a test, but then we would have to create a body that defines and enforces these tests, which would be cumbersome. After all, you can become a driving instructor just by passing a written test and getting a certificate of good health; no live skill test is required.

Nonetheless, we could define a set of rules for who can supervise a prototype system, and it might stop Tesla’s activity. But to do so would be to fall into the paradox of “the better it is, the more we don’t like it.” We don’t want to make it harder to cross that valley on the way to a better system. The real question is why we would even go there if nobody is being hurt. Right now there are no reports of people being hurt, certainly not at levels that would indicate more risk than that of ordinary driving. We don’t want to fall into the trap of banning things because we have an intuition that they are dangerous. We only want to ban things that actually are dangerous.

It’s quite possible that Tesla FSD testing will later become too dangerous, whether because of automation complacency or because of changes to the system. It is reasonable to ask Tesla to report to authorities how many incidents take place during testing of the system, so we can know if and when it becomes dangerous. O’Dowd’s approach of “we should ban this because it seems to me it would be dangerous” is the wrong one, at least in the USA.

Source: https://www.forbes.com/sites/bradtempleton/2022/08/25/the-argument-over-whether-tesla-fsd-will-run-over-a-child-or-dummy-child-misses-the-point/