In SF, We Find Out What Happens When Cops Pull Over A Cruise Self-Driving Car

Last week in San Francisco, officers from the SFPD decided to “pull over” a vacant self-driving car from the Cruise robotaxi pilot, which operates in some regions of SF at night. In the video of the encounter, the officers seem a bit confused about what is happening, and the bystanders watching are clearly laughing at the situation. These vehicles are now very common in San Francisco, so members of the public immediately recognized the car for what it was.

Police initially stopped the vehicle because it was driving at night without its lights on. Cruise states that the lights were off due to human error.

This event was interesting because one of the most common questions from newcomers learning about self-driving cars a decade ago was “What happens when police want to ticket a self-driving car?” People were curious about “who” would get the ticket, and how one would even pull such a car over. This isn’t the first police encounter with a self-driving car, but it is perhaps the first with a vacant one.

The reality was that it wasn’t that big a question: a self-driving car doing something that would merit a ticket should be an extremely rare, never-repeated event. In such a rare event, the next question is whether the police need to stop the car because it represents a danger to public safety, or can simply report the event and fine those responsible. That would be up to the police, but in general most events would fall in the latter class and not require pulling the vehicle over.

Nonetheless, the advanced self-driving teams have all developed systems to handle this eventuality, and have published guides which they distribute to local law enforcement on how to interact with the cars. Typically the cars have systems to watch for the flashing lights of emergency vehicles, usually to listen for their sirens as well, and to act as required by law. Cruise has a guide for first responders, but perhaps these officers were not aware of Cruise or the guide, which of course is going to happen sometimes. The vehicle did appear to pull over for the police, though after stopping the first time and being approached on foot, it moved forward a bit and stopped more fully, with blinkers on. The move and blinkers were ordered by Cruise remote advisors once they were brought in to handle the traffic stop.
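To make that concrete, here is a minimal sketch, in Python, of the kind of detect-and-respond loop such a system might use. Cruise’s actual software is not public, so every name and signal here is a hypothetical stand-in for illustration only.

    from dataclasses import dataclass
    from enum import Enum, auto

    class StopState(Enum):
        DRIVING = auto()
        PULLING_OVER = auto()
        STOPPED_AWAITING_POLICE = auto()

    @dataclass
    class Perception:
        """Hypothetical perception outputs; a real stack fuses camera,
        LIDAR and microphone data to produce signals like these."""
        flashing_lights_behind: bool
        siren_audible: bool

    def update(state: StopState, seen: Perception, notify_remote_ops) -> StopState:
        """One tick of a simplified traffic-stop handler: on detecting
        emergency lights and a siren behind the car, begin pulling over
        and alert the remote operations center."""
        if state is StopState.DRIVING and seen.flashing_lights_behind and seen.siren_audible:
            notify_remote_ops("possible traffic stop detected")
            return StopState.PULLING_OVER
        if state is StopState.PULLING_OVER:
            # A real planner would wait until it confirms the car has
            # reached a safe stopping spot before making this transition.
            return StopState.STOPPED_AWAITING_POLICE
        return state

    state = StopState.DRIVING
    state = update(state, Perception(True, True), print)   # begins pullover
    state = update(state, Perception(True, True), print)   # now stopped
    print(state)  # StopState.STOPPED_AWAITING_POLICE

The real systems are vastly more complex, of course, but the key design point survives even in a toy version: detecting a stop hands control over to remote humans rather than leaving the car to handle the situation on its own.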

Why pull it over?

If a vehicle is doing things that are dangerous, police might decide it must be stopped immediately. Absent that, however, they are better off simply giving dispatch the license plate and having them contact the robocar company at its first responder contact desk, letting the company issue commands to the vehicle to pull over, or possibly return to base in a slow “safe mode.”

A vehicle is unlikely to deliberately violate the law, but mistakes can happen, and in this case an error left the lights off, which is a bit odd since Cruise only operates at night. The difference with robots is that once a mistake is identified, the team will fix it as quickly as possible and the mistake will never happen again. If a human driver makes an illegal left turn, you can give them a ticket, but that does little to stop other humans from repeating that mistake. Not so for robots. Indeed, if other companies hear about a ticket, they will work to make sure they don’t make the same mistake. It makes more sense for the government to work together with robocar companies to find and fix any such problems, rather than leaving it to patrol cars.

In this case, though, since it was a human error, there is nothing in the car’s software to fix. Cruise does claim it has fixed the issue that led to the error, which one might speculate means it will train the human advisors better, or perhaps add a warning to remind them when they have turned off the lights at a time they should be on.

(As it happens, a robocar like the Cruise will see very well with its lights off. The primary sensor, the LIDAR, illuminates the scene with invisible infrared light and actually sees slightly better in the dark. The cameras benefit from the headlights, though in a lit city they might see perfectly well without them. Other cars, on the other hand, need those lights to see the robocar, so leaving them off was a mistake, and hopefully Cruise now has steps in place to warn when they are left off at night.)

It is worth noting that one of Waymo’s most famous mistakes, in which a car got heavily confused by the cones of a construction zone, was stuck for a very long time and even fled its own rescue staff, was also caused by a mistake made by a remote human operator.

Cruise’s procedure for a traffic stop is to move the vehicle to a safe stopping location. In this case, the remote advisors were involved and picked a spot just across the intersection. That seems a bit unusual once police are out of their vehicle and near the car, and may have been a deviation from the best procedure. According to Cruise, the vehicle will also do this on its own when it detects a traffic stop. Admittedly, humans stopping for police are sometimes unsure where to do it and will make moves a bit like this, but rarely would they move once police are on foot.

There are cases where robocars could and should violate the law, because the law is wrong. For example, they should move at the speed of traffic, even if that’s above the speed limit. Almost all humans do this, and to not do it is to impede traffic. This should be worked out with local legal authorities, and should not result in tickets. At a grander scale, there are elements of the vehicle code that make no sense for robots, and should not apply to them — but again, this should be worked out in a meeting room, not on the street.

What if they have to?

Even though it shouldn’t happen often, as this incident shows, traffic stops sometimes will happen. In the case of Cruise, when the car identifies that it has been stopped by police, it looks for a safe place to pull over, and then displays a phone number on its main screen for the officers to call. It appears the remote operations center was involved even before that number was called, as it was a remote advisor who directed the vehicle across the intersection to a safe spot. There, the police came to the vehicle again, saw the number and called it. Cruise states it has been training first responders to look for and call the number.
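Laid out as code, the flow Cruise describes might look something like the sketch below, again in Python with purely hypothetical names and a placeholder phone number, since none of Cruise’s actual interfaces are public.

    from enum import Enum, auto

    class Resolution(Enum):
        RELEASED = auto()          # police let the car continue on its way
        AWAIT_RETRIEVAL = auto()   # staff will come to drive or tow the car
        FIXED_REMOTELY = auto()    # e.g. an advisor turns the headlights on

    # Placeholder for illustration only; not Cruise's real contact line.
    FIRST_RESPONDER_NUMBER = "415-555-0100"

    class CabinDisplay:
        def show(self, message: str) -> None:
            print(f"[screen] {message}")

    class RemoteOps:
        """Stand-in for the remote operations center."""
        def alert(self, event: str) -> None:
            print(f"[remote ops notified] {event}")

        def await_resolution(self) -> Resolution:
            # A real advisor would talk to the police and issue commands;
            # here we simply model the outcome of this incident.
            print("[remote ops] headlights on; car cleared to proceed")
            return Resolution.FIXED_REMOTELY

    def handle_traffic_stop(display: CabinDisplay, ops: RemoteOps) -> Resolution:
        """Mirror the sequence described above: show a contact number,
        loop in remote advisors, and wait for them to resolve the stop."""
        display.show(f"Stopped by police? Call {FIRST_RESPONDER_NUMBER}")
        ops.alert("traffic stop in progress")
        return ops.await_resolution()

    print(handle_traffic_stop(CabinDisplay(), RemoteOps()))

The notable design choice, if this reading of the incident is right, is that the car itself makes no decisions during the stop beyond parking safely; everything else is escalated to humans.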

Normally any encounter should be brief. The car should get on its way, or, if police decide to exercise their power to order something else, the robocar company would send an employee in another vehicle to fetch the car and drive it to HQ, or even have it towed. In this case, Cruise staff just turned on the lights and resolved the problem.

While these are early days, it seems the process could go better. There should be no need for police to phone a number most of the time; the car should detect the police stop on its own and immediately bring remote ops in, possibly using an external speaker before rolling down the window to talk to the officers (if safe). Generally, the car should not drive off once police are walking towards it, unless instructed to by police. This was a rare example where a vehicle was continuing to operate in an unsafe state, and needed to be stopped to correct that state.

Source: https://www.forbes.com/sites/bradtempleton/2022/04/13/in-sf-we-find-out-what-happens-when–cops-pull-over-a-cruise-self-driving-car/