Seeing Under the Car Has Been a Low-Priority Edge Case
The edge case for recognizing something has gone wrong underneath self-driving vehicles has not been a priority, but an incident in San Francisco is shedding light
By John P. Desmond, Editor, AI in Business
Edge cases in autonomous driving are the scenarios software developers treat as lower priority to program for, such as what the car should do if it is dragging something underneath it.
So when a GM-owned Cruise autonomous car in San Francisco in October dragged a jaywalking pedestrian underneath it for 20 feet before stopping, according to a recent account in The Washington Post, questions were raised about how the car responded.
Following the crash, the California Department of Motor Vehicles suspended the company’s permits, and after that, Cruise halted its driverless testing program nationwide. That was followed in January by the Department of Justice and the Securities and Exchange Commission opening an investigation.
GM itself disclosed that the pedestrian was dragged under the car for 20 feet, releasing the findings of a 100-page report on the incident prepared by a law firm hired by GM.
The judgment of Cruise executives was subsequently called into question, since they had not initially reported relevant details to the authorities.
It was a difficult turn of events for Cruise, which in the summer received permits to offer its automated robotaxi service in San Francisco, a major milestone in its nationwide plan.
The report, from Quinn Emanuel Urquhart & Sullivan, reconstructed the accident, describing a jaywalking pedestrian stepping into the busy intersection, getting hit by a human-driven car, being flung into the path of the autonomous vehicle, and then being dragged for 20 feet before the car stopped.
‘Lack of Urgency in the Public Discourse’ Cited
Cruise and Waymo had been poised to expand their robotaxi services in San Francisco, Austin, Phoenix and Los Angeles after San Francisco issued its permits. “They are a real, albeit still marginal, part of the city’s transportation system,” stated an account in MIT Technology Review published in June just after the permits were issued, written by Benjamin Schneider, a journalist who spent a year writing about robotaxis for the San Francisco Examiner, and who had taken many rides in Cruise robotaxis.
“I’ve been struck by the lack of urgency in the public discourse about robotaxis,” he stated, presciently, as it turns out. “I’ve come to believe that most people, including many powerful decision makers, are not aware of how quickly this industry is advancing,” he added.
San Francisco has no regulatory authority over the robotaxis and police cannot legally cite them for moving violations, according to the account. Authorities in Texas and Arizona have been quicker to respond, with Texas changing its laws in 2017 to declare the owner of a driverless car is “considered the operator” and can be cited for breaking traffic laws, “regardless of whether the person is physically present in the vehicle,” according to a recent account from CNBC.
Arizona authorities revised the state’s traffic laws to declare that the owner of an autonomous vehicle “may be issued a traffic citation or other applicable penalty if the vehicle fails to comply with traffic or motor vehicle laws,” according to the account.
While self-driving car executives cite statistics saying that the cars are safer than those operated by humans, some observers argue that more regulation is overdue.
“It seems like while they make fewer of the kind of mistakes that we see from human drivers, they make interesting new kinds of mistakes,” stated Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. “It has the feel of a human subject mass experiment, right? Without the kind of consent that we usually want to see as part of that.”
California state Senator Dave Cortese of San Jose recently launched a probe into how the DMV issues and revokes permits for driverless car companies in California, with a focus on safety standards and the recent issues involving the Cruise fleet. “We’re using the public square basically as a laboratory for trial and error,” he stated.
The CNBC account contained new reporting on details of the October Cruise pedestrian accident, stating that after coming to a complete stop, the car started up again and tried to pull over to the side of the road while the pedestrian remained trapped under it. The pedestrian remained in a San Francisco hospital as of December, although her status had been upgraded to “good condition.”
Since the accident, Cruise has fired nine executives and its CEO resigned, according to a recent account in the San Francisco Chronicle.
Lessons For Self-Driving Car Developers From Dr. Lance Eliot
Another observer of autonomous car development and software programming, Dr. Lance Eliot, the CEO of Techbrium, Inc., whose columns in Forbes, Bloomberg and other outlets have reached over 7.4 million views over the past several years, sees lessons in the Cruise pedestrian accident. In an email response to a query from AI in Business, Dr. Eliot stated:
“The unfortunate and distressing incident intertwines two key elements. One element is the lack of edge case accommodation; the other element appears to have been a problem in communication and poorly performed post-incident crisis management.
“Turns out that the post-incident actions were the most visible reputationally damaging impacts to GM Cruise and in a sense undermined attempts to later explain the technological facets and actual details. As reported widely, the firm apparently was not immediately forthcoming about what happened during the incident. Purportedly, the firm did not fully disclose the aspects involving the self-driving vehicle proceeding to pull over to the side of the road after the initial impact (which, dismally and disturbingly, led to dragging the stricken pedestrian). If indeed this was omitted during discussions with governmental authorities, some would contend that this is an example of where the (alleged) ‘cover-up’ is in a sense reputationally worse than the original actions and overwhelmingly negated subsequent attempts to describe what the automation did and didn’t do. Whether this communication difficulty was intentional or inadvertent is still not fully known.”
Car Could Not See the Alligator Underneath in Florida
Dr. Eliot expressed frustration that autonomous vehicles may not be scanning under the vehicle.
“From an edge-case perspective, I was not at all surprised that such an incident occurred. I have been stating for many years that the autonomous vehicle industry has generally expressed a lack of interest and pursuit in adding capabilities to self-driving cars entailing scanning under the vehicle itself. This is not a somehow unknown edge case. I say this because, at the time of this incident, some said that this edge case was supposedly surprising and previously never considered. That's just not true.
“Years ago, for example, I had previously brought to light a situation in Florida whereby a person went up to a conventional car and an alligator was hiding under the vehicle. The person was taken aback and narrowly escaped injury. I reported on this and noted that this was once again a wake-up call for having self-driving cars equipped to inspect what might be under the vehicle. The use of cameras and other equipment could readily be used for this task. I also have written about the use of ground penetrating radar (GPR), another type of radar that could be used to look beneath a self-driving car. My point is that there are numerous ways to look under a self-driving car.
“Regrettably, this has taken a backseat to other issues considered more pressing when it comes to advancing self-driving cars. Perhaps a hard lesson learned here is that the cost and effort to include the under-the-car assessment capabilities should be given suitable attention.
“If we are aiming to have widespread public use of self-driving cars, the capability to detect and contend with whatever might be under the vehicle is a real issue and can be solved, if the makers of autonomous vehicles wish to do so. In my opinion, this is not an edge case and should instead be considered part and parcel of what any properly equipped self-driving car can do.”
Dr. Eliot gets the last word in this account.
See the source articles and information in The Washington Post, MIT Technology Review, CNBC, the San Francisco Chronicle and from Dr. Lance Eliot of Techbrium.