Issues in Science and Technology invited me to respond to a recent piece on AV safety assurance by Marjory Blumenthal and Laura Fraade-Blanar. I’ve pasted my letter here. It’s also worth reading the response from the excellent Professor Missy Cummings.
There's an old Irish joke in which a man stops to ask for directions and is told, 'Well, I wouldn't start from here'. When looking for guidance on governing new technologies, our starting points are often the problem. It is hard to escape our existing technological systems and our existing frames of reference. This is the case with automated vehicles. AVs promise transformative benefits, in particular for road safety. But they also bring a critical safety question of their own: how safe does an AV have to be to be safe enough?
In Can Automated Vehicles Prove Themselves to Be Safe? (Issues, Summer 2020), Marjory S. Blumenthal and Laura Fraade-Blanar are right to argue for a concerted approach to AV safety, and they are right that public trust will be crucial. But trust cannot be bought; it has to be earned. The process of standard-setting must be an inclusive one. Safety is too important to be left to technology developers alone. The test will not be whether AV developers can prove to themselves that their technology is safer than conventional driving. The question 'How safe is safe enough?' is one for society at large, and it is profoundly uncertain, in part because it is connected to an even more complicated question: 'Safe enough for what?'
The comparison with conventional cars is a poor starting point. The risks of cars are, as the novelist J. G. Ballard once put it, a ‘pandemic cataclysm’. More than a million deaths a year is a huge price, even if the benefits of the technology are clear. Developers of AVs may be aiming to clear the extremely low bar of overall road safety, but citizens will have other ideas. Aggregate improvements in safety, even if they are substantial, will do little to reassure parents when a child is killed by an AV and it is not clear who is responsible.
Everything we know about public risk perception tells us that people will evaluate the risks of new and potentially inequitable systems very differently from the risks of driving. In a car, we kid ourselves that we are in control of our destiny, which is one reason why we are willing to accept, according to some measures, more than 100 times greater risk than on an airplane or a train. It is not at all obvious that, as Blumenthal and Fraade-Blanar suppose, people's perceptions of AV risk will be 'grounded… in people's experience with the safety of traditional automobiles'. What if people think of AVs like trains and conclude that each death marks a new catastrophe? The authors' analogy may work in one important sense: with AVs, we may well see extreme distrust of attempts by powerful industries to govern themselves in the public interest.
Being trapped in an automotive frame of reference has other implications for how we think about AVs. Much of the excitement surrounding AVs comes from artificial intelligence. The story is that AI will be able to mimic and then surpass human capability without getting drunk, distracted, or sleepy. However, where we have seen automated systems deliver real benefits for transport, it has not been thanks to super-intelligence, but rather because of well-understood, well-defined systems. Blumenthal and Fraade-Blanar talk about the gradual expansion of an AV's operational design domain (ODD) as the technology becomes more sophisticated. My concern is that in many cases an ODD will be constrained to suit the AV, rather than the AV improved to match an ODD. In the name of safety, AVs may be given their own lanes, cyclists may be shepherded away from roads, and pedestrians may be asked to behave more predictably.
If we do not have an open, democratic debate about AV safety, we risk getting pushed around by the hype of the industry.
Science and Technology Studies
University College London