The House of Commons Transport Select Committee have begun an inquiry into self-driving vehicles. The Driverless Futures project submitted written evidence. I was then invited to speak to the committee, alongside Becky Guy from the Royal Society for the Prevention of Accidents and Ian Wainwright from the Chartered Institute of Logistics and Transport (CILT). The footage of our morning’s session is here. I’m hoping this inquiry will be an important policy step. The Government’s planned legislation has been delayed. It was interesting to see a consensus, shared by those developing the technology and those asking questions about it, that regulation was necessary. Perhaps the inquiry can help spur this on.
On Friday 19 August, the Government’s Centre for Data Ethics and Innovation published a report on Responsible Innovation in Self-Driving Vehicles. John McDermid and I were the expert advisers for this one, which had been gestating for a while before emerging alongside a broader plan for connected and automated mobility.
The CDEI report has some pretty wide-ranging recommendations. We spoke to questions of safety, privacy, transparency, public engagement and more. I did a thread on some of our conclusions:
I also did the local radio rounds (BBC Oxford, Wiltshire, Cambridgeshire, Newcastle, Northampton, Stoke and Sheffield) and a bit on the BBC News Channel to end the day:
I was a guest on the BBC’s Digital Planet talking about self-driving vehicles, starting from the news that the UNECE will extend their regulations to allow high-speed vehicles in certain conditions. It was a pretty wide-ranging discussion, but my point was a simple one: self-driving will always be conditional; the real question is where, not when, we’ll see the technology.
I have an Op-Ed over at MIT Technology Review arguing for the need to think about appropriate labels for self-driving vehicles.
There was also a news piece in the i newspaper reporting on our survey findings. And our press release found its way to a few other places too, like the Metro and EurekAlert. A commentary by Tom Chivers in the i got the wrong end of the stick, but the debate was, I thought, productive.
A long piece in Wired contains an interview with Rafaela Vasquez, the tragic figure who was behind the wheel when Elaine Herzberg became the first bystander to be killed by a self-driving car. Self-driving car developers, including Uber ATG, have been unwilling to acknowledge the true effects of the Herzberg crash in Tempe, but it has transformed the industry’s understanding of what is at stake in testing on public roads. Safety, which had been neglected for years, has, since 2018, been forced to the front of innovators’ minds.
I’ve written before about the lessons from the crash and the difficulty of ascribing blame. Any crash is the product of multiple causes. We all, but especially self-driving car companies, tend to blame human error. So even though Rafaela Vasquez clearly did things wrong, she has become, to use Madeleine Elish’s phrase, a ‘moral crumple zone’ for a wider system. The Wired piece goes some way towards rehabilitating her, and in doing so, reveals some important new details.
First, there are insights about the sort of hidden labour that tech workers are used to employing. Vasquez had done a range of tech work…
“moderating grisly posts on Facebook, she says; tweeting about Dancing With the Stars from ABC’s Twitter; policing social media for Wingstop and Walmart.”
… before taking her ghost work to Uber’s ‘Ghost Town’. She and her colleagues were given training, but the well-known hazards of automation complacency were not given much attention. Uber’s aim was to “crush miles”, boasting of covering 84,000 miles per week, often on the same loop, emphasising quantity over quality to impress management and investors. Other safety drivers had been caught looking at their mobile phones while behind the wheel. But Uber prioritised crushing miles and saving money over safety culture. Just before the crash, as the automation improved, the company had reduced the number of people in the car from two to one.
The blame game afterwards has been unedifying:
“You can’t put the blame on just that one person,” says the Pittsburgh manager. “I mean, it’s absurd.” Uber “had to know this would happen. We get distracted in regular driving,” the manager says.
Another insider told the reporter that the company was
“very clever about liability as opposed to being smart about responsibility.”
Once the lawyers got involved, the opportunity for real learning was cut. Both the company and Arizona’s government, who had been so desperate to get the company to Phoenix and so shocked, SHOCKED! that the company could misbehave, have been hit by subsequent lawsuits, but Vasquez remains a soft target.
The Wired piece suffers, as much of Wired’s reporting does, from a breathless need to emphasise the inevitability and desirability of tech. Vasquez was clearly excited by and supportive of a self-driving future, but the framing of her story is depressing. The piece concludes:
“To reach that purported future, we must first weather the era we’re in now: when tech is a student driver… And inevitably, as experts have always warned, that means crashes”
To call crashes inevitable is a different sort of tragedy: a fatalistic technological determinism that will jeopardise future innovation. Elaine Herzberg’s death was the result of choices that are becoming increasingly clear. Things could have gone differently and they should be redirected in the light of our new knowledge.
The Daily Telegraph chose to run a feature on self-driving tech, for which I provided some quotes.
This is the relevant section from the story, which is behind a paywall (although you can see it in the tweet thread).
For years, we have been told that such AVs are just around the corner, ready to take us wherever we want. They never are. In some confined, well-regimented places like ports, or even the suburbs of Phoenix, Arizona, driverless vehicles already shuttle about. “The problem is,” says Jack Stilgoe, professor of science policy at UCL, “most of the world does not look like Phoenix.” It’s much more complicated. And the learning-by-doing methods that often inform AI are simply impossible on the roads.
It was trial and error, for example, that helped a Google computer learn, and then master, the complexities of first chess and then the boardgame Go. Other researchers have even let a tiny aircraft crash almost 12,000 times as its computer learnt to fly it. But the skies are comparatively empty. And letting computers learn by getting it wrong as children jump out from behind parked cars is unthinkable.
The result is that forecasts have got more pessimistic. Chris Urmson, former head of Google’s AV team, thinks they will only be phased in over the next 50 years.
On inspection, there is good reason for that long wait. For it is beginning to dawn that fully autonomous vehicles will never be able to manage on our messy streets. “The story we are told about self-driving cars is that the technology will be able to adapt to the world and that the world needs to do nothing,” says Stilgoe. “I think we know that’s a lie. The world always adapts to meet the technology.”
In other words, AVs will never drive in the current environment. Rather, we will change our environment to enable AVs to drive. To the human eye, a traffic light, even with the sun behind it, is a straightforward signal. To a computer, judging its shade and illumination in such circumstances can be extremely hard. Far easier would be to have a “smart traffic light” that communicates stop and go signals digitally and wirelessly to the car’s computer. Piece by piece, over the next decades, we will “upgrade” and adapt our world in this way to suit AI. “The world has been designed to be human-readable,” says Stilgoe. “AI needs the world to be AI-readable.”
That could well involve other changes, aimed at simplifying our complex roads for AI’s benefit. Reserved lanes for AVs, perhaps; stricter enforcement of speed limits; greater surveillance, where ubiquitous cameras ping real-time data back and forth. “These are really consequential discussions we need to have,” says Stilgoe. “It’s not just about plucking out the human driver and switching in a computer and nothing else changes.”
Of course, one of those critical discussions will be about “how safe is safe enough” when it comes to computer drivers. The Trolley Problem will not go away. But that is only part of the greater truth we must grapple with – that ushering in the age of AVs will not only mean a new code of ethics, it will mean re-engineering the world.
“The sooner we acknowledge that,” says Stilgoe, “the better.”
As part of the ESRC festival of social science, I joined a multidisciplinary panel to discuss some of the ways in which algorithms and society shape each other