Blog

Tesla, Autopilot and safety culture

I was asked by the BBC to comment on a story from a Tesla whistleblower. Lukasz Krupski had leaked data to Handelsblatt that, he claimed, showed Tesla were aware of safety problems not just in their factories, but also with their Autopilot and ‘full self-driving’ systems. The BBC wanted to know about the broader context, given that the Government have a forthcoming Automated Vehicles Bill. The story was on BBC Breakfast and Radio 4’s Today programme on 5th Dec 2023 (forward to 1 hour 52 mins).

Bill Gates on self-driving vehicles

Bill Gates has just taken a self-driving ride around London and it’s got him excited. He was a passenger in a car run by Wayve (Microsoft is one of Wayve’s investors). Last year, I spoke to Wayve and some other folks for a podcast that explored how London represented a hard case for self-driving. The story goes that if the technology can make it there, it’ll make it anywhere. (Is it too much to call this a ‘Sinatra strategy’?).

It doesn’t really matter that Gates misrepresents the SAE’s levels of automation. Those levels aren’t very helpful anyway. But his comments about the Rules of the Road are revealing. Unsurprisingly, they reveal a mix of technocracy and technological determinism. Gates is not great at prediction, but that doesn’t stop him trying. He insists,

AVs will help create more equity for the elderly and people with disabilities by providing them with more transportation options. And they’ll even help us avoid a climate disaster, since the majority in development are also electric vehicles.

We could easily work through scenarios that lead to opposite outcomes. The real question – what would it take to make the desirable outcomes more likely? – demands attention to policy as well as technology. Gates says, ‘Humanity has adapted to new modes of transportation before’, but doesn’t acknowledge that adaptation and innovation have varied from place to place. Rather than sleepwalking into another technology, as many places did with motorcars, we should be more sophisticated this time around. Gates tells us ‘The rules of the road are about to change’. Changes are indeed likely, as we argue in this paper, but thankfully societies can have a say when it comes to rules. We don’t have to get pushed around by the predictions of technologists.

What we’ve learned from experiments in San Francisco and Phoenix – The Conversation

I’ve written a piece for the Conversation that reflects on Waymo and Cruise’s trials and tribulations in San Francisco. The conclusion is that these experiments, happening in public with the public as participants, urgently need to be democratised. The San Francisco transport authorities have been trying to do this, but without the power to enforce data-sharing, they are relying on the goodwill of companies.

Evidence to the Transport Select Committee

The House of Commons Transport Select Committee have begun an inquiry into self-driving vehicles. The Driverless Futures project submitted written evidence. I was then invited to speak to the committee, alongside Becky Guy from the Royal Society for the Prevention of Accidents and Ian Wainwright from the Chartered Institute of Logistics and Transport (CILT). The footage of our morning’s session is here. I’m hoping this inquiry will be an important policy step. The Government’s planned legislation has been delayed. It was interesting to see a consensus, shared by those developing the technology and those asking questions about it, that regulation was necessary. Perhaps the inquiry can help spur this on.

CDEI report on Responsible Innovation in Self-Driving Vehicles

On Friday 19 August, the Government’s Centre for Data Ethics and Innovation published a report on Responsible Innovation in Self-Driving Vehicles. John McDermid and I were the expert advisers for this one, which had been gestating for a while before emerging alongside a broader plan for connected and automated mobility.

The CDEI report has some pretty wide-ranging recommendations. We spoke to questions of safety, privacy, transparency, public engagement and more. I did a thread on some of our conclusions.

There was plenty of interest in the report from the BBC. In addition to an online news piece, I was on the Today programme with Paul Newman from Oxbotica.

I also did the local radio rounds (BBC Oxford, Wiltshire, Cambridgeshire, Newcastle, Northampton, Stoke and Sheffield) and a bit on the BBC News Channel to end the day.

Self-driving cars on the horizon? – BBC Digital Planet

I was a guest on the BBC’s Digital Planet talking about self-driving vehicles, starting from the news that the UNECE will extend its regulations to allow automated vehicles to operate at higher speeds in certain conditions. It was a pretty wide-ranging discussion, but my point was a simple one: self-driving will always be conditional; the real question is where, not when, we’ll see the technology.

Should self-driving vehicles be labelled?

I have an Op-Ed over at MIT Technology Review arguing for the need to think about appropriate labels for self-driving vehicles.

There was also a news piece in the i newspaper reporting on our survey findings. And our press release found its way to a few other places too, like the Metro and EurekAlert. A commentary by Tom Chivers in the i got the wrong end of the stick, but the debate was, I thought, productive.

Reflections on Rafaela Vasquez

A long piece in Wired contains an interview with Rafaela Vasquez, the tragic figure who was behind the wheel when Elaine Herzberg became the first bystander to be killed by a self-driving car. Self-driving car developers, including Uber ATG, have been unwilling to acknowledge the true effects of the Herzberg crash in Tempe, but it has transformed the industry’s understanding of what is at stake in testing on public roads. Safety, which had been neglected for years, has since 2018 been forced to the front of innovators’ minds.

I’ve written before about the lessons from the crash and the difficulty of ascribing blame. Any crash is the product of multiple causes. We all, but especially self-driving car companies, tend to blame human error. So even though Rafaela Vasquez clearly did things wrong, she has become, to use Madeleine Elish’s phrase, a ‘moral crumple zone’ for a wider system. The Wired piece goes some way towards rehabilitating her, and in doing so, reveals some important new details.

First, there are insights about the sort of hidden labour that tech workers are used to performing. Vasquez had done a range of tech work…

“moderating grisly posts on Facebook, she says; tweeting about Dancing With the Stars from ABC’s Twitter; policing social media for Wingstop and Walmart.”

… before taking her ghost work to Uber’s ‘Ghost Town’. She and her colleagues were given training, but the well-known hazards of automation complacency were not given much attention. Uber’s aim was to ‘crush miles’: the company boasted of running 84,000 miles per week, often repeating the same loop, emphasising quantity over quality to impress management and investors. Other safety drivers had been caught looking at their mobile phones while behind the wheel. But Uber prioritised crushing miles and saving money over safety culture. Just before the crash, they had reduced the number of people in the car from two to one as the automation improved.

The blame game afterwards has been unedifying:

“You can’t put the blame on just that one person,” says the Pittsburgh manager. “I mean, it’s absurd.” Uber “had to know this would happen. We get distracted in regular driving,” the manager says.

Another insider told the reporter that the company was

“very clever about liability as opposed to being smart about responsibility.”

Once the lawyers got involved, the opportunity for real learning was cut short. Both the company and Arizona’s government, which had been so desperate to get the company to Phoenix and so shocked, SHOCKED!, that the company could misbehave, have been hit by subsequent lawsuits, but Vasquez remains a soft target.

The Wired piece suffers, as much of Wired’s reporting does, from a breathless need to emphasise the inevitability and desirability of tech. Vasquez was clearly excited by and supportive of a self-driving future, but the framing of her story is depressing. The piece concludes:

“To reach that purported future, we must first weather the era we’re in now: when tech is a student driver… And inevitably, as experts have always warned, that means crashes”

To call crashes inevitable is a different sort of tragedy: a fatalistic technological determinism that will jeopardise future innovation. Elaine Herzberg’s death was the result of choices that are becoming increasingly clear. Things could have gone differently, and they should be redirected in the light of this new knowledge.