Bill Gates on self-driving vehicles

Bill Gates has just taken a self-driving ride around London and it’s got him excited. He was a passenger in a car run by Wayve (Microsoft is one of Wayve’s investors). Last year, I spoke to Wayve and some other folks for a podcast that explored how London represented a hard case for self-driving. The story goes that if the technology can make it there, it’ll make it anywhere. (Is it too much to call this a ‘Sinatra strategy’?).

It doesn’t really matter that Gates misrepresents the SAE’s levels of automation. Those levels aren’t very helpful anyway. But his comments about the rules of the road are revealing: unsurprisingly, they betray a mix of technocracy and technological determinism. Gates is not great at prediction, but that doesn’t stop him trying. He insists,

AVs will help create more equity for the elderly and people with disabilities by providing them with more transportation options.  And they’ll even help us avoid a climate disaster, since the majority in development are also electric vehicles.

We could easily work through scenarios that lead to opposite outcomes. The real question – what would it take to make the desirable outcomes more likely? – demands attention to policy as well as technology. Gates says, ‘Humanity has adapted to new modes of transportation before’, but doesn’t acknowledge that adaptation and innovation have varied from place to place. Rather than sleepwalking into another technology, as many places did with motorcars, we should be more sophisticated this time around. Gates tells us ‘The rules of the road are about to change’. Changes are indeed likely, as we argue in this paper, but thankfully societies can have a say when it comes to rules. We don’t have to get pushed around by the predictions of technologists.

What we’ve learned from experiments in San Francisco and Phoenix – The Conversation

I’ve written a piece for The Conversation that reflects on Waymo and Cruise’s trials and tribulations in San Francisco. The conclusion is that these experiments, happening in public with the public as participants, urgently need to be democratised. The San Francisco transport authorities have been trying to do this, but without the power to enforce data-sharing, they are relying on the goodwill of companies.

Evidence to the Transport Select Committee

The House of Commons Transport Select Committee have begun an inquiry into self-driving vehicles. The Driverless Futures project submitted written evidence. I was then invited to speak to the committee, alongside Becky Guy from the Royal Society for the Prevention of Accidents and Ian Wainwright from the Chartered Institute of Logistics and Transport (CILT). The footage of our morning’s session is here. I’m hoping this inquiry will be an important policy step. The Government’s planned legislation has been delayed. It was interesting to see a consensus, shared by those developing the technology and those asking questions about it, that regulation was necessary. Perhaps the inquiry can help spur this on.

CDEI report on Responsible Innovation in Self-Driving Vehicles

On Friday 19 August, the Government’s Centre for Data Ethics and Innovation published a report on Responsible Innovation in Self-Driving Vehicles. John McDermid and I were the expert advisers for this one, which had been gestating for a while before emerging alongside a broader plan for connected and automated mobility.

The CDEI report has some pretty wide-ranging recommendations. We spoke to questions of safety, privacy, transparency, public engagement and more. I did a thread on some of our conclusions.

There was plenty of interest in the report from the BBC. In addition to an online news piece, I was on the Today programme with Paul Newman from Oxbotica.

I also did the local radio rounds (BBC Oxford, Wiltshire, Cambridgeshire, Newcastle, Northampton, Stoke and Sheffield) and a bit on the BBC News Channel to end the day.

Self-driving cars on the horizon? – BBC Digital Planet

I was a guest on the BBC’s Digital Planet talking about self-driving vehicles, starting from the news that the UNECE will extend its regulations to allow automated driving at higher speeds in certain conditions. It was a pretty wide-ranging discussion, but my point was a simple one: self-driving will always be conditional; the real question is where, not when, we’ll see the technology.

Should self-driving vehicles be labelled?

I have an op-ed over at MIT Technology Review arguing that we need to think about appropriate labels for self-driving vehicles.

There was also a news piece in the i newspaper reporting on our survey findings. And our press release found its way to a few other places too, like the Metro and EurekAlert. A commentary by Tom Chivers in the i got the wrong end of the stick, but the debate was, I thought, productive.

Reflections on Rafaela Vasquez

A long piece in Wired contains an interview with Rafaela Vasquez, the tragic figure who was behind the wheel when Elaine Herzberg became the first bystander to be killed by a self-driving car. Self-driving car developers, including Uber ATG, have been unwilling to acknowledge the true effects of the Herzberg crash in Tempe, but it has transformed the industry’s understanding of what is at stake in testing on public roads. Safety, neglected for years, has since 2018 been forced to the front of innovators’ minds.

I’ve written before about the lessons from the crash and the difficulty of ascribing blame. Any crash is the product of multiple causes. We all, but especially self-driving car companies, tend to blame human error. So even though Rafaela Vasquez clearly did things wrong, she has become, to use Madeleine Elish’s phrase, a ‘moral crumple zone’ for a wider system. The Wired piece goes some way towards rehabilitating her, and in doing so, reveals some important new details.

First, there are insights about the sort of hidden labour that tech workers are used to performing. Vasquez had done a range of tech work…

“moderating grisly posts on Facebook, she says; tweeting about Dancing With the Stars from ABC’s Twitter; policing social media for Wingstop and Walmart.”

… before taking her ghost work to Uber’s ‘Ghost Town’. She and her colleagues were given training, but the well-known hazards of automation complacency received little attention. Uber’s aim was to “crush miles”: the company boasted of running 84,000 miles per week, often on the same repeated loop, emphasising quantity over quality to impress management and investors. Other safety drivers had been caught looking at their mobile phones while behind the wheel. But Uber prioritised crushing miles and saving money over safety culture. Just before the crash, as the automation improved, the company had reduced the number of people in the car from two to one.

The blame game afterwards has been unedifying:

“You can’t put the blame on just that one person,” says the Pittsburgh manager. “I mean, it’s absurd.” Uber “had to know this would happen. We get distracted in regular driving,” the manager says.

Another insider told the reporter that the company was

“very clever about liability as opposed to being smart about responsibility.”

Once the lawyers got involved, the opportunity for real learning was cut short. Both the company and Arizona’s government, which had been so desperate to get the company to Phoenix and so shocked, SHOCKED! that the company could misbehave, have been hit by subsequent lawsuits, but Vasquez remains a soft target.

The Wired piece suffers, as much of Wired’s reporting does, from a breathless need to emphasise the inevitability and desirability of tech. Vasquez was clearly excited by and supportive of a self-driving future, but the framing of her story is depressing. The piece concludes:

“To reach that purported future, we must first weather the era we’re in now: when tech is a student driver… And inevitably, as experts have always warned, that means crashes”

To call crashes inevitable is a different sort of tragedy: a fatalistic technological determinism that will jeopardise future innovation. Elaine Herzberg’s death was the result of choices that are becoming increasingly clear. Things could have gone differently, and they can still be redirected in the light of our new knowledge.

The Law Commission and the Daily Telegraph

The Law Commission have published their long-awaited report on automated vehicles, which refers at a few points to the submissions we made (here and here) to their consultations.

The Daily Telegraph chose to run a feature on self-driving tech, for which I provided some quotes.

This is the relevant section from the story, which is behind a paywall (although you can see it in the tweet thread).

For years, we have been told that such AVs are just around the corner, ready to take us wherever we want. They never are. In some confined, well-regimented places like ports, or even the suburbs of Phoenix, Arizona, driverless vehicles already shuttle about. “The problem is,” says Jack Stilgoe, professor of science policy at UCL, “most of the world does not look like Phoenix.” It’s much more complicated. And the learning-by-doing methods that often inform AI are simply impossible on the roads.
It was trial and error, for example, that helped a Google computer learn, and then master, the complexities of first chess and then the board game Go. Other researchers have even let a tiny aircraft crash almost 12,000 times as its computer learnt to fly it. But the skies are comparatively empty. And letting computers learn by getting it wrong as children jump out from behind parked cars is unthinkable.
The result is that forecasts have got more pessimistic. Chris Urmson, former head of Google’s AV team, thinks they will only be phased in over the next 50 years.
On inspection, there is good reason for that long wait. For it is beginning to dawn that fully autonomous vehicles will never be able to manage on our messy streets. “The story we are told about self-driving cars is that the technology will be able to adapt to the world and that the world needs to do nothing,” says Stilgoe. “I think we know that’s a lie. The world always adapts to meet the technology.”
In other words, AVs will never drive in the current environment. Rather, we will change our environment to enable AVs to drive. To the human eye, a traffic light, even with the sun behind it, is a straightforward signal. To a computer, judging its shade and illumination in such circumstances can be extremely hard. Far easier would be to have a “smart traffic light” that communicates stop and go signals digitally and wirelessly to the car’s computer. Piece by piece, over the next decades, we will “upgrade” and adapt our world in this way to suit AI. “The world has been designed to be human-readable,” says Stilgoe. “AI needs the world to be AI-readable.”
That could well involve other changes, aimed at simplifying our complex roads for AI’s benefit. Reserved lanes for AVs, perhaps; stricter enforcement of speed limits; greater surveillance, where ubiquitous cameras ping real-time data back and forth. “These are really consequential discussions we need to have,” says Stilgoe. “It’s not just about plucking out the human driver and switching in a computer and nothing else changes.”
Of course, one of those critical discussions will be about “how safe is safe enough” when it comes to computer drivers. The Trolley Problem will not go away. But that is only part of the greater truth we must grapple with – that ushering in the age of AVs will not only mean a new code of ethics, it will mean re-engineering the world.
“The sooner we acknowledge that,” says Stilgoe, “the better.”