I’ve just published a new book with Palgrave. It’s a short, provocative introduction to debates about new technologies, which takes self-driving cars as its case study. It’s available here, and I would love to know what readers think of it.
Free Thinking on Radio 3 is a great indulgence – an opportunity for long-form conversation, steered by expert broadcasters. I’ve been fortunate to appear twice recently. First, in a programme on cars, parking and motorways; second, in a discussion of whether Isaac Asimov’s psychohistory might help us understand the current enthusiasm for data and AI.
As the year comes to an end it feels important to share our project playlist. We’re taking requests.
Last month, the Driverless Futures? team and a few student volunteers set up shop in the Science Museum’s Driverless exhibition to get a feel for how the museum visitors thought about the possibilities and pitfalls of self-driving vehicles.
We made a little film:
I recently attended a talk given by Edmond Awad, one of the team behind the research project entitled Moral Machine. Run from MIT, Moral Machine is an online survey that presents respondents with a series of randomly generated dilemmas based on the so-called trolley problem. It sets up a series of scenarios in which an automated vehicle is on a collision course and the respondent is asked to choose between two courses of action that will have differing effects on both the occupants of the vehicle and those crossing its path. In addition to being told the number of individuals at risk of fatal injury, respondents are also presented with information about their characteristics, ranging from age and gender to species (human, cat, dog), status, level of fitness and even housing status.
The survey has attracted a very large number of responses, with a paper in Nature reporting on the first 40 million “dilemma responses”.
This member of the audience found the whole thing very intriguing and came away thinking that trolley problems remain the question after the question. That is, before we ask whether an automated system should prioritise on the basis of age or gender, say, we should be asking whether automated systems should be allowed to prioritise at all. But the trolley problem has demonstrated an enduring appeal to professional and armchair philosophers alike for many years and we should not deny them their fun.
I got confused, though, after I asked Dr Awad whether any respondents had pointed out that no automated system would ever be able reliably to identify whether Person A was homeless (this being one of the attributes tested by Moral Machine). He replied that this wasn’t the issue: what his experiment was testing was people’s values and, in effect, it concerned itself more with the aftermath of any crash (whether the “right” individuals had died) than with the algorithm the automated system might employ. Confused of Bloomsbury responded that the homeless/not homeless attribute in the survey therefore wasn’t included to test people’s attitudes to what automated vehicles might do. And Dr Awad replied that none of this was: no one responding to the survey actually expected automated vehicles to do as depicted in the scenarios. (I hope I represent his words fairly.)
If my understanding of what he said is correct, I have two comments. The first is that there is a world of difference between something not being possible now and not being possible ever. I suspect that most, if not all, of those who responded to the survey did so on the basis that an automated vehicle might one day be designed such that at least some of their preferences could be enacted. So I’m very dubious about seeing the entire experiment as speculative. If it was, why use automated vehicles at all? Why not use the conventional out-of-control trolley, if all that is sought is an understanding of people’s preferences concerning who should die in such scenarios and of their attitudes to acts and omissions? But I can’t believe that the AV element is nothing more than a MacGuffin: members of the research team have done other work on AVs and are clearly interested in ethical questions relating to this technology.
So, second, if I’m right that the researchers are asking about AVs because they are interested in AVs, the fact that the scenarios used in Moral Machine are manifestly beyond what artificial intelligence is likely ever to be able to do makes their inclusion at least frivolous and, possibly, irresponsible. Automated vehicles are the subject of intense discussion at present precisely because a variety of actors are hard at work bringing them onto our transport networks, with the high probability that humans (homeless or otherwise), dogs and cats will be involved in collisions. I think it’s therefore incumbent on any researcher purportedly doing serious research on the subject to stay within the bounds of what is reasonable. To give the team the benefit of the doubt, I suppose the research could meaningfully illuminate the debate if we were envisaging a future world in which humans are tagged with their attributes, including their housing situation, thus relieving the vehicle of having to discern such details. But that doesn’t even bear thinking about.
Ten days ago, the journalist Ed Niedermeyer became one of the first people to take a trip in a genuinely driverless car on public roads without a nervous company demanding a non-disclosure agreement. Niedermeyer is no self-driving cheerleader. His work is well-researched, balanced and detached from the hype. My guess is that Waymo invited him as a display of its self-confidence.
There are currently hundreds of Waymo cars moving around the suburbs of Phoenix, Arizona, with thousands more on the way. The driverless future, it seems, is already here. Except that it isn’t. This technology is not an iPhone. It cannot just be bought and used. While for a few people in a limited area it may already be a reality, for most people in most places it will remain an impossibility for the foreseeable future. The people developing the tech are keen on announcing breakthroughs, which give some narrative zing to a story that has over the last year become a bit flat.
Oliver Cameron, the CEO of Voyage, is developing self-driving technology in a unique context. His vehicles are shuttling retirees on private roads around their purpose-built community – The Villages – in Florida.
I have admired Cameron since I heard him emphasise the value of a slow, responsible approach to technology development here. He has just published a discussion of what he sees as Waymo’s Rubicon-crossing ‘insane step forward’.
He thinks ‘we now live in a driverless world’. I’m worried he might believe it.
William Gibson had an adage:
‘The future has arrived — it’s just not evenly distributed yet.’
Technological determinists like to invoke it when they claim new technologies are inevitable and just around the corner. They rarely acknowledge that the distribution of new technologies is perpetually uneven. Technologies tend in fact to follow what Robert Merton called the Matthew Effect: ‘For whosoever hath, to him shall be given, and he shall have more abundance.’
Cameron knows that getting a self-driving car to work means getting the conditions right as well as smartening up the sensors and the software. Self-driving cars are unavoidably contextual. When talking about the progress of self-driving technology, Cameron is right to say ‘It’s a matter of where, not when’. There are circumstances in which the driverless future has been with us for a while. In London, the Docklands Light Railway was christened in 1987. It is relatively uninteresting because the conditions under which its automation works are so tightly constrained.
Waymo’s world is vastly more complex than the DLR’s, but it remains geofenced. The cars work because they know the roads and they know what to expect. Take a Waymo outside Phoenix and it becomes just another minivan. To get a genuine sense of the future, we need to ask why Waymo is in Phoenix, work out the conditions for the technology’s success (economic, political, meteorological, infrastructural and cultural) and ask where next? There will be places with wildly different conditions in which driverless technology will necessarily look very different, and countless more where the incentives for innovation will never line up.
The definition and distribution of futures will not be straightforward. Cameron expects public scrutiny and does not shy away from it:
‘There will likely be companies who abuse their responsibility to deploy this technology responsibly, making short-term decisions that may compromise safety.’
Following the recent revelations about the Uber crash (see my tweet thread below), one can see the reputational risk.
However, if Voyage, Waymo and other developers want to innovate responsibly, they shouldn’t swallow their own exhaust. One can understand why innovators tell themselves
‘Fully self-driving technology is right… Fully self-driving technology is necessary.’
But they shouldn’t expect everyone else to agree. Public resistance may not be ‘a natural cycle that will one day pass’, as Cameron hopes. The futures that most of us experience will look very different from the ones imagined by technologists. They always do.