A new episode of the Human and the Machine podcast features Jack Stilgoe, alongside Guardian tech journalist Alex Hern, discussing autopilots in the air and self-driving cars on the ground. Listen here.
Brian Merchant, one of the most interesting technology reporters writing today, has reported that Tesla has recently changed the way it talks about its Autopilot technology. The previous language of ‘full self-driving hardware’, which I discussed in this paper, has gone. Tesla now talks about ‘Autopilot features today, and full self-driving capabilities in the future’.
Merchant’s Gizmodo piece featured some insights from Driverless Futures? research. Tesla is of course unlikely to admit whether the piece influenced its change. And some jurisdictions still think that the word ‘Autopilot’ is itself part of the problem.
We’re on the road. Last week saw the public launch event of the Driverless Futures? project. Thanks to Nesta and our funders, the Alan Turing Institute and the Economic and Social Research Council, we hosted 150 people for an evening discussion in the middle of London. We were also lucky to have minister for transport Jesse Norman on one of his rare days without a constitutionally groundbreaking parliamentary debate. The minister arrived (by bike) from Parliament to introduce the event.
He began with the opening sentence of one of our launch publications: ‘Less than a decade ago, self-driving cars seemed impossible. Now, we are told, they are inevitable.’
“I certainly don’t regard them as impossible, but I don’t regard them as inevitable either. That means there’s a gap in the middle that needs to be filled by intelligent reflection and good public policy… It’s a topic of enormous interest to me. I couldn’t be more delighted to see Driverless Futures taking it up as they have done.”
As anyone who has been following policy in this area would expect, he made much of the economic opportunities, but he advocated a cautious approach, learning from the rapid emergence of cars in 20th-century cities:
“That transition was not well-understood… These changes took place and there wasn’t that reflective understanding of where they were going… and it’s arguable that other countries made better choices than we did… It’s a cautionary tale, I think, for what can happen if you don’t adopt the kind of inclusive and comprehensive view to which Driverless Futures as a project is a potentially huge contributor.”
In response to audience questions about the possible ethical dilemmas raised by artificial intelligence in the wild, the minister hinted at his background in academic philosophy. He argued against the assumption that ethics could simply be programmed into machines:
“Someone has made a decision within a company that a vehicle will have a certain set of outcomes when presented with a certain set of use cases, so when someone dies under those circumstances, it’s much easier potentially to say ‘well this isn’t some random activity because the person was distracted etc. This is the result of a concrete human decision at some point’.”
He then took issue with the view, prevalent in regulatory debates, that new technologies require the tearing up and rewriting of existing laws. Norman’s view, following Edmund Burke, on whom he has literally written the book, is that “We have hundreds of years of liability law in this country… which has intelligence encoded within it.” The Burkean tradition is to be sceptical of innovation for innovation’s sake, which transport planners might agree is a sensible starting point.
For the panel discussion that followed, we added Lucy Yu from FiveAI, whose self-driving cars have just started trials on London’s streets, Sarah Castell from Ipsos MORI, Steve Gooding from the RAC Foundation and Paul Nightingale, director of strategy at the ESRC. Each of them, from their different standpoints, highlighted the social complexities that should be acknowledged, but are often neglected, as exuberance builds around new technologies.
Earlier in the day, our project also benefitted from the collective wisdom of more than 40 stakeholders from the worlds of transport and tech. We asked them to discuss a host of topics, from data-sharing to car-sharing and from crash investigation to segregated highways. Their insights will help us hone our research questions over the summer.
We are enormously grateful to all of our workshop participants, speakers and audience members, as well as Nesta, ESRC and the Turing Institute.
We hope a video will be available shortly. Watch this space.
In the meantime, take a look at some of the publications we released to coincide with the launch:
Tom Cohen: Warning: we may be sleep-walking into an automated vehicle future, Intelligent Transport
To coincide with our official launch event (attended by the Minister, no less), I wrote a piece published by Intelligent Transport entitled “Warning: we may be sleep-walking into an automated vehicle future”. You can read it here: https://www.intelligenttransport.com/transport-articles/77961/warning-we-may-be-sleep-walking-into-an-automated-vehicle-future/
The project has its launch workshop and public event tomorrow. We are lucky to have the Transport Minister, Jesse Norman MP, speaking as part of a superb panel.
I have published a comment piece at Nature Machine Intelligence, outlining why I think self-driving cars are harder than some of their developers would like to believe.
We’ll be reporting from the event once it’s over.
I was asked to do a UCL Lunch Hour Lecture. These are open to the public and aim to be accessible introductions to research that is going on at UCL. I did mine on the politics of self-driving cars. And I had a special guest:
Exhibition Road, one of the UK’s largest experiments in ‘shared space’, is to be remodelled, bringing back the apparatus that previously separated cars from pedestrians. This news comes a week after the death of Ben Hamilton-Baillie, one of the UK designers who fought hardest to rethink the way traffic is organised.
I have argued that shared space is a useful way to think about the democratisation of innovation. Rather than separating our technical and social interests, we should look for ways to mix them up, even if it means that the outcomes are messy and, potentially, slow. If we expect the public to stay in their lane, it is too easy for their concerns to become invisible. In the governance of innovation, there are various ways in which public concerns are contained. For artificial intelligence, it has become common to talk about ‘ethics’, as though that captures the worries that people might have (Google recently announced the creation of a new expert ethics board). The risk is that such framings push the politics of AI aside, allowing the juggernaut of innovation to carry on at full speed, just as before. A shared space alternative might have involved processes of ongoing public engagement and the welcoming of external regulation.
However, shared space has always had its critics. Even when it works and the balance of power between cars and pedestrians is shifted, there are road users who remain vulnerable. Shared space in London has been wounded by campaigns from disability rights groups and a report by Lord Holmes.
For self-driving cars, the analogy may need to come back down to earth. The arrival of intelligent transport is likely to force upgrades in infrastructure that rationalise streets, making the intentional mess of shared space even less attractive. As with the arguments about shared space, these are likely to circle around questions of safety. But should safe space be our only concern?