CogX – A festival of artificial intelligence

CogX is an annual ‘Festival of AI and emerging technology’ in London. Alongside all of the talk about technical advances and business opportunities, there runs an ethics stage, where a few of us were able to develop conversations about the governance of AI. I was speaking as part of a panel organised by Beth Singler from the University of Cambridge. My bit is at 28 minutes in, but you should certainly watch Jeanette Winterson in full flow (starting at 20 mins).

Doing dialogue: A talk at AVS 2019

Jack Stilgoe at AVS 2019. (Photo credit: AUVSI)

Last week I gave a talk to the Automated Vehicles Symposium in Orlando. This is the big annual meeting on all things self-driving, and it grows every year. This year, there were thousands of people there. Thousands. It was fascinating to observe and be a part of. This is what I said.

As a European visiting America, I’m aware that technological futures can turn out very differently in different places. For all the similarities, our governments are different, our industries are different, our cultures are different and our transport systems are different. Nothing is inevitable, and we shouldn’t pretend to know what the future of automated vehicles looks like.

I’m interested in how we can have a better debate about the possibilities and uncertainties of self-driving vehicles. This is where technology meets democracy. Conversations in this space can often be one-sided, with the questions, answers and the terms of debate determined by the proponents of a particular technology. I want to make the case for a more balanced dialogue.

I’m going to offer five lessons from social science research on past controversies about technology – particularly the controversy around genetically modified crops in Europe – and then offer some insights from a recent exercise in public dialogue that took place in the UK.

At the end of the 20th Century, GMOs were a technology full of promise. Scientists were excited about the technical possibilities of more precise crop improvement, and companies saw clear economic opportunities. Alongside realistic proposals for incremental improvement ran hyped-up claims that the technology would benefit everyone, particularly the world’s poorest people. Some in Europe disagreed. So while GMOs have become a fact of life in the US, in much of Europe they can’t be consumed and can’t be grown. A public backlash meant that companies missed out on markets, scientists missed out on research opportunities and farmers and consumers missed out on new innovations.

Until the GM crops controversy, a lot of scientists, tech developers and policymakers thought they understood public concerns about new technology. They thought that if people understood the science they would trust and accept the technology. We saw movements in the 1980s and 90s towards what became known as the ‘Public Understanding of Science’. The assumption was that to know science was to love it. This assumption was wrong. People, often the most educated people, were unwilling to just accept the answers that scientists were offering. They had their own questions. 

So this is the first lesson: Debates about new technology are never just about science and technology. This is especially true with technologies that don’t exist yet. People will understand the technology in a range of different contexts. 

This leads to the second lesson: People are citizens as well as consumers. If Automated Vehicles are going to change the world, people will want to have a say. We have already seen research on whether people are comfortable paying for or getting in an AV. This is only a small part of the picture. People will have their own questions, and they won’t just relate to whether or not the technology works as expected.

The third lesson: It’s about more than safety. With GM crops, the developers of the technology assumed that public concerns would be dominated by questions of risk – will it be safe to eat? In fact, people also had concerns about effects on the environment, the ownership of the technology, inequalities in terms of who would benefit and more besides. 

So the fourth lesson: People in power need to listen as well as talk. We need to understand what people’s real hopes and fears for AVs are. The uncertainties here are huge. We have heard a lot about standards for AV safety, but we still have no idea how safe is safe enough. Do people think being safer than a human driver on average is acceptable? My hypothesis would be that it is not, but we don’t know. Levels of acceptable risk can vary by orders of magnitude even among different transport modes. We don’t know whether people will have concerns about who owns AV data. We don’t know how people will balance values like privacy against convenience. We don’t know what people think about the interpretability of machine learning. We don’t know whether it matters to people if this is public transport or private; personal or shared. We don’t know how all of these things will vary from place to place. So we need to listen. But the conversation can’t end there. If innovators are going to ask people what they think, they need to respond; they need to say how they are going to change direction in response. Otherwise it is public engagement for engagement’s sake.

The fifth and final lesson: Be clear on why you are doing public engagement. If it’s to sell a particular technology, or to lobby for policy change, be honest about that. People will see right through it if not. Is it to persuade or is it to empower? Is it to open up the debate to new perspectives or to close it down?

In the UK, we’ve been doing a large public dialogue exercise on behalf of the Government’s Centre for Connected and Automated Vehicles. It involved more than 150 members of the public in five locations around the UK, with each group meeting three times over a two-month period. The report is still being finalised, but a few quotes from the discussions suggest that members of the public would like to put some new questions on the table.

“Infrastructure has been my biggest issue.”

Facilitator – “Will the infrastructure need to change?”

“It’ll have to.”


Facilitator – “Who should pay for the infrastructure?”

“Users pay. I don’t think taxpayers should pay.”

Sciencewise dialogue participants

The dominant story about self-driving cars is that they will change the world without changing the world. The focus is on artificial intelligence, suggesting that the task is to mimic and then improve upon human drivers. It overlooks what else might need to happen for the technology to really work. In our dialogues, people picked up on this, and were sceptical. They thought that roads and the behaviours of other road users would need to change if AVs are going to work.

“There will be risks. We will learn from accidents, but I do not want my family to be those on the back of which the learning happens.”

Sciencewise dialogue participant

People understand that if the technologies are going to work, they will need to be tested, and tested in the real world. Some people thought this would be risky, and wondered therefore if the balance between risks and benefits would be fair.

“Cars were liberating for the working classes and older people. This seems to be restricting choice.”

Sciencewise dialogue participant

There was a lot of excitement about the potential benefits of AVs, but people wondered who would benefit. Would the technology be liberating or would it lock us in and make us dependent on a technology that people felt they had little control over? 

“Is there a need for it in a village? If they don’t have it, they’ll be stuck.”

“So what you’re saying is that people in the countryside can’t get one of your motors [AVs]? That’s a bit unfair isn’t it?”

Sciencewise dialogue participants

Finally, it is worth noting that, while there is a lot of talk about when self-driving cars will arrive, there is less consideration of where. Some participants, particularly those from rural communities, wondered if the technology would really make a difference to their lives in the foreseeable future. For the people developing and regulating the technology, these issues are challenging. These questions do not have easy answers. But they will be a part of how the technology is defined by the public. To ignore them would be to risk being surprised in the way that developers of genetically modified crops were two decades ago.

AVS in Orlando, before kick-off (Photo credit: Jack Stilgoe)

Tech companies need to open up

Interesting piece in the Columbia Journalism Review from Brian Merchant, one of the more interesting tech reporters out there, and author of a forthcoming book on the Luddites. He is discussing tech companies’ tactics in engaging with journalists. He notes that,

“Silicon Valley wasn’t always so hostile to reporters. It used to be relatively open. Apple, probably more than any other company, snapped it closed… as Silicon Valley found itself in an ever-expanding position of power, journalism was contracting.”

He goes on to mention a recent story he did on Tesla, for which I provided some insight. He was shocked by Tesla’s reaction to his story and my comments. It reveals something important about how defensive tech companies can become when their stories are challenged. If self-driving vehicles are going to change the world, the people developing the tech are going to have to get used to the fact that they can’t completely control the conversation.

Tesla changes its language on ‘Full-self driving’

Brian Merchant, one of the most interesting technology reporters writing today, has reported that Tesla has recently changed the way it talks about its Autopilot technology. The previous language of ‘full self-driving hardware’, which I discussed in this paper, has gone. Tesla now talks about ‘Autopilot features today, and full self-driving capabilities in the future’.

Merchant’s Gizmodo piece featured some insights from Driverless Futures? research. Tesla is of course unlikely to admit whether the piece influenced its change. And some jurisdictions still think that the word ‘Autopilot’ is itself part of the problem.

Launch mode

We’re on the road. Last week saw the public launch event of the Driverless Futures? project. Thanks to Nesta and our funders, the Alan Turing Institute and the Economic and Social Research Council, we hosted 150 people for an evening discussion in the middle of London. We were also lucky to have minister for transport Jesse Norman on one of his rare days without a constitutionally groundbreaking parliamentary debate. The minister arrived (by bike) from Parliament to introduce the event.

He began with the opening sentence of one of our launch publications: ‘Less than a decade ago, self-driving cars seemed impossible. Now, we are told, they are inevitable.’

“I certainly don’t regard them as impossible, but I don’t regard them as inevitable either. That means there’s a gap in the middle that needs to be filled by intelligent reflection and good public policy… It’s a topic of enormous interest to me. I couldn’t be more delighted to see Driverless Futures taking it up as they have done.”

As anyone who has been following policy in this area would expect, he made much of the economic opportunities, but advocated a cautious approach, learning from the rapid emergence of cars in 20th-Century cities:

“That transition was not well-understood… These changes took place and there wasn’t that reflective understanding of where they were going… and it’s arguable that other countries made better choices than we did… It’s a cautionary tale, I think, for what can happen if you don’t adopt the kind of inclusive and comprehensive view to which Driverless Futures as a project is a potentially huge contributor.”

In response to audience questions about the possible ethical dilemmas raised by artificial intelligence in the wild, the minister hinted at his background in academic philosophy. He argued against the assumption that ethics could just be programmed into machines:

“Someone has made a decision within a company that a vehicle will have a certain set of outcomes when presented with a certain set of use cases, so when someone dies under those circumstances, it’s much easier potentially to say ’well this isn’t some random activity because the person was distracted etc. This is the result of a concrete human decision at some point’.”

He then took issue with the view, prevalent in regulatory debates, that new technologies require the tearing up and rewriting of existing laws. Norman’s view, following Edmund Burke, on whom he has literally written the book, is that “We have hundreds of years of liability law in this country… which has intelligence encoded within it.” The Burkean tradition is to be sceptical of innovation for innovation’s sake, which transport planners might agree is a sensible starting point.

For the panel discussion that followed, we added Lucy Yu from FiveAI, whose self-driving cars have just started trials on London’s streets, Sarah Castell from IpsosMORI, Steve Gooding from the RAC Foundation and Paul Nightingale, director of strategy at the ESRC. Each of them, from their different standpoints, highlighted the social complexities that should be acknowledged, but are often neglected, as exuberance builds around new technologies.

Earlier in the day, our project also benefitted from the collective wisdom of more than 40 stakeholders from the worlds of transport and tech. We asked them to discuss a host of topics, from data-sharing to car-sharing and crash investigation to segregated highways. Their insights will help us hone our research questions over the summer.

We are enormously grateful to all of our workshop participants, speakers and audience members, as well as Nesta, ESRC and the Turing Institute.

We hope a video will be available shortly. Watch this space.

In the meantime, take a look at some of the publications we released to coincide with the launch:

Tom Cohen: Warning: we may be sleep-walking into an automated vehicle future, Intelligent transport