Launch mode

We’re on the road. Last week saw the public launch event of the Driverless Futures? project. Thanks to Nesta and our funders, the Alan Turing Institute and the Economic and Social Research Council, we hosted 150 people for an evening discussion in the middle of London. We were also lucky to have minister for transport Jesse Norman on one of his rare days without a constitutionally groundbreaking parliamentary debate. The minister arrived (by bike) from Parliament to introduce the event.

He began with the opening sentence of one of our launch publications: ‘Less than a decade ago, self-driving cars seemed impossible. Now, we are told, they are inevitable.’

“I certainly don’t regard them as impossible, but I don’t regard them as inevitable either. That means there’s a gap in the middle that needs to be filled by intelligent reflection and good public policy… It’s a topic of enormous interest to me. I couldn’t be more delighted to see Driverless Futures taking it up as they have done.”

As anyone who has been following policy in this area would expect, he made much of the economic opportunities, but advocated a cautious approach, learning from the rapid emergence of cars in 20th-century cities:

“That transition was not well-understood… These changes took place and there wasn’t that reflective understanding of where they were going… and it’s arguable that other countries made better choices than we did… It’s a cautionary tale, I think, for what can happen if you don’t adopt the kind of inclusive and comprehensive view to which Driverless Futures as a project is a potentially huge contributor.”

In response to audience questions about the possible ethical dilemmas raised by artificial intelligence in the wild, the minister hinted at his background in academic philosophy. He argued against the assumption that ethics could just be programmed into machines:

“Someone has made a decision within a company that a vehicle will have a certain set of outcomes when presented with a certain set of use cases, so when someone dies under those circumstances, it’s much easier potentially to say ‘well this isn’t some random activity because the person was distracted etc. This is the result of a concrete human decision at some point’.”

He then took issue with the view, prevalent in regulatory debates, that new technologies require the tearing up and rewriting of existing laws. Norman’s view, following Edmund Burke, on whom he has literally written the book, is that “We have hundreds of years of liability law in this country… which has intelligence encoded within it.” The Burkean tradition is to be sceptical of innovation for innovation’s sake, which transport planners might agree is a sensible starting point.

For the panel discussion that followed, we added Lucy Yu from FiveAI, whose self-driving cars have just started trials on London’s streets, Sarah Castell from Ipsos MORI, Steve Gooding from the RAC Foundation and Paul Nightingale, director of strategy at the ESRC. Each of them, from their different standpoints, highlighted the social complexities that should be acknowledged, but are often neglected, as exuberance builds around new technologies.

Earlier in the day, our project also benefitted from the collective wisdom of more than 40 stakeholders from the worlds of transport and tech. We made them discuss a host of topics, from data-sharing to car-sharing and from crash investigation to segregated highways. Their insights will help us hone our research questions over the summer.

We are enormously grateful to all of our workshop participants, speakers and audience members, as well as Nesta, ESRC and the Turing Institute.

We hope a video will be available shortly. Watch this space.

In the meantime, take a look at some of the publications we released to coincide with the launch:

Tom Cohen: Warning: we may be sleep-walking into an automated vehicle future, Intelligent Transport

Is shared space dead?

Exhibition Road, one of the UK’s largest experiments in ‘shared space’, is to be remodelled, bringing back the apparatus that previously separated cars from pedestrians. This news comes a week after the death of Ben Hamilton-Baillie, one of the UK designers who fought hardest to rethink the way traffic is organised.


I have argued that shared space is a useful way to think about the democratisation of innovation. Rather than separating our technical and social interests, we should look for ways to mix them up, even if it means that the outcomes are messy and, potentially, slow. If we expect the public to stay in their lane, it is too easy for their concerns to become invisible. In the governance of innovation, there are various ways in which public concerns get contained. For artificial intelligence, it has become common to talk about ‘ethics’, as though that captures the worries that people might have (Google recently announced the creation of a new expert ethics board). The risk is that such moves push the politics of AI aside, allowing the juggernaut of innovation to carry on at full speed, just as before. A shared space alternative might instead involve ongoing public engagement and a welcoming of external regulation.

However, shared space has always had its critics. Even when it works and the balance of power between cars and pedestrians is shifted, there are road users who remain vulnerable. Shared space in London has been wounded by campaigns from disability rights groups and a report by Lord Holmes.

For self-driving cars, the analogy may need to come back down to earth. The arrival of intelligent transport is likely to force upgrades in infrastructure that rationalise streets, making the intentional mess of shared space even less attractive. As with the arguments about shared space, these are likely to circle around questions of safety. But should safe space be our only concern?

Who killed Elaine Herzberg? One year on from the Uber crash

Today marks the anniversary of the crash in Tempe, Arizona, in which Elaine Herzberg became the first person to be killed by a self-driving car. (An excellent piece published yesterday at AZ Central updates the story).

Here is an excerpt from my forthcoming book about the need for governments to regain some control of new technologies. (A report from the National Transportation Safety Board, due this summer, will provide more details on the crash and valuable lessons for policymakers. I admit the recent Boeing 737 crash, which the NTSB is now investigating alongside the Uber one, makes my point about air safety seem already off-kilter).

NTSB schematic showing the point when Uber’s software determined emergency braking was needed to mitigate collision (from Flickr)

Who killed Elaine Herzberg? A prologue

Elaine Herzberg did not know that she was part of an experiment. She was pushing a bicycle, carrying heavy bags and moving slowly. It was 10pm on a dark desert night in Tempe, Arizona. She had crossed three lanes of a four-lane road before she was hit.

Herzberg was run down by a Volvo SUV travelling at 38 miles per hour and pronounced dead at 10:30pm. The next day, the officer in charge of the investigation rushed to blame the pedestrian. Police Chief Sylvia Moir told a local newspaper, “It’s very clear it would have been difficult to avoid this collision… she came from the shadows right into the roadway… the driver said it was like a flash.” According to the rules of the road, Herzberg should not have been there. Had she been at the crosswalk just down the road, things would probably have turned out differently.

Rafaela Vasquez was behind the wheel, but she wasn’t driving. The car, operated by Uber, was in autonomous mode. Vasquez’s job was to monitor the computer that was doing the driving and take over if anything went wrong. A few days after the crash, the police released a video from a camera on the rear-view mirror. It showed Vasquez looking down at her knees in the seconds before the crash and for almost a third of the 21-minute journey that led up to it. Data taken from her phone suggested that she had been watching an episode of ‘The Voice’ rather than the road. Embarrassingly for the police chief, her colleagues’ investigation calculated that, had Vasquez been looking at the road, she would have been able to stop more than 40 feet before impact.

Drivers and pedestrians make mistakes all the time. Official statistics attribute more than 90% of crashes to human error. The Tempe Police report concluded that the crash had been caused by human frailties on both sides: Herzberg should not have been in the road; Vasquez, for her part, should have seen the pedestrian, should have taken control of the car and should have been paying attention to her job. These factors are ‘proximate causes’, but if we focus only on them we fail to learn from the novelty of the situation. Herzberg was the first pedestrian to be killed by a self-driving car. The Uber crash was not just a case of human error. It was also a failure of technology.

Here was a car on a public road in which the driving had been delegated to a computer. A thing that had very recently seemed impossible had become, on the streets of Arizona, mundane, so mundane that the person who was supposed to be checking the system had, in effect, switched off. The car’s sensors – 360-degree radar, short- and long-range cameras, a lidar laser scanner on the roof and a GPS system – were supposed to provide superhuman awareness of the surroundings. The car’s software was designed to interpret this information based on thousands of hours of similar experiences, drawing on vast quantities of data to identify objects, predict what they were going to do next and plot a safe route. This was artificial intelligence in the wild: not playing chess or translating text but steering two tonnes of metal.

When high-profile transport disasters happen in the US, the National Transportation Safety Board is called in. This organisation is less interested in blame than in how to learn from mistakes to make things safer. This laser focus on learning from disaster is one reason why air travel is so astonishingly safe. In 2017, for the first time, a whole year passed in which not a single person died in a commercial passenger jet crash. If self-driving cars are going to be as safe as aeroplanes, we should pay close attention to the NTSB. Their initial report on the Uber crash concluded that the car’s sensors had detected an object in the road six seconds before the crash. The software classified Herzberg “as an unknown object, as a vehicle, and then as a bicycle”, in the NTSB’s words, but the car continued. A second before the car hit Herzberg, the driver took the wheel but swerved only slightly. Vasquez only hit the brakes after the crash.

As well as the proximate causes, Elaine Herzberg’s death was the result of a set of more distant choices about technology and how it should be developed. Claiming that they were in a race against other manufacturers, Uber chose to test their system quickly and cheaply. Other self-driving car companies put two or more qualified engineers in each of their test vehicles. Vasquez was alone and she was no test pilot. The only qualification she needed before starting work was a driving licence.

Uber’s strategy filtered all the way down into its cars’ software, which was much less intelligent than the company’s hype had implied. As the company’s engineers worked out how to make sense of the information coming from the car’s sensors, they had to balance the risk of a false positive (detecting a thing that isn’t really there) against the risk of a false negative (failing to react to an object that turns out to be dangerous). After earlier tests of self-driving cars in which software overreacted to things like steam, plastic bags and shadows on the roads, engineers began to tune their systems away from false positives, accepting a greater risk of false negatives. The misidentification of Elaine Herzberg was partly the result of a conscious choice about how cautious the technology needed to be in order to be safe enough. One engineer at Uber later told a journalist that the company had “refused to take responsibility. They blamed it on the homeless lady [Herzberg], the Latina with a criminal record driving the car [Vasquez], even though we all knew Perception [Uber’s software] was broken.”
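To make that trade-off concrete, here is a purely illustrative sketch. The labels, confidence scores and threshold values are invented for the example and have nothing to do with Uber’s actual perception software; the point is only to show how a single detection-confidence threshold shifts a system between the two kinds of error.

```python
# Illustrative toy example only: not Uber's perception stack.
# Each hypothetical detection carries a confidence that the object
# is a real obstacle worth braking for.
detections = [
    ("plastic bag", 0.35),
    ("steam", 0.20),
    ("pedestrian with bicycle", 0.55),
]

def should_brake(confidence: float, threshold: float) -> bool:
    """Brake only if the system is confident enough that the object is real."""
    return confidence >= threshold

for threshold in (0.3, 0.6):
    print(f"threshold = {threshold}")
    for label, confidence in detections:
        action = "brake" if should_brake(confidence, threshold) else "ignore"
        print(f"  {label}: {action}")

# With the low threshold the car brakes for the plastic bag as well as the
# pedestrian (a false positive); with the high threshold it ignores the bag
# but also the pedestrian (a false negative). Tuning the threshold is a
# choice about which kind of error to tolerate.
```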

The companies that had built the hardware also blamed Uber. The president of Velodyne, which made the car’s lidar, claimed that its sensor would have seen the pedestrian, telling Bloomberg, ‘Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn’t make the decision to put on the brakes or get out of her way.’ Volvo made clear that they were not part of the testing. They provided the body of the car, not its brain. An automatic braking system built into the Volvo – using well-established technology – would almost certainly have saved Herzberg’s life. But this had been switched off by Uber engineers, who were testing their own technology and didn’t want interference from another system.

We don’t know what Elaine Herzberg was thinking when she set off into the road. Nor do we know exactly what the car was thinking. Machines make decisions differently from humans and these decisions are often inscrutable because of the complexity of machine learning. However, the evidence from the crash points to a reckless approach to the development of a new technology. The company shouldered some of the blame, agreeing an out-of-court settlement with the victim’s family and changing their approach to safety. But to point the finger only at the company would be to ignore the context. Roads are dangerous places, particularly in the US and particularly for pedestrians. A century of decisions by policymakers and carmakers has produced a system that gives power and freedom to drivers. Tempe, part of the sprawling metropolitan area of Phoenix, is car-friendly. The roads are wide and neat and the weather is good. It is an excellent place to test a self-driving car. For a pedestrian, the place and its infrastructure can feel hostile. The official statistics bear this out. In 2017, Arizona was the most dangerous state for pedestrians in the US.

In addition to the climate and the tidiness of its roads, Uber had been attracted to Tempe by the governor of Arizona, Doug Ducey. The company had started their testing in San Francisco, near their headquarters. But when one of their self-driving cars ran a red light, California regulators told Uber that they needed a $150 permit. Uber objected and Ducey seized his opportunity. With the Governor’s blessing, the company had already been testing in secret on the streets of Phoenix. Ducey could now go public and claim that he had tempted a big tech company away from Silicon Valley. He tweeted ‘This is what over-regulation looks like #ditchcalifornia’ and ‘Here in AZ we WELCOME this kind of technology & innovation! #ditchcalifornia #AZmeansBIZ’.

With almost no oversight (the committee Ducey created met only twice and had just one independent member), Uber moved their experiments to Arizona in 2016. When Herzberg was killed less than 18 months later, Ducey’s enthusiasm collapsed and Uber were banned from their new laboratory. Members of Herzberg’s family thought that the design of the city’s streets and the Governor’s embrace of Uber were causes of her death, and sued the state. Two months after the crash, the Governor of Ohio saw an opportunity, announcing plans to make his state ‘the wild, wild West’ for unregulated self-driving car testing.

This regulatory race to the bottom is unedifying, but the pattern is depressingly familiar. New technologies arrive in society without instructions for how they should be regulated. The rules for liability in self-driving car crashes have not yet been written. Nor have most countries discussed who should be in charge of developing the technology. It is not clear who is driving. And, despite claims that this technology is inevitable, it is unclear where we are going. It is not obvious what a future full of self-driving cars would look like. When accidents happen, it is hard to find the person responsible and easy for those involved to blame others or claim it was a freak occurrence. It is all too common for regulation to be an afterthought. In the world of aviation, it’s called a tombstone mentality: defects are noticed, lessons are learned and rules are written in grim hindsight. For cars, a century’s worth of lessons is known, but systematically ignored. Self-driving cars are an opportunity to do things differently, to govern new technologies so that they are safe, efficient and widely beneficial. If we leave developers to their own devices, we will be wasting the potential of the technology.

This is an excerpt from ‘Who’s Driving? New technologies and the collaborative state’, to be published in 2019 by Palgrave.

Can members of the public help redirect new technologies?

A new paper in Science from a group of social scientists updates the evidence on public deliberation. The models, including Citizens’ Juries, Citizens’ Assemblies and participatory budgeting processes, have been around for a while. My Driverless Futures colleagues and I have been involved in various ways over the years. Some of the things published by Demos, where I used to work, Involve, where I am a trustee, and Nesta are worth looking at. (See also this episode of Ed Miliband’s Reasons to be Cheerful podcast, featuring Sarah Allan from Involve). For an academic discussion, there’s a paper I wrote with Simon Lock and James Wilsdon in 2014. These experiments in democracy are particularly relevant for science and technology, where the assumption has in the past been that the questions are too difficult for citizens.

There are a few reasons this is getting attention again now. First, the growth of populism. As politicians lean on the ‘will of the people’ to justify their policies, it becomes more important to find out what people really think and give them ways to genuinely influence their own futures. Second, policymakers are struggling to understand and control the power of technology companies. Third, there have been some high-profile examples in which deliberation has helped break policy deadlocks. In Ireland, a Citizens’ Assembly that preceded the referendum on abortion provided an interesting model for how not to screw up democracy.

We have recently been part of the team running the UK’s public dialogue exercise on self-driving cars, the report for which is due out this summer. We helped 150 citizens in five locations discuss the technology, its implications and options for its governance. Without revealing any spoilers, we can say that the clear conclusion was that members of the public can and should help set the direction for new technologies. Watch this space for more…