Who killed Elaine Herzberg? One year on from the Uber crash

Today marks the anniversary of the crash in Tempe, Arizona, in which Elaine Herzberg became the first person to be killed by a self-driving car. (An excellent piece published yesterday at AZ Central updates the story).

Here is an excerpt from my forthcoming book about the need for governments to regain some control of new technologies. (A report from the National Transportation Safety Board, due this summer, will provide more details on the crash and valuable lessons for policymakers. I admit the recent Boeing 737 crash, which the NTSB is now investigating alongside the Uber one, makes my point about air safety seem already off-kilter).

NTSB schematic showing the point when Uber’s software determined emergency braking was needed to mitigate collision (from Flickr)

Who killed Elaine Herzberg? A prologue

Elaine Herzberg did not know that she was part of an experiment. She was pushing a bicycle, carrying heavy bags and moving slowly. It was 10pm on a dark desert night in Tempe, Arizona. She had crossed three lanes of a four-lane road before she was hit.

Herzberg was run down by a Volvo SUV travelling at 38 miles per hour and pronounced dead at 10:30pm. The next day, the officer in charge of the investigation rushed to blame the pedestrian. Police Chief Sylvia Moir told a local newspaper, “It’s very clear it would have been difficult to avoid this collision… she came from the shadows right into the roadway… the driver said it was like a flash.” According to the rules of the road, Herzberg should not have been there. Had she been at the crosswalk just down the road, things would probably have turned out differently.

Rafaela Vasquez was behind the wheel, but she wasn’t driving. The car, operated by Uber, was in autonomous mode. Vasquez’s job was to monitor the computer that was doing the driving and take over if anything went wrong. A few days after the crash, the police released a video from a camera on the rear-view mirror. It showed Vasquez looking down at her knees in the seconds before the crash and for almost a third of the 21-minute journey that led up to it. Data taken from her phone suggested that she had been watching an episode of ‘The Voice’ rather than the road. Embarrassingly for the police chief, her colleagues’ investigation calculated that, had Vasquez been looking at the road, she would have been able to stop more than 40 feet before impact.

Drivers and pedestrians make mistakes all the time. Official statistics attribute more than 90% of crashes to human error. The Tempe Police report concluded that the crash had been caused by human frailties on both sides: Herzberg should not have been in the road; Vasquez, for her part, should have seen the pedestrian, taken control of the car and paid attention to her job. These factors are ‘proximate causes’, but if we focus on these we fail to learn from the novelty of the situation. Herzberg was the first pedestrian to be killed by a self-driving car. The Uber crash was not just a case of human error. It was also a failure of technology.

Here was a car on a public road in which the driving had been delegated to a computer. A thing that had very recently seemed impossible had become, on the streets of Arizona, mundane, so mundane that the person who was supposed to be checking the system had, in effect, switched off. The car’s sensors – 360-degree radar, short- and long-range cameras, a lidar laser scanner on the roof and a GPS system – were supposed to provide superhuman awareness of the surroundings. The car’s software was designed to interpret this information based on thousands of hours of similar experiences, drawing on vast quantities of data to identify objects, predict what they were going to do next and plot a safe route. This was artificial intelligence in the wild: not playing chess or translating text but steering two tonnes of metal.
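The loop described above – sense, classify, predict, plan – can be sketched in a few lines of code. This is a deliberately crude illustration, not a reconstruction of Uber’s actual software: every class name, threshold and rule here is hypothetical.

```python
# Illustrative sketch of a self-driving perception-and-planning loop.
# All names, thresholds and logic are hypothetical simplifications,
# NOT a reconstruction of any real system.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "bicycle", "vehicle", "unknown"
    confidence: float  # classifier confidence, 0.0 to 1.0
    distance_m: float  # distance from the car, in metres

def plan_action(detections, braking_distance_m=40.0):
    """Decide an action from one frame of sensor detections.

    Sense -> classify -> predict -> act, collapsed into a single rule:
    brake for any confidently classified object inside braking distance.
    """
    for d in detections:
        if d.distance_m <= braking_distance_m and d.confidence >= 0.5:
            return "brake"
    return "continue"

# An object is in range, but the classifier is unsure what it is.
frame = [Detection("unknown", 0.3, 35.0)]
print(plan_action(frame))  # prints "continue": low-confidence detections are ignored
```

Even in this toy version, the crucial point is visible: whether the car reacts depends not only on what the sensors see, but on how confident the software has to be before it acts.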

When high-profile transport disasters happen in the US, the National Transportation Safety Board is called in. This organisation is less interested in blame than in how to learn from mistakes to make things safer. This laser focus on learning from disaster is one reason why air travel is so astonishingly safe. In 2017, for the first time, a whole year passed in which not a single person died in a commercial passenger jet crash. If self-driving cars are going to be as safe as aeroplanes, we should pay close attention to the NTSB. Their initial report on the Uber crash concluded that the car’s sensors had detected an object in the road six seconds before the crash. The software classified Herzberg “as an unknown object, as a vehicle, and then as a bicycle”, in the NTSB’s words, but the car continued. A second before the car hit Herzberg, the driver took the wheel but swerved only slightly. Vasquez only hit the brakes after the crash.

As well as the proximate causes, Elaine Herzberg’s death was the result of a set of more distant choices about technology and how it should be developed. Claiming that they were in a race against other manufacturers, Uber chose to test their system quickly and cheaply. Other self-driving car companies put two or more qualified engineers in each of their test vehicles. Vasquez was alone and she was no test pilot. The only qualification she needed before starting work was a driving licence.

Uber’s strategy filtered all the way down into its cars’ software, which was much less intelligent than the company’s hype had implied. As the company’s engineers worked out how to make sense of the information coming from the car’s sensors, they had to balance the risk of a false positive (detecting a thing that isn’t really there) against the risk of a false negative (failing to react to an object that turns out to be dangerous). After earlier tests of self-driving cars in which software overreacted to things like steam, plastic bags and shadows on the roads, engineers began to tune their systems away from false positives, accepting a greater risk of false negatives. The misidentification of Elaine Herzberg was partly the result of a conscious choice about how safe the technology needed to be in order to be safe enough. One engineer at Uber later told a journalist that the company had “refused to take responsibility. They blamed it on the homeless lady [Herzberg], the Latina with a criminal record driving the car [Vasquez], even though we all knew Perception [Uber’s software] was broken.”
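The trade-off the engineers faced can be shown with a toy threshold example. The confidence scores and ‘ground truth’ below are entirely made up, but they illustrate the mechanism: raising a detection threshold to suppress false positives (phantom braking for steam or plastic bags) necessarily increases false negatives (missing real hazards).

```python
# Toy illustration of the false-positive / false-negative trade-off in
# obstacle detection. All scores and ground truth are invented.

def count_errors(detections, threshold):
    """detections: list of (confidence, is_real_hazard) pairs.
    The system brakes whenever confidence >= threshold."""
    false_positives = sum(1 for conf, real in detections
                          if conf >= threshold and not real)
    false_negatives = sum(1 for conf, real in detections
                          if conf < threshold and real)
    return false_positives, false_negatives

# Hypothetical frames: steam and bags (not real hazards) can score
# moderately high; an unusual pedestrian may score lower than a car.
frames = [
    (0.70, False),  # steam
    (0.55, False),  # plastic bag
    (0.90, True),   # car ahead
    (0.60, True),   # pedestrian pushing a bicycle
]

print(count_errors(frames, threshold=0.5))   # (2, 0): two phantom stops, nothing missed
print(count_errors(frames, threshold=0.75))  # (0, 1): a smoother ride, but the pedestrian is missed
```

There is no threshold in this toy data that eliminates both kinds of error at once; choosing one is a judgement about which failures are acceptable.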

The companies that had built the hardware also blamed Uber. The president of Velodyne, the company that made the car’s lidar, claimed that their sensor would have seen the pedestrian, telling Bloomberg, ‘Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn’t make the decision to put on the brakes or get out of her way.’ Volvo made clear that they were not part of the testing. They provided the body of the car, not its brain. An automatic braking system that was built into the Volvo – using well-established technology – would almost certainly have saved Herzberg’s life. But this had been switched off by Uber engineers, who were testing their own technology and didn’t want interference from another system.

We don’t know what Elaine Herzberg was thinking when she set off into the road. Nor do we know exactly what the car was thinking. Machines make decisions differently from humans and these decisions are often inscrutable because of the complexity of machine learning. However, the evidence from the crash points to a reckless approach to the development of a new technology. The company shouldered some of the blame, agreeing an out-of-court settlement with the victim’s family and changing their approach to safety. But to point the finger only at the company would be to ignore the context. Roads are dangerous places, particularly in the US and particularly for pedestrians. A century of decisions by policymakers and carmakers has produced a system that gives power and freedom to drivers. Tempe, part of the sprawling metropolitan area of Phoenix, is car-friendly. The roads are wide and neat and the weather is good. It is an excellent place to test a self-driving car. For a pedestrian, the place and its infrastructure can feel hostile. The official statistics bear this out. In 2017, Arizona was the most dangerous state for pedestrians in the US.

In addition to the climate and the tidiness of its roads, Uber had been attracted to Tempe by the governor of Arizona, Doug Ducey. The company had started their testing in San Francisco, near their headquarters. But when one of their self-driving cars ran a red light, California regulators told Uber that they needed a $150 permit. Uber objected and Ducey seized his opportunity. With the Governor’s blessing, the company had already been testing in secret on the streets of Phoenix. Ducey could now go public and claim that he had tempted a big tech company away from Silicon Valley. He tweeted ‘This is what over-regulation looks like #ditchcalifornia’ and ‘Here in AZ we WELCOME this kind of technology & innovation! #ditchcalifornia #AZmeansBIZ’.

With almost no oversight (the committee Ducey created met only twice and had just one independent member), Uber moved their experiments to Arizona in 2016. When Herzberg was killed less than 18 months later, Ducey’s enthusiasm collapsed and Uber were banned from their new laboratory. Members of Herzberg’s family thought that the design of the city’s streets and the Governor’s embrace of Uber were causes of her death, and sued the state. Two months after the crash, the Governor of Ohio saw an opportunity, announcing plans to make his state ‘the wild, wild West’ for unregulated self-driving car testing.

This regulatory race to the bottom is unedifying, but the pattern is depressingly familiar. New technologies arrive in society without instructions for how they should be regulated. The rules for liability in self-driving car crashes have not yet been written. Nor have most countries discussed who should be in charge of developing the technology. It is not clear who is driving. And, despite claims that this technology is inevitable, it is unclear where we are going. It is not obvious what a future full of self-driving cars would look like. When accidents happen, it is hard to find the person responsible and easy for those involved to blame others or claim it was a freak occurrence. It is all too common for regulation to be an afterthought. In the world of aviation, it’s called a tombstone mentality: defects are noticed, lessons are learned and rules are written in grim hindsight. For cars, a century’s worth of lessons are known, but systematically ignored. Self-driving cars are an opportunity to do things differently, to govern new technologies so that they are safe, efficient and have widespread benefits. If we leave developers to their own devices, we will be wasting the potential of the technology.

This is an excerpt from ‘Who’s Driving? New technologies and the collaborative state’, to be published in 2019 by Palgrave.

Can members of the public help redirect new technologies?

A new paper in Science from a group of social scientists updates the evidence on public deliberation. The models, including Citizens’ Juries, Citizens’ Assemblies and participatory budgeting processes, have been around for a while. I and Driverless Futures colleagues have been involved in various ways over the years. Some of the things published by Demos (where I used to work), Involve (where I am a trustee) and Nesta are worth looking at. (See also this episode of Ed Miliband’s Reasons to be Cheerful podcast, featuring Sarah Allan from Involve). For an academic discussion, there’s a paper I wrote with Simon Lock and James Wilsdon in 2014. These experiments in democracy are particularly relevant for science and technology, where the assumption has in the past been that the questions are too difficult for citizens.

There are a few reasons this is getting attention again now. First, the growth of populism. As politicians lean on the ‘will of the people’ to justify their policies, it becomes more important to find out what people really think and give them ways to genuinely influence their own futures. Second, policymakers are struggling to understand and control the power of technology companies. Third, there have been some high-profile examples in which deliberation has helped break policy deadlocks. In Ireland, a Citizens’ Assembly that preceded the referendum on abortion provided an interesting model for how not to screw up democracy.

We have recently been part of the team running the UK’s public dialogue exercise on self-driving cars, the report for which is due out this summer. We helped 150 citizens in five locations discuss the technology, its implications and options for its governance. Without revealing any spoilers, the clear conclusion was that members of the public can and should help set the direction for new technologies. Watch this space for more…

Self-driving cars, cities and planning – A comment in the journal Planning Theory and Practice

A new collection on self-driving cars has been published by the journal Planning Theory and Practice. The pieces have all ended up bunched together, which is odd. But here is my commentary, which refers to some of the other papers in the collection.

Putting technology in its place

Autonomous vehicles (AVs) are a sort of mirage. They are, we are told, just around the corner. But look closer and they seem impossibly distant. In Phoenix, Arizona, AVs are being tested on citizens’ doorsteps. Some can choose to take part in the experiment. Others may be more passive but, whether they like it or not, their city has become a laboratory. Meanwhile, in Xiong’an, south of Beijing, the Chinese government is planning a new city around AVs. Having bought the claims of companies like Baidu, planners are creating a ‘new economic area’ with coordinated traffic control, less parking and streets that are designed to be easily machine-readable.

The protocols and results of experiments in Phoenix may be only partially relevant to London, Rome or Sydney. The ambition of planners in Xiong’an would be unthinkable in most other parts of the world. And many places would be concerned about the democratic legitimacy of either allowing experiments on open roads or creating large-scale experiments from scratch using public money. But despite the enormous variability of places, cultures and mobilities, there is huge momentum behind the project of making prophecies for AV technology self-fulfilling.

Resisting or redirecting this momentum is hard, but, as this set of papers shows, planners know that technology companies have no monopoly on imagination. In a world of disruption, in which moving fast and breaking things (to use Mark Zuckerberg’s now-dropped slogan) is de rigueur, planning is unfashionable. The cachet that comes with innovation may distract policymakers from asking who is really likely to benefit. It falls to planners to put technologies in their place.

Louise Reardon asks how planners should engage with ‘the transition to AVs’. She sees the momentum pushing this transition, but also the uncertainty that sits beneath the utopian veneer. Uncertainty, she says, is ‘the planner’s nemesis’. One can see why so many planners may be seduced by the apparent certainty of technological promises that are unbent by the real world.

Reardon explains how planners must put the values back into a debate that is too easily depoliticised. Planners are forced to ask who would win and lose and what else the technology might do even if it delivers on its promises. She suggests changing the question, moving from ‘how can we effectively implement AVs in our city?’ to ‘Do AVs fit with the vision we have for our city?’. As a researcher from Science and Technology Studies (STS), I would urge us all to remember that ‘AVs’ are not yet settled. The interpretive flexibility of this technology is an opportunity for planners to get involved in defining and shaping rather than just using this technology. Perhaps the question should be ‘What could AVs look like in our city?’ Technologies and social systems evolve alongside one another. Governing the transition therefore also means governing the technology.

This realisation should be empowering for planners. But, as John Stone and Crystal Legacy point out, their authority has in many places been eaten away by incrementally neoliberal policies. They may feel paralysed by uncertainty and a fear of failure. Stone and Legacy’s recommendation (and James Harris’s in his paper) is to learn from history. The history of the non-autonomous automobile in cities where the technology came first, before planning had begun to assert itself as a discipline or a practice, shows us what is at stake. Their question is, “Can planners today, with all our powers of analysis and the lessons of history, better anticipate the future and help avoid the creation of new problems?” One depressing note of caution from STS would be that policymakers find it all too easy to claim that, this time, it’ll be different.

Jennifer Kent is willing to make a prediction: that AVs will not cut car ownership or car use. Within the clean vision of technologists, two assumptions are that AVs will be shared and that they will complement public transport. Both are questionable. There is compelling social research that tells us, first, that many people don’t like sharing and, second, that in newer cities, private car use is ‘literally cemented into the city structure’.

I agree with Greg Marsden that we should be less fatalistic; we should hold onto the possibility that public-value AV systems could make for better cities. Marsden says that rather than starting from technology, cities should start with questions of purpose. This desire to, as Bruno Latour puts it, ‘reaffirm the sovereignty of ends’ and put means second is laudable but, as Latour goes on to argue, technologies are not just means to ends, solutions to problems or ways to get from A to B. They are better thought of as detours. They create as well as solve problems and they allow for the emergence of unanticipated futures.[1] Marsden finds reasons for both fear and hope. City planners could be forced to acquiesce in the face of technological momentum, but cities are places in which the social is inescapable; technologies cannot claim autonomy for long.

James Harris offers an optimistic alternative. He sees the possibility of public-transport AVs and automated, safe, efficient goods delivery. Well-governed AVs could mean dramatic improvements in liveability. Poorly-governed AVs would lead to yet more sprawl. Getting governance right might demand more honesty. Do we really believe, for example, that AVs should just mix with ordinary traffic or are we likely to see segregated lanes and other trade-offs? Elliot Fishman is similarly optimistic and similarly robust in his critique of the technology-led visions that are currently allowed to dominate. He asks us to remember the absurdity of our current situation, in which gigantically over-engineered cars are privately owned and barely used. The possibilities for technological improvement are clear, but the moves are not obvious ones. If we are not careful, we risk dramatically increasing car use as easier, more accessible, more efficient AVs travel with and without occupants on journeys that would previously have been walked. Fishman suggests that it is time to rethink how we allocate road space and price its usage.

Taken together, these papers provide a strong justification for planners to assert themselves in the debate over our autonomous future, as well as a sense of the challenges involved in doing so. The stories being offered by the developers of AVs are compelling ones. Some cities are already allowing their transport planning to be buffeted by the distant promise of magical technologies. When the promises of self-driving cars meet the material world, it quickly becomes apparent that ‘autonomy’ is a political claim rather than a technological possibility.[2] The ‘autonomous’ vehicle will be just as entangled in urban life as any other city dweller. No car is an island. It takes planners to demonstrate this.

[1] Latour, B. (2002) ‘Morality and Technology: The End of the Means’, Theory, Culture & Society, 19(5/6): 247–260.

[2] Stilgoe, J. (2018) ‘Seeing Like a Tesla: How Can We Anticipate Self-Driving Worlds?’, Glocalism, 2017(3).

From deficit to dialogue – A comment on PAVE

As the tech world gathered in Las Vegas for CES last week, a coalition of self-driving players launched a new initiative – Partners for Automated Vehicle Education. According to its site, PAVE comprises ‘disability advocacy groups and safety groups… traditional automakers from the United States and around the world, auto component makers, startup technology companies, established tech firms [and] insurance firms.’ They want to tell people ‘the facts’ about self-driving cars, get consumers using them and tell policymakers about the possibilities of this new tech.

PAVE say that their goal is “purely educational: the coalition does not advocate for a particular technology.” It is clear, however, that the motive is not completely pure. They are clearly advocating for self-driving car technologies, and should admit as much. A pretence at neutrality won’t survive a test of public credulity for long. PR is PR.

The more important question, I think, is whether this campaign has identified the right problem. In the 1980s, scientists and innovators recognised a growing public scepticism. Their prescription was one of education. In the UK, we saw institutionalised programmes to improve the ‘public understanding of science’. These programmes were based on what Brian Wynne later called the ‘Deficit Model’. The problem was seen as one of public ignorance, which could be corrected with more public information. It was the wrong diagnosis. As controversies grew around new technologies like genetically modified crops, it quickly became clear that the problem was not a deficient public, but institutions of science and innovation that didn’t really understand what the public thought. Members of the public had a range of legitimate questions about the technology and these questions weren’t being listened to. The things that scientists and the biotech industry had decided were ‘the facts’ about GM crops were not the only facts that members of the public were interested in.

Self-driving cars promise to be transformational. The people currently in the driving seat don’t have all the answers. Nor do they know what all of the relevant questions are. If I were PAVE, I would be looking not to educate but to listen. There is an urgent need for dialogue. If experience with previous emerging technologies is anything to go by, PAVE’s current approach risks patronising and alienating the public that it wants to persuade.

On the road

The Driverless Futures? project officially began on the first day of 2019. The research has begun and we are all looking forward to getting out and talking to the people who are, or should be, thinking about machine learning, transport and the governance of self-driving cars. We are planning a launch event for the spring and we will work out a way that people can keep in touch with the project. In the meantime, any questions or comments can come through here.