Who killed Elaine Herzberg? One year on from the Uber crash

Today marks the anniversary of the crash in Tempe, Arizona, in which Elaine Herzberg became the first pedestrian to be killed by a self-driving car. (An excellent piece published yesterday at AZ Central updates the story).

Here is an excerpt from my forthcoming book about the need for governments to regain some control of new technologies. (A report from the National Transportation Safety Board, due this summer, will provide more details on the crash and valuable lessons for policymakers. I admit that the recent Boeing 737 crash, which the NTSB is now investigating alongside the Uber one, already makes my point about air safety seem off-kilter).

NTSB schematic showing the point when Uber’s software determined emergency braking was needed to mitigate collision (from Flickr)

Who killed Elaine Herzberg? A prologue

Elaine Herzberg did not know that she was part of an experiment. She was pushing a bicycle, carrying heavy bags and moving slowly. It was 10pm on a dark desert night in Tempe, Arizona. She had crossed three lanes of a four-lane road before she was hit.

Herzberg was run down by a Volvo SUV travelling at 38 miles per hour and pronounced dead at 10:30pm. The next day, the officer in charge of the investigation rushed to blame the pedestrian. Police Chief Sylvia Moir told a local newspaper, “It’s very clear it would have been difficult to avoid this collision… she came from the shadows right into the roadway… the driver said it was like a flash.” According to the rules of the road, Herzberg should not have been there. Had she been at the crosswalk just down the road, things would probably have turned out differently.

Rafaela Vasquez was behind the wheel, but she wasn’t driving. The car, operated by Uber, was in autonomous mode. Vasquez’s job was to monitor the computer that was doing the driving and take over if anything went wrong. A few days after the crash, the police released a video from a camera on the rear-view mirror. It showed Vasquez looking down at her knees in the seconds before the crash and for almost a third of the 21-minute journey that led up to it. Data taken from her phone suggested that she had been watching an episode of ‘The Voice’ rather than the road. Embarrassingly for the police chief, her colleagues’ investigation calculated that, had Vasquez been looking at the road, she would have been able to stop more than 40 feet before impact.

Drivers and pedestrians make mistakes all the time. Official statistics attribute more than 90% of crashes to human error. The Tempe Police report concluded that the crash had been caused by human frailties on both sides: Herzberg should not have been in the road; Vasquez, for her part, should have seen the pedestrian, should have taken control of the car and should have been paying attention to her job. These factors are ‘proximate causes’, but focusing on them alone means failing to learn from the novelty of the situation. Herzberg was the first pedestrian to be killed by a self-driving car. The Uber crash was not just a case of human error. It was also a failure of technology.

Here was a car on a public road in which the driving had been delegated to a computer. A thing that had very recently seemed impossible had become, on the streets of Arizona, mundane, so mundane that the person who was supposed to be checking the system had, in effect, switched off. The car’s sensors – 360-degree radar, short- and long-range cameras, a lidar laser scanner on the roof and a GPS system – were supposed to provide superhuman awareness of the surroundings. The car’s software was designed to interpret this information based on thousands of hours of similar experiences, drawing on vast quantities of data to identify objects, predict what they were going to do next and plot a safe route. This was artificial intelligence in the wild: not playing chess or translating text but steering two tonnes of metal.

When high-profile transport disasters happen in the US, the National Transportation Safety Board is called in. This organisation is less interested in blame than in how to learn from mistakes to make things safer. This laser focus on learning from disaster is one reason why air travel is so astonishingly safe. In 2017, for the first time, a whole year passed in which not a single person died in a commercial passenger jet crash. If self-driving cars are going to be as safe as aeroplanes, we should pay close attention to the NTSB. Their initial report on the Uber crash concluded that the car’s sensors had detected an object in the road six seconds before the crash. The software classified Herzberg “as an unknown object, as a vehicle, and then as a bicycle”, in the NTSB’s words, but the car continued. A second before the car hit Herzberg, the driver took the wheel but swerved only slightly. Vasquez did not hit the brakes until after the crash.

As well as the proximate causes, Elaine Herzberg’s death was the result of a set of more distant choices about technology and how it should be developed. Claiming that they were in a race against other manufacturers, Uber chose to test their system quickly and cheaply. Other self-driving car companies put two or more qualified engineers in each of their test vehicles. Vasquez was alone and she was no test pilot. The only qualification she needed before starting work was a driving licence.

Uber’s strategy filtered all the way down into its cars’ software, which was much less intelligent than the company’s hype had implied. As the company’s engineers worked out how to make sense of the information coming from the car’s sensors, they had to balance the risk of a false positive (detecting a thing that isn’t really there) against the risk of a false negative (failing to react to an object that turns out to be dangerous). After earlier tests of self-driving cars in which software overreacted to things like steam, plastic bags and shadows on the roads, engineers began to tune their systems away from false positives, accepting a greater risk of false negatives. The misidentification of Elaine Herzberg was partly the result of a conscious choice about how safe the technology needed to be in order to be safe enough. One engineer at Uber later told a journalist that the company had “refused to take responsibility. They blamed it on the homeless lady [Herzberg], the Latina with a criminal record driving the car [Vasquez], even though we all knew Perception [Uber’s software] was broken.”
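To make that trade-off concrete, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from Uber’s actual software; the detections, confidence scores and threshold values are invented, and real perception systems are vastly more complex. The point is simply that a system which only reacts to detections above a confidence threshold will brake needlessly if the threshold is set too low, and will miss real hazards if it is raised too high.

```python
# Hypothetical illustration only: invented detections and thresholds,
# not Uber's perception software.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # what the classifier thinks it saw
    confidence: float  # classifier confidence, 0.0 to 1.0
    hazardous: bool    # ground truth, known only in hindsight


def should_brake(detection: Detection, threshold: float) -> bool:
    """React only to detections above the confidence threshold."""
    return detection.confidence >= threshold


# Invented detections from one stretch of road.
detections = [
    Detection("plastic bag", 0.55, hazardous=False),
    Detection("steam plume", 0.60, hazardous=False),
    Detection("shadow", 0.50, hazardous=False),
    Detection("pedestrian with bicycle", 0.58, hazardous=True),
]

for threshold in (0.40, 0.65):
    false_positives = sum(
        1 for d in detections if should_brake(d, threshold) and not d.hazardous
    )
    false_negatives = sum(
        1 for d in detections if not should_brake(d, threshold) and d.hazardous
    )
    print(f"threshold {threshold}: {false_positives} needless braking events, "
          f"{false_negatives} missed hazards")
```

With the low threshold the car brakes for the bag, the steam and the shadow but never misses the pedestrian; raise the threshold to silence those false alarms and the one detection that mattered is discarded. Somewhere, an engineer has to choose the number.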

The companies that had built the hardware also blamed Uber. Velodyne, which made the car’s lidar, claimed that their sensor would have seen the pedestrian. The company’s president told Bloomberg, ‘Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn’t make the decision to put on the brakes or get out of her way.’ Volvo made clear that they were not part of the testing. They provided the body of the car, not its brain. An automatic braking system built into the Volvo – using well-established technology – would almost certainly have saved Herzberg’s life. But it had been switched off by Uber engineers, who were testing their own technology and didn’t want interference from another system.

We don’t know what Elaine Herzberg was thinking when she set off into the road. Nor do we know exactly what the car was thinking. Machines make decisions differently from humans and these decisions are often inscrutable because of the complexity of machine learning. However, the evidence from the crash points to a reckless approach to the development of a new technology. The company shouldered some of the blame, agreeing an out-of-court settlement with the victim’s family and changing their approach to safety. But to point the finger only at the company would be to ignore the context. Roads are dangerous places, particularly in the US and particularly for pedestrians. A century of decisions by policymakers and carmakers has produced a system that gives power and freedom to drivers. Tempe, part of the sprawling metropolitan area of Phoenix, is car-friendly. The roads are wide and neat and the weather is good. It is an excellent place to test a self-driving car. For a pedestrian, the place and its infrastructure can feel hostile. The official statistics bear this out. In 2017, Arizona was the most dangerous state for pedestrians in the US.

In addition to the climate and the tidiness of its roads, Uber had been attracted to Tempe by the governor of Arizona, Doug Ducey. The company had started their testing in San Francisco, near their headquarters. But when one of their self-driving cars ran a red light, California regulators told Uber that they needed a $150 permit. Uber objected and Ducey seized his opportunity. With the Governor’s blessing, the company had already been testing in secret on the streets of Phoenix. Ducey could now go public and claim that he had tempted a big tech company away from Silicon Valley. He tweeted ‘This is what over-regulation looks like #ditchcalifornia’ and ‘Here in AZ we WELCOME this kind of technology & innovation! #ditchcalifornia #AZmeansBIZ’.

With almost no oversight (the committee Ducey created met only twice and had just one independent member), Uber moved their experiments to Arizona in 2016. When Herzberg was killed less than 18 months later, Ducey’s enthusiasm collapsed and Uber were banned from their new laboratory. Members of Herzberg’s family thought that the design of the city’s streets and the Governor’s embrace of Uber were causes of her death, and sued the state. Two months after the crash, the Governor of Ohio saw an opportunity, announcing plans to make his state ‘the wild, wild West’ for unregulated self-driving car testing.

This regulatory race to the bottom is unedifying, but the pattern is depressingly familiar. New technologies arrive in society without instructions for how they should be regulated. The rules for liability in self-driving car crashes have not yet been written. Nor have most countries discussed who should be in charge of developing the technology. It is not clear who is driving. And, despite claims that this technology is inevitable, it is unclear where we are going. It is not obvious what a future full of self-driving cars would look like. When accidents happen, it is hard to find the person responsible and easy for those involved to blame others or claim it was a freak occurrence. It is all too common for regulation to be an afterthought. In the world of aviation, it’s called a tombstone mentality: defects are noticed, lessons are learned and rules are written in grim hindsight. For cars, a century’s worth of lessons are known, but systematically ignored. Self-driving cars are an opportunity to do things differently, to govern new technologies so that they are safe, efficient and have widespread benefits. If we leave developers to their own devices, we will be wasting the potential of the technology.

This is an excerpt from ‘Who’s Driving? New technologies and the collaborative state’, to be published in 2019 by Palgrave.
