A conference that was due to be in Los Angeles, but took place on Zoom, natch. This was part of a series of meetings, funded by the US National Science Foundation, on Mathematical Challenges and Opportunities for Autonomous Vehicles. My talk was on the ways in which so-called ‘autonomous’ vehicles are inextricably attached to various social and technical infrastructures.
As part of a global citizens’ debate on driverless mobility coordinated by Missions Publiques, we worked with Involve, the public participation think tank, and Transport for Greater Manchester to run a day of public dialogue. The report has just been published by Involve.
Issues in Science and Technology invited me to respond to a recent piece on AV safety assurance by Marjory Blumenthal and Laura Fraade-Blanar. I’ve pasted my letter here. It’s also worth reading the response from the excellent Professor Missy Cummings.
There’s an old Irish joke in which a man stops to ask for directions and is told, ‘Well, I wouldn’t start from here’. When looking for guidance on governing new technologies, our starting points are often the problem. It is hard to escape our existing technological systems and our existing frames of reference. This is the case with automated vehicles. AVs promise transformative benefits, in particular for road safety. But they also bring a critical safety question of their own: how safe does an AV have to be to be safe enough?
In ‘Can Automated Vehicles Prove Themselves to Be Safe?’ (Issues, Summer 2020), Marjory S. Blumenthal and Laura Fraade-Blanar are right to argue for a concerted approach to AV safety, and they are right that public trust will be crucial. But trust cannot be bought; it has to be earned. The process of standard-setting must be an inclusive one. Safety is too important to be left to technology developers alone. The test will not be whether AV developers can prove to themselves that their technology is safer than conventional driving. The question ‘How safe is safe enough?’ is one for society at large, and it is profoundly uncertain, in part because it is connected to an even more complicated question: ‘Safe enough for what?’
The comparison with conventional cars is a poor starting point. The risks of cars are, as the novelist J. G. Ballard once put it, a ‘pandemic cataclysm’. More than a million deaths a year is a huge price, even if the benefits of the technology are clear. Developers of AVs may be aiming to clear the extremely low bar of overall road safety, but citizens will have other ideas. Aggregate improvements in safety, even if they are substantial, will do little to reassure parents when a child is killed by an AV and it is not clear who is responsible.
Everything we know about public risk perception tells us that people will evaluate the risks of new and potentially inequitable systems very differently from the risks of driving. In a car, we kid ourselves that we are in control of our destiny, which is one reason why we are willing to accept, according to some measures, more than 100 times greater risk than on an airplane or a train. It is not at all obvious that, as Blumenthal and Fraade-Blanar suppose, people’s perceptions of AV risk will be ‘grounded… in people’s experience with the safety of traditional automobiles’. What if people think of AVs like trains and conclude that each death marks a new catastrophe? The authors’ analogy may work in one important sense: with AVs, we may well see extreme distrust of attempts by powerful industries to govern themselves in the public interest.
Being trapped in an automotive frame of reference has other implications for how we think about AVs. Much of the excitement surrounding AVs comes from artificial intelligence. The story is that AI will be able to mimic and then surpass human capability without getting drunk, distracted, or sleepy. However, where we have seen automated systems deliver real benefits for transport, it is not thanks to super-intelligence, but rather because of a well-understood, well-defined system. Blumenthal and Fraade-Blanar talk about the gradual expansion of an AV’s operational design domain (ODD) as the technology becomes more sophisticated. My concern is that in many cases an ODD will be constrained to suit the AV, rather than the AV improved to match an ODD. In the name of safety, AVs may be given their own lanes, cyclists may be shepherded away from roads, and pedestrians may be asked to behave more predictably.
If we do not have an open, democratic debate about AV safety, we risk getting pushed around by the hype of the industry.
Science and Technology Studies
University College London
I was asked to do the keynote talk at the Shift Mobility 2020 conference in Berlin on 3rd September. Via Zoom, natch. I spoke about the seductive myth of autonomous vehicles and how it could lead to bad policy decisions. Here’s a loose version of what I said.
A just-so story
In 2007, at a disused Air Force base in California, teams of researchers from around the US came together to compete in DARPA’s third driverless car competition, the Urban Challenge. Two earlier Grand Challenges had taken place in the desert, with the challenge being to get a robot car to find its way along a fixed course. This time, the challenge was to navigate an ersatz town, with junctions, other competitors’ cars and actual human-driven cars to contend with. There were a few bumps and scrapes, but six teams completed the three ‘missions’, travelling 55 miles without drivers. According to one enthusiastic account, which I reviewed here, this was “the moment… when everything changed”. The world began paying attention. Self-driving cars switched from impossible to inevitable. The competition’s team members went on to populate the tech companies that would funnel billions of dollars into a race to develop the technology.
The promise that a truly self-driving car was just around the corner was based on an observation that artificial intelligence was advancing exponentially. Following a computer’s high-profile 1997 victory over the world’s greatest chess player, headlines were announcing that humans were being superseded in more complicated games like Go, as well as in tasks like translation, voice recognition and medical diagnostics. Huge increases in computer power and available data allowed machine learning to take off. At CES 2018, the CEO of Nvidia, a chipmaker, announced that AI would soon solve self-driving.
For tech people, this was a fascinating test case for machine learning in the wild. And the social justifications seemed clear. Humans are unreliable drivers. They get drunk, distracted and old. Computers mean reliability, safety and efficiency. Consultants crunched the numbers and concluded that self-driving cars would enable an 80% reduction in the number of cars, a repurposing of the space currently devoted to parking, hundreds of thousands of lives saved and trillions of dollars of economic benefits.
YouTube is replete with videos of self-driving cars in action. This one (below) is from Tesla. It shows something remarkable: an artificial intelligence sensing and classifying objects in real time, predicting their future movements and planning a safe path through them, in bad weather, on complicated streets, with pedestrians, cyclists and other hazards. Tesla’s sensors aren’t particularly complicated. Their argument is that if human eyes are good enough then video cameras will do the job just fine. The system’s power comes from its ability to learn from data gained from sensors across its whole fleet. When one Tesla learns something about the world, they all learn it.
This story is of a plug-and-play technology, learning about and adapting to the world in all its complexity. It will change the world without needing to change the world. The story is exciting and seductive. It is also, crucially, not true.
The real history of the technology is longer and more complicated than the simple story suggests. It is a history not just of smart cars, but of smart roads. In 1956, General Motors imagined a self-driving future in its Motorama exhibit. Notwithstanding the rather fixed social roles in its utopia – men up front smoking cigars; ladies in the back – this automotive dream comes from an age before the US had given up on infrastructure. The system on offer involves “high speed safety lanes” that would allow drivers to go hands-free and serve ice cream from their glove compartment.
General Motors, an old-school carmaker, still knows that for its technology to work, the world needs to meet it halfway.
An ‘autonomous vehicle’ is far from autonomous. For the technology to work, it needs to be embedded in the social and technical world – its physical and digital infrastructures, its legal rules and its social norms and practices. These things differ from place to place, making a universal technology impossible. Nor is the technology inevitable. Its development is driven by powerful commercial interests, which may not align with the public good. We can imagine that, if the technology’s claims go unquestioned, there will be pressure to change the world to suit self-driving cars.
Take the Roomba, a robot vacuum cleaner that has quickly become a part of many homes in the affluent world. Social scientists studying how people use the Roomba have found that it is not a simple matter of buying a robot to solve a dirt problem. It takes work to make rooms navigable and machine-readable for the Roomba; users had to adapt their lives so that the robot could do its job. Some of the details are interesting:
‘One of our participants told us that she threw away her rug in the living room because her Roomba kept “getting frustrated” with the length of the shag, getting it caught in its brushes. Another participant taped down the entire tassel on the carpet every time he ran the robot. Also, we had a participant who replaced the old refrigerator with a new one that had enough space underneath for Roomba.’ — Sung, Ja-Young, et al., ‘“My Roomba is Rambo”: Intimate Home Appliances’, International Conference on Ubiquitous Computing, 2007.
What does this mean for self-driving cars? The question is not when the technology will arrive, but where, for whom and in what form? In places where self-driving cars are being tested, streets are being mapped in exquisite detail and kitted out with smart traffic lights. Places are being chosen for their weather, the tidiness of their road junctions, the predictability of their pedestrians and the affluence of their potential consumers. The transition towards a self-driving future will be patchy and uneven.
Upgrading our mobility
When motor cars began arriving in US cities more than a century ago, streets were messy places in which multiple modes of transport interacted. The historian Peter Norton describes how, in the name of efficiency and safety, streets were reorganised to suit the car. The motor lobby fought hard and pedestrians lost out.
Many cities are now trying to extricate themselves from a dependence on cars that has been built into their architectures, their economies and their cultures. If we are to realise the advantages of self-driving cars without repeating the mistakes made with their predecessors, we must not sleepwalk into the technology. It is not clear what the right approach is: in the US the autonomous ideal has taken hold while in China there is a more infrastructure-first approach. But rather than starting with an imaginary technology, we should start with people’s mobility needs.
An edited version of this review was published in the Guardian, 16 July 2020
Maurice Gatsonides was one of the world’s first professional rally drivers. He won the 1953 Monte Carlo rally, but made his fortune and his name with an invention that would torment other motorists. The Gatso was the first speed camera. Using a combination of flash photography, radar and road sensors, the machine was able to enforce speed limits more effectively than any traffic cop. Gatsonides himself lamented: “I am often caught by my own speed cameras and find hefty fines on my doormat. Even I can’t escape my own invention because I love speeding.” Ever since there have been speed limits, there have been attempts to evade them. The Automobile Association was created in 1905 specifically to warn motorists about speed traps. Speed cameras make our roads safer, and when we are caught it is almost always a fair cop, but they generate intense anger. During the gilets jaunes protests that began in 2018, more than 60% of France’s speed cameras were vandalised.
Matthew Crawford would have huge respect for Gatsonides, while despising his invention. Bruno Latour wrote that “no human is as relentlessly moral as a machine”. A speed camera brooks no argument, and Crawford loves to argue. In Why We Drive, he recalls being clocked doing 86mph on his motorbike in a 55 zone and explaining to the traffic cop how the laws of physics should trump arbitrary speed limits. He is delighted when the case reaches court and he can go full Atticus Finch. He imagines the judge’s delight at his presence: “What they rarely get in traffic court is an argument, or an attempt at rhetoric – the stuff that presumably made them want to go to law school.” He then declares, as though it was someone else’s fault: “A couple of months later, it happened again.”
Crawford’s project is to build a conservative philosophy of technology to meet what he calls “the challenge of remaining human against technologies that tend to enervate, and claim cultural authority in doing so”. Why We Drive follows his bestselling Shop Class as Soulcraft (published in the UK as The Case for Working with Your Hands) and The World Beyond Your Head in making the case for freedom in a technological society. This time, he takes aim at the new digital technologies that threaten his relationship with the old technologies he loves: cars and motorbikes. The book contains some terrific moments – an explanation of how human and machine contrived to crash two Boeing 737s; a description of the suck-squeeze-bang-blow four-stroke that happens a billion times in an engine’s life – but its big and important argument is muddied by the author’s prejudices.
The book’s middle is like being taken for a drive by a hyperactive, unreliable uncle. You’re going a way you don’t recognise and you’ve been told not to put on your seatbelt because seatbelts are for squares. As you cling to the sides of your seat, he takes you past a stock car rally, then an off-road race course in the desert. Just as you’re approaching something familiar, there’s a handbrake turn and off you go down a detour on the Nazis, a bit on utilitarianism, then a lesson on how to drift. He rolls down the window to shout something at some “bicycle moralists”. There is a merciless digression about rebuilding a VW Beetle and some of the author’s drawings of gears. His justification for all of this is what he calls “philosophical anthropology”, but its real purpose is to tell you what he approves of – mechanics, motorsports, motorbikes, the autobahn, real men and old-fashioned women – and what he doesn’t like: bicycles, the pencil pushers at the DMV, “safetyism” and the town of Portland, Oregon.
He is selective in his nostalgia and romanticises bits of the present that other freedom-loving individuals would object to. In his view, it’s fine to destroy speed cameras, but it’s not fine for grown-ups to ride bicycles. His description of the traffic in Rome having an “improvisation and flow that is beautiful to behold” suggests he has never tried to cross the road there. He admires London’s taxi drivers for their Knowledge and their enthusiasm for Brexit (see the subtitle, “take back control”) and mourns their slow death by GPS. (Just wait until the Brexiteers find out about the EU laws, which Britain has signed up to, mandating ‘intelligent speed assistance’ in new cars from 2022).
We do not have to see cabbies as heroes to critique Uber’s data-hoarding gig economy. Nor do we have to admire rule-breaking in general to see that some of the devices that constrain our driving are counterproductive. The “shared space” movement in Europe has been making the case for decades that roads need more uncertainty, not more rules, in order to slow down cars. Crawford would appreciate their motto – “unsafe is safe” – but he isn’t much interested in shared space. He wants his own space. If, as he argues, “the road is a place of mutual trust”, what should we think of a motorcyclist doing 86 through a 55mph speed limit?
On the road, individual freedoms do not add up to the public good. Traffic is a collective action problem, and individual risk assessments are notoriously unreliable. Crawford sees driving as a “skilled human activity” and would like more drivers to become as skilled and confident as he is. But “expert” drivers – the sort who palm the steering wheel as they reverse – are part of the problem, not least because they despise other drivers. A survey by a Swedish researcher found that 69% of his compatriots, who are among the world’s safest drivers, thought they were better-than-average drivers. In the US, where road death rates are more than four times Sweden’s, the figure was 93%.
The benefits of cars come with an extraordinary potential to do harm. We can be legitimately horrified that more than a million people a year die on the world’s roads, while also being surprised that most of us, most of the time, do not get into danger. Crawford hates what he calls the “safety industrial complex” that has taken all the joy out of driving. Here he is at odds with historians and activists such as Ralph Nader who have catalogued the US car industry’s opposition to regulations and safety innovations that aim to protect drivers from their own delusions.
Cars, for Crawford, are liberty: “The gas pedal and the steering wheel are wired directly to your will, via the seat of your pants, and there is no committee involved.” But it takes a lot of staging to maintain the performance of freedom. Driving, even in rural America, requires a byzantine arrangement of infrastructures and rules, in addition to the common sense that Crawford admires. The German philosopher Max Horkheimer wrote as long ago as 1947 that “It is as if the innumerable laws, regulations and directions with which we must comply were driving the car, not we.”
Many of us do not share Crawford’s need for speed. We want our cars to be safe and we are slaves to, rather than masters of, this technology. Cars enable freedom of movement but they also homogenise our lives. So many of our lives and places are structured around the car and we collectively find it hard to build the alternative forms of mobility that would allow us to escape traffic, pollution and danger. Are self-driving cars the answer?
Crawford is understandably worried that the same tech companies that have taken over navigation, devaluing the cab drivers’ Knowledge overnight, are now coming for the rest of the world’s drivers. It’s not that simple. Having generated huge hype, self-driving evangelists are now starting to admit that their challenge is harder than they first made it seem. Teaching a computer to drive is not like teaching a computer to win at chess. However, there is a risk that, as with the car a hundred years ago, the rules of the road will eventually be changed to suit a new technology and, in doing so, will impede other ways of getting around. Freeing us from driving could require enormous digital infrastructures and monopolistic control. The people selling “autonomous vehicles” should heed, and be worried by, Crawford’s argument for autonomous people.
Car culture, as with many traditions invoked by conservatives, is a relatively recent invention, propped up by powerful industrial interests. Arguments against the claims of new technologies need not be as reactionary as Crawford’s. In the space between a souped-up vintage Beetle and a speculative self-driving Uber, we can imagine a range of progressive possibilities. Nostalgia may not be a good guide.
Warren Buffett once said that it’s only when the tide goes out that you see who’s swimming naked. It’s a phrase that is being repeated a lot at the moment, despite it being nonsensical: who swims until the tide goes out around them? Regardless, the gist is that a company that may be able to survive while the economy is rising will have its flaws revealed when a recession hits. Self-driving car companies have benefited from huge investments in the last five years or so. Covid and its accompanying recession are likely to lead to bankruptcies, consolidations and corporate fire sales. However, I wonder if Buffett’s maxim is holding. Maybe the companies that survive will not be those with the best swimwear, but those who are most self-confident, regardless of their nudity. (Sorry. The analogy may have already snapped). This could be bad for innovation and bad for local transport.
We have already seen the demise of Starsky Robotics, a company that was trying to make self-driving technology safe and effective in a well-defined context: trucks on big roads. Other companies are now feeling the heat. A recent VentureBeat piece describes some of the troubles faced by May Mobility, whose approach to AV innovation I have admired for a while.
A year ago, I took a train from Boston to Providence, Rhode Island to ride in one of May Mobility’s prototype self-driving shuttles. ‘Little Roady’ was a year-long collaboration with the Rhode Island Department of Transportation to provide a free shuttle running a five-mile, ten-stop lap of the town. The body of the vehicle was a sort of low-speed, electric stretch golf cart. The brain and the sensors were added by May Mobility with the hope of eventually enabling driverless transport.
The publicity describes this as an autonomous vehicle, with the person in the driving seat there to monitor things, talk to passengers and take over if needs be. The reality is more complicated. The two shuttles I tried – one out, one back – revealed plenty about the possibilities and the limits of self-driving technology.
Station to station
The shuttle stop is right outside the train station. I only have to wait a couple of minutes. The first shuttle is far from autonomous. Our driver stays in manual mode, steering the thing with its handlebars. She seems uninterested in pushing the envelope and is keen to talk about what the vehicle can’t yet do. It can’t turn across traffic and it can’t enter a road at a junction without traffic lights. A few hundred yards away from the train station, we have to stop because there’s a street parade. Self-driving car engineers call such things ‘edge cases’ – new experiences, unaccounted for in their model.
I hop off near Olneyville Square, get some lunch and take a wander. I check the website to see when the next shuttle is on its way.
Shuttle two is a bit closer to the technological sublime. This driver lets me ride up front with him. He puts the car in ‘auto’ and shows me the little green ‘A’ lighting up on his screen. He says the aim is to keep it in auto as much as possible, to train as well as test the machine. The company is hungry for data, not least to show the Rhode Island department of transportation how well they’re doing.
When we come to a four-way stop junction, he doesn’t have to take over. He tells me that the sensors can see the other cars’ indicators. The computer identifies a gap and the vehicle takes its opportunity with aplomb. He admits that the system finds things easier in light Saturday traffic. Maybe the four-way stop would have been trickier in rush hour.
He says the system is getting better all the time. It used to overreact when someone was tailgating or be disabled when a road sign was partly obscured by a tree branch. He says it still isn’t great with bicycles. The last bit of the loop was covered in roadworks, so the driver switches to manual before we get there.
Just as we approach the train station, which marks the end of the loop, a picture of red traffic light appears on screen. I ask whether the vehicle sees the colour or detects it in some other way. The driver points to a box on a lamppost that transmits the signal from the light to the vehicle. ‘Autonomous’ vehicles, we should remember, are never completely autonomous.
Learning by doing
The Little Roady pilot has been cut short by Covid. This is a shame, as it seemed to be a genuine public experiment. In many cases, ‘trials’ of self-driving vehicles are not real tests of the technology; they are tests of public acceptance or just public displays of a technology. If failure is not an option, then little can be learnt. Other companies have tightly choreographed their ‘trials’, protecting them from technical and social complexity with safety marshals for pedestrians or non-disclosure agreements for passengers. (Noortje Marres and Declan McDowell-Naylor have been studying the sociology of self-driving car trials).
May Mobility were trying something different. They were trying to see if they could make an early technology work in the real world, for a real use case, with real people. No good deed, it seems, goes unpunished. The company found that some of its most frustrating glitches were with old rather than new tech. In hot weather the shuttles’ air conditioning had problems and in cold weather their batteries failed. VentureBeat criticised May for targeting “fixed-route transportation needs in geofenced, easily mappable business districts, campuses, and closed residential communities.” The criticism that “not a single one of the company’s commercial routes approached full autonomy” prompts us to ask what would count as “full autonomy”. May was attempting to serve two masters: a local transportation service with a set of requirements about air conditioning and a Wizard of Oz public discourse about autonomous vehicles. At some point, other companies will face similar reckonings. For now, however, I worry that companies will take the wrong lesson from May Mobility and keep telling grand stories rather than trying modest experiments.
I was once in a band called ‘It’s a Race!’ The daft name seemed to fit. It captured the carefree pointlessness of our music. We were guilty of some of the worst jam-band excesses. Our improvised, 10-minute songs could not cover for our lack of rehearsal or musical mediocrity. We didn’t know where we were going. To say we split up due to artistic differences is to give us too much credit.
A lot of self-driving car journalism uses the metaphor of a ‘race’. In a recent episode of the Autonocast, Ed Niedermeyer (a recent and welcome addition to Partners for Automated Vehicle Education) calls it horse-race journalism. He is rightly critical of this style. There is no point talking about a race if we don’t know where the finish line is.
Niedermeyer’s target is a recent Bloomberg piece on “The State of the Self-Driving Car Race 2020”. This gets off to a bad start, with an analogy to the space race and a quote from a consultant:
I mean, it’s literally like putting somebody on the moon. It’s that complex
It literally isn’t like putting somebody on the moon. Putting somebody on the moon, ironically, was not rocket science. That problem was hard, but it was not complex. According to Brenda Zimmerman and colleagues, some problems are simple, like following a recipe; some are complicated, like rocketry; others, like raising a child, are complex. The easy things about putting a person on the moon were that most people agreed on what success would look like and that there was nothing in the way. The hard thing – how to take an object containing a person such a vast distance and bring it back – could be solved with enough brains and enough money. Many terrestrial problems, including climate change, obesity and mobility, are wicked. There is little agreement on the definition of problems, approaches or metrics for success, and there are lots of organisations, interests and structures standing in the way.
For systems comprising self-driving cars to work effectively, safely and fairly in a range of different places, much of what needs to happen lies beyond the control of the competitors in the self-driving car ‘race’. This is one reason why the technology’s success will take far longer than the hype currently suggests.
The trouble with the Bloomberg piece, as with so much self-driving car journalism, is that it presumes the finish line is clear. If we are to make good decisions about this technology, we must first recognise that there are many different possible directions. We need to start talking about the desirable directions and working out how to bend current approaches to fit.
On April 8th 2020, ICTC (the Canadian Information and Communications Technology Council) spoke with Dr. Jack Stilgoe, Senior Lecturer in the Department of Science & Technology Studies at University College London, where he researches and teaches the governance of emerging technologies. Dr. Stilgoe is the Principal Investigator of the Driverless Futures? Project, a three-year social science project looking at the governance of self-driving cars.