Confused of Bloomsbury

I recently attended a talk given by Edmond Awad, a member of the team that, amongst other things, carried out the research project entitled Moral Machine.  Run from MIT, Moral Machine is an online survey that presents respondents with a series of randomly generated dilemmas based on the so-called trolley problem.  It sets up a series of scenarios in which an automated vehicle is on a collision course and the respondent is asked to choose between two courses of action that will have differing effects on both the occupants of the vehicle and those crossing its path.  In addition to being told the number of individuals at risk of fatal injury, respondents are also presented with information about those individuals’ characteristics, ranging from age and gender to species (human, cat, dog), status, level of fitness and even housing status.

The survey has attracted a very large number of responses, with a paper in Nature reporting on the first 40 million “dilemma responses”.

This member of the audience found the whole thing very intriguing and came away thinking that trolley problems remain the question after the question.  That is, before we ask whether an automated system should prioritise on the basis of age or gender, say, we should be asking whether automated systems should be allowed to prioritise at all.  But the trolley problem has demonstrated an enduring appeal to professional and armchair philosophers alike for many years and we should not deny them their fun.

I got confused, though, after I asked Dr Awad whether any respondents had pointed out that no automated system would ever be able reliably to identify whether Person A was homeless (this being one of the attributes tested by Moral Machine).  He replied that this wasn’t the issue: what his experiment was testing was people’s values and, in effect, it concerned itself more with the aftermath of any crash (whether the “right” individuals had died) than with the algorithm the automated system might employ.  Confused of Bloomsbury responded that the homeless/not homeless attribute in the survey therefore wasn’t included to test people’s attitudes to what automated vehicles might do.  And Dr Awad replied that none of it was: no one responding to the survey actually expected automated vehicles to behave as depicted in the scenarios.  (I hope I represent his words fairly.)

If my understanding of what he said is correct, I have two comments.  The first is that there is a world of difference between something not being possible now and not being possible ever.  I suspect that most, if not all, of those who responded to the survey did so on the basis that an automated vehicle might one day be designed such that at least some of their preferences could be enacted.  So I’m very dubious about seeing the entire experiment as speculative.  If it were, why use automated vehicles at all?  Why not use the conventional out-of-control trolley, if all that is sought is an understanding of people’s preferences concerning who should die in such scenarios and of their attitudes to acts and omissions?  But I can’t believe that the AV element is nothing more than a MacGuffin: members of the research team have done other work on AVs and are clearly interested in ethical questions relating to this technology.

So, second, if I’m right that the researchers are asking about AVs because they are interested in AVs, the fact that the scenarios used in Moral Machine are manifestly beyond what artificial intelligence is likely ever to be able to do makes their inclusion at best frivolous and possibly irresponsible.  Automated vehicles are the subject of intense discussion at present precisely because a variety of actors are hard at work bringing them onto our transport networks, with the high probability that humans (homeless or otherwise), dogs and cats will be involved in collisions.  I think it’s therefore incumbent on any researcher purporting to do serious research on the subject to stay within the bounds of what is reasonable.  To give the team the benefit of the doubt, I suppose the research could meaningfully illuminate the debate if we were envisaging a future world in which humans are tagged with their attributes, including their housing situation, thus relieving the vehicle of having to discern such details.  But that doesn’t even bear thinking about.
