MIT project reveals major moral choices facing self-driving cars of the future

A Google self-driving car, unrelated to MIT’s Moral Machine website.
Image: AP Photo/Tony Avelar

As self-driving vehicles slowly make their way onto our roads, how will developers help these autonomous vehicles make difficult decisions during accidents? A new MIT project demonstrates just how difficult this is likely to be by mixing gaming with deep moral questions.

The Moral Machine presents you with a series of traffic scenarios in which a self-driving car must choose between two perilous options.

Should you avoid hitting a group of five jaywalking pedestrians by crashing into a concrete divider that will kill two of your passengers? If there’s no other choice, do you drive into a group of young pedestrians, or elderly pedestrians? Do you veer to avoid a group of cute cats and dogs, or hit a doctor, a man and an executive? With only two choices, do you hit a large group of homeless people obeying the traffic laws or a small child jaywalking against the traffic light?

These are the kinds of split-second moral decisions self-driving vehicles will inevitably be forced to make, and MIT’s experimental site exposes just how dark things could get. The game includes everything from pregnant women to small children, and at the end you get to see how your choices stack up against those of others who have played.

So far, most users’ judgments lean toward saving female lives over male lives, saving the young over the elderly, and saving humans over pets. However, when it comes to the question of protecting passengers versus pedestrians, players were divided roughly 50/50.

A moral dilemma for animal lovers, or a hard line in favor of humans for future robots?

Image: MIT Moral Machine

You can also design your own scenarios (think cats vs. dogs).

According to MIT, the experiment was designed to provide “a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.”

MIT doesn’t specifically call its project a “game,” but when you’ve finished a complete set of scenario questions, the site asks if you’d like to “play again.”

Currently, these choices are just a thought exercise, but if people like Lyft co-founder John Zimmer are right, companies rolling out autonomous vehicles will have to grapple with such questions in just a few years.

In the meantime, we can all use the site to consider the implications of tasking machines with making life-and-death decisions for passengers and pedestrians alike.
