Back in August, Mikaniki explored Elon Musk’s claims of going completely driverless by next year with a detailed account of the 5 stages of autonomous vehicles, briefly touching on some of the things that can (and already have) gone wrong with self-driving cars. In this piece, we’ll focus specifically on the moral side of things. In other words: who to kill, and who to save?
The role of pre-installed human biases
In the experimental game show 100 Humans, a group of contestants was given a task designed to test racial bias. Each contestant was armed with a fake sidearm and asked to point it and pull the trigger whenever they felt threatened.
On the other side of the barrier, a white man and a young African American man were asked to pop up at random from behind various obstacles, armed with a fake sidearm on some occasions and unarmed on others.
The purpose was to see how often the contestants would point their weapons at an unarmed African American man as opposed to an armed white man, and the results were as expected:
Four out of five times, a contestant pointed their weapon at the unarmed African American man. Three of those four even knew him from before!
Granted, 100 Humans was a bit of a bogey show, but it did have its moments.
We’ve all had different upbringings with pre-installed biases, be they around gender, species, race or religion, and these play a role in our moral decision-making, as you’re about to find out.
Who do you kill and who do you save?
Here’s the situation: you’re approaching a pedestrian crossing at speed when your brakes fail. You’ve now got three options:
1) Keep going straight and run into an elderly man and his young granddaughter.
2) Swerve right, sparing the elderly man and his granddaughter but running over an athletic young black man, his wife and their dog.
3) Swerve left, sparing the elderly man, his granddaughter, the fit couple and their dog, but running over two young boys dressed in religious garb.
Which option did you choose?
What about autonomous vehicles?
Whichever option you chose above, you probably want to keep it to yourself. But the fact remains that your choice rested on some underlying decision-making process that scientists are still trying to unravel to this day.
But what about a self-driving car? What happens when it’s rolling down the street, minding its own business, and a cat jumps out into the middle of the road? Does it run the cat over and keep going? Does it swerve left and knock over a woman enjoying a sandwich? Or does it swerve right and kill a homeless man in a trench coat? What guides its decision?
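To make that question concrete, here’s a deliberately naive sketch (in Python) of one way such a decision could be encoded: give every predicted outcome a numeric cost and pick the manoeuvre with the lowest total. Nothing below reflects how any real autonomous-driving system actually works; the option names, fields and weights are invented purely to show that, somewhere, someone (or something trained on our data) has to pick the numbers.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """Predicted result of a candidate manoeuvre (hypothetical fields)."""
    humans_harmed: int
    animals_harmed: int


# Invented weights: the uncomfortable part is that these have to come from
# somewhere, whether a programmer, a regulator, or training data.
HUMAN_WEIGHT = 100.0
ANIMAL_WEIGHT = 1.0


def cost(outcome: Outcome) -> float:
    """Score an outcome; lower is 'better' under these invented weights."""
    return outcome.humans_harmed * HUMAN_WEIGHT + outcome.animals_harmed * ANIMAL_WEIGHT


def choose_manoeuvre(options: dict) -> str:
    """Return the name of the manoeuvre with the lowest predicted cost."""
    return min(options, key=lambda name: cost(options[name]))


# The cat-in-the-road scenario from above, encoded as predicted outcomes:
options = {
    "brake_and_continue": Outcome(humans_harmed=0, animals_harmed=1),
    "swerve_left": Outcome(humans_harmed=1, animals_harmed=0),   # woman with sandwich
    "swerve_right": Outcome(humans_harmed=1, animals_harmed=0),  # man in trench coat
}

print(choose_manoeuvre(options))  # -> brake_and_continue
```

The point isn’t that this is how it’s done in practice; it’s that any decision policy, however it’s produced, ultimately ranks outcomes the way this toy one does, and that ranking is exactly where our pre-installed biases can creep back in.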
Be a part of the conversation
Machine learning and AI are growing at a scary rate. If The Social Dilemma on Netflix is anything to go by, they may even be spiralling out of our control, and you don’t need to watch Terminator, I, Robot or Ex Machina to realise that this is a problem.
One research site, moralmachine.net, puts it this way:
“The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.”
There’s still a lot of ground to cover, and you can be a part of this wider discussion by taking part in Moral Machine’s interactive ‘judge’ survey, where you’re presented with a set of scenarios similar to the one above.
The more information they’re able to collect, the closer they can get to understanding human choices, which can help shape the way we decide how machine intelligence deals with moral dilemmas.
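As a rough illustration of what “collecting information” can look like under the hood, here’s a toy sketch of aggregating answers from a Moral Machine-style survey. The response format and category names below are made up; the idea is simply that, across many forced choices, even simple counts start to reveal which characters people tend to spare.

```python
from collections import Counter

# Hypothetical survey responses: each tuple records which character type the
# participant chose to spare and which they sacrificed in one scenario.
responses = [
    ("child", "elderly"),
    ("pedestrian", "passenger"),
    ("child", "elderly"),
    ("elderly", "child"),
    ("pedestrian", "passenger"),
]

spared = Counter(s for s, _ in responses)
sacrificed = Counter(x for _, x in responses)

# Rough preference estimate per character type, based on raw counts.
for category in sorted(set(spared) | set(sacrificed)):
    total = spared[category] + sacrificed[category]
    print(f"{category}: spared in {spared[category]} of {total} trade-offs")
```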
If you want to read more about the 5 stages of autonomous vehicles, check this article out.