As more and more companies experiment with self-driving cars, new questions arise about how the vehicles should respond in certain situations. If someone runs into the middle of the street, should it swerve and kill the driver or keep going and kill the pedestrian? (from Business Insider)
I’ve written a little bit previously about the ethics of self-driving cars, and in that post I mentioned a TED-Ed lesson that explored such issues.
There are some difficult moral dilemmas associated with this breakthrough technology, and so researchers at MIT have created a fascinating website called “Moral Machine” to let humans decide what the cars should do in various scenarios.
The Moral Machine is a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.
The web site will show you moral dilemmas where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.
The web site also allows you to design your own scenarios, which you can then share with others for viewing and discussion.
Here’s a short video that provides an overview of the Moral Machine.
I went through the 13 scenarios (it takes just a few minutes), and there were some challenging decisions to be made. If you would like to see my results and how they compared to those of others who have gone through the scenarios, just click here. I also shared them on Twitter, as shown below.
— Jim Borden (@jimborden) September 19, 2016
There are no right or wrong answers to these types of questions, but it is a great idea to get these sorts of discussions going before the technology becomes too widespread.