Can We Create an Ethical Robot?

[Image: Pepper robot]

Jerry Kaplan, author of the soon-to-be-released book “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence,” wrote an intriguing essay in today’s Wall Street Journal, “Can We Create an Ethical Robot?”

Mr. Kaplan offered a few interesting examples of how a robot might behave compared to a human in certain situations:

  • Should your self-driving car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way?
  • Should a robot buy the last dozen muffins at Starbucks while a crowd of hungry patrons looks on?
  • Would it be OK to instruct a self-driving car to re-park itself every two hours in an area that restricts parking in one spot to two hours?

There are a few more examples in the essay, and I am sure you could imagine several situations yourself.

Mr. Kaplan points out that it probably is not enough to just program the robot to follow rules, since sometimes the best decision involves breaking a rule.

The essay raises a lot of interesting points, as do the comments at the end of the article. While some people seem to believe the best answer is to delay the development of robots as long as possible, I do not think that is a realistic view, nor the right view.

Robots are already here, and their capabilities will only continue to increase dramatically.

But despite those dramatic developments, I think having a robot think ethically is a long, long, long time off – in fact I’m not sure it will ever happen.

It seems as if Mr. Kaplan is perhaps really asking a broader question – can we get robots to think/act like humans?

I think one of the biggest problems is that we can’t teach humans what the most appropriate decision is in every situation because it’s not always clear-cut. That’s why we have the phrase “ethical dilemma”. If we always knew what the right response was in every situation, it wouldn’t be an ethical dilemma.

If you asked a hundred people about the three situations noted above, I’m sure you would get a variety of answers that all seem perfectly rational and ethical.

So if the people given the responsibility to make robots more human and ethical have a diversity of opinions and values, and thus differ on what the right action is in a given situation, then whose ethics do you “teach” these robots?

Also, humans do not always behave rationally, nor do they always do the right thing.

So if we want robots to be more like us, do we program randomness into their decision-making process, so that they occasionally make “bad” or illogical choices?
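Just to make that idea concrete, here is a minimal, purely hypothetical sketch (in Python) of what “programming in randomness” might look like. The function name, the epsilon value, and the example options are all invented for illustration; nothing here comes from Mr. Kaplan’s essay.

```python
import random

def choose_action(options, epsilon=0.05):
    """Pick the highest-scoring option most of the time, but with
    probability `epsilon` pick one at random, as a crude stand-in for the
    occasional "illogical" human choice.

    `options` maps an action name to a numeric score (how "good" the
    robot's own rules think that action is).
    """
    if random.random() < epsilon:
        # the deliberately imperfect, human-like branch
        return random.choice(list(options))
    # the "rational" branch: follow the rules and take the best-scoring action
    return max(options, key=options.get)

# Hypothetical example, loosely based on the parking question above
options = {"stay_parked": 0.9, "re_park_every_two_hours": 0.4}
print(choose_action(options))
```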

Since humans are not perfect decision makers, it seems impossible to program a robot to make the perfect decision every time, because again, in many situations there never is “the” answer.

One thing I am sure of – there will be lots of growing pains as we see more widespread use of robots, but I think that has been true of every new technological development.

But I have an inherent trust in the human race that collectively, we will find the best (not perfect) way to take advantage of the power and promise of robots.

But if robot designers are looking for a starting point to make robots behave in an optimal way, I do have a suggestion.

Perhaps the robots could be programmed to assess a situation by first answering the question, “What would my mom do?”

