10. The Trolley Problem
The trolley problem is a well-known thought experiment in ethics, first introduced by the British philosopher Philippa Foot. ("Trolley" is the British term for a tram.) The problem goes like this:
A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch to divert the trolley onto a different track. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?
A common answer takes the utilitarian approach, where flipping the switch becomes the obvious option because saving five lives results in a higher utility than saving just one life. But critics of utilitarianism believe that flipping the switch constitutes participation in the moral wrong, making you partially responsible for the death, when otherwise the mad philosopher would be the sole culprit. An alternative view holds that inaction under such a circumstance is also unethical. The bottom line: whatever you do, it is unethical. You are doomed either way.
It is reasonable to guess that your choice might vary if the single person happened to be your kid and the group of five consisted of four complete strangers plus your mother-in-law. In that case, you are simply assigning different utility values to different people (with the possibility of a negative utility). You no longer assume all people are equal. And if the group of five also included two other kids of yours, you simply assign the utility values, do the math, and then make the "logical" decision (man, I am so cruel here!). This reminds me of a famous darn question people always get asked: if both your mother and your wife fall into the river and neither one knows how to swim, who should you save first? If you are ever asked this question, here's one answer you could use:
I'll jump into the river and drown myself, and we'll all go to heaven together. Now are you satisfied?

When it comes to artificial intelligence, the choice is often made based on a utility computation. Maybe the utility is computed using some fancy statistical functions. More advanced algorithms might take into consideration probabilities or utility functions derived from past observations. Even more advanced algorithms might allow the agent to dynamically change or evolve its utility functions as time progresses -- a sense of learning. The agent simply computes the utility values following whatever formulas it comes up with and then chooses the option that results in the highest utility. This is why AI agents and robots are normally considered very logical and, at the same time, very inhuman.

It will probably be a long time before an AI agent finds itself trapped in this moral dilemma. (Remember the big computer in the movie WarGames? It eventually figured out that the best winning strategy for the game of Tic-tac-toe was not to play the game at all.)
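To make the utility computation concrete, here is a minimal sketch of such an agent applied to the trolley scenario. The utility values are invented for illustration (one point per life saved); a real agent would derive them from learned or probabilistic utility functions rather than hard-coded numbers.

```python
# Minimal sketch of a utility-maximizing agent.
# The utilities below are hypothetical: each life saved counts as +1.

def choose_action(actions):
    """Return the action with the highest utility value."""
    return max(actions, key=lambda a: actions[a])

trolley_options = {
    "do_nothing": 1,   # the one person on the side track survives
    "flip_switch": 5,  # the five people on the main track survive
}

print(choose_action(trolley_options))  # a pure utility maximizer flips the switch
```

Note how mechanical the decision is: change the numbers (say, assign your kid a utility of 10) and the same code coldly reverses its choice, which is exactly the "logical but inhuman" quality described above.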
So how would you design an AI agent or robot to be able to deal with morality, especially when you are giving it a weapon and granting it permission to fire that weapon? Even we humans don't have clear answers in situations like the Trolley Problem. Can we expect or require the agent or robot to do better than us? Unfortunately, no one knows the right answer at the present time; we can only learn from our mistakes. Let's hope these mistakes are recoverable rather than disastrous.
Read Part 2: The Gettier Problem