

Leave me comments so I know people are actually reading my blogs! Thanks!

Friday, April 24, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 1

I ran into an interesting article in a forum (original in Chinese) that talked about 10 very famous thought experiments in the fields of philosophy, ethics, and psychology. Since I am getting a Doctor of Philosophy degree (hopefully), and also because I strongly believe these types of questions and experiments are very relevant to research in artificial intelligence, I thought I'd share them with you together with my thoughts on the subject. Hope you enjoy!

10. The Trolley Problem

The trolley problem is a well-known thought experiment in ethics, first introduced by the British philosopher Philippa Foot. ("Trolley" is the British term for a tram.) The problem goes like this:

A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?

A common answer takes the utilitarian approach, where flipping the switch becomes the obvious option because saving five lives results in higher utility than saving just one. But critics of utilitarianism believe that flipping the switch constitutes participation in a moral wrong, making you partially responsible for a death, whereas otherwise the mad philosopher would be the sole culprit. An alternative view holds that inaction under such circumstances is also unethical. The bottom line: whatever you do, it is unethical. You are doomed either way.

It is reasonable to guess that your choice might change if the single person happened to be your kid and the group of five consisted of four complete strangers plus your mother-in-law. In that case, you are simply assigning different utility values to different people (with the possibility of a negative utility). You no longer assume all people are equal. And if the group of five also included two other kids of yours, you would simply assign the utility values, do the math, and make the "logical" decision (man, I am so cruel here!); there's a tiny worked example of this math after the quote below. This reminded me of a famous darn question people always get asked: if both your mother and your wife fall into the river and neither one knows how to swim, who should you save first? If you are ever asked this question, here's one answer you could use:
I'll jump into the river and drown myself, and we'll all go to heaven together. Now are you satisfied?
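To make the "assign the utility values and do the math" idea concrete, here is a tiny, tongue-in-cheek sketch. The names, weights, and scenario are entirely made up for illustration; I'm not claiming anyone actually assigns numbers this way:

```python
# Hypothetical utility weights for each person involved (made-up numbers).
weights = {
    "my kid": 100,
    "stranger 1": 10, "stranger 2": 10, "stranger 3": 10, "stranger 4": 10,
    "mother-in-law": -5,  # the (tongue-in-cheek) negative utility
}

# Who survives under each option: the kid is on the side track,
# the other five are on the main track.
options = {
    "do nothing (trolley hits the five)": ["my kid"],
    "flip the switch (trolley hits the kid)":
        ["stranger 1", "stranger 2", "stranger 3", "stranger 4", "mother-in-law"],
}

# "Do the math": total utility of the survivors under each option.
scores = {name: sum(weights[p] for p in saved) for name, saved in options.items()}
print(scores)                       # {'do nothing ...': 100, 'flip the switch ...': 35}
print(max(scores, key=scores.get))  # the "logical" (and rather cold) choice
```

Change the weights and the "logical" answer flips, which is exactly the point.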
When it comes to artificial intelligence, a lot of the time the choice is made based on a utility computation. Maybe the utility is computed using some fancy statistical functions. More advanced algorithms might take into consideration probabilities or utility functions derived from past observations. Even more advanced algorithms might allow the agent to dynamically change or evolve its utility function as time progresses -- a sense of learning. The agent simply computes the utility values following whatever formulas it comes up with and then chooses the option that results in the highest utility. This is why AI agents or robots are normally considered to be very logical and at the same time very inhuman. It would be a long time before an AI agent would find itself trapped in this moral dilemma. (Remember the big computer in the movie War Games? It eventually figured out that the best winning strategy in Tic-tac-toe was not to play the game at all.)
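As a rough sketch of what such a utility-driven agent might look like (purely illustrative: the actions, outcome probabilities, utility numbers, and the crude update rule are all my own assumptions, not any particular AI system), consider:

```python
# A hypothetical expected-utility agent. Every number below is invented
# for illustration only.
def expected_utility(action, model):
    """Sum of probability * utility over the possible outcomes of an action."""
    return sum(p * u for p, u in model[action])

def choose(model):
    """Pick the action with the highest expected utility."""
    return max(model, key=lambda a: expected_utility(a, model))

# (probability, utility) pairs per action -- e.g. flipping the switch might
# occasionally fail to divert the trolley.
world_model = {
    "flip switch": [(0.9, -1.0), (0.1, -5.0)],  # usually one death, rarely five
    "do nothing":  [(1.0, -5.0)],               # five deaths for sure
}

print(choose(world_model))  # -> "flip switch"

# A very crude stand-in for "learning": after observing which outcome actually
# happened, nudge that action's outcome probabilities toward the observation.
def update(model, action, observed_index, rate=0.1):
    model[action] = [
        (p + rate * ((1.0 if i == observed_index else 0.0) - p), u)
        for i, (p, u) in enumerate(model[action])
    ]
```

The agent just grinds through the numbers and picks the maximum; nowhere in the loop does it pause to ask whether the computation itself is the morally right way to decide.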

So how would you design an AI agent or robot to be able to deal with morality, especially when you are giving it a weapon and granting it permission to fire that weapon? Even we humans don't have clear answers in situations like the Trolley Problem. Can we expect or require the agent or robot to do better than us? Unfortunately, no one knows the right answer at the present time; we can only learn from our mistakes. Let's hope those mistakes are recoverable rather than disastrous.




[Update on 8/2/2019]

Ten years have passed since I first posted this blog article. Today, many "self-driving" cars are already running on our roads (have you noticed those napping drivers in Teslas right next to you?), and there are only more to come, with VCs and automakers pouring money into this field. Now the Trolley Problem is becoming as real as it can be. When a self-driving car is faced with the dilemma of choosing between killing the person on the left or the four persons on the right, or even worse, when it needs to decide whether it should sacrifice you, the passenger, in order to save four pedestrians, how would you feel about its logical choice? What if you are not the passenger, but the pedestrian, instead? Don't ask me. I don't have an answer.

Read Part 2: The Gettier Problem


Picture of the Day:

You can go here to see more animated portraits like this one.

BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

3 comments:

  1. Yisong, 7:19 AM

    I think the Chinese room one also applies to AI. In fact it is almost what AI is all about...

  2. Anonymous, 8:37 PM

    We talked about this at school once. It had like two parts: the first part was, there is a train that is out of control, and on one side of the tracks are 5 people working on the railroad -- would you save them? Basically everyone said yes, but then you changed the point of view: the train is out of control and there are the 5 people fixing the track, but on the other track is someone very important to you, like your children or wife. Which would you choose? So it's like a psychological thing.

  3. That fat guy was really fat.
