Leave me comments so I know people are actually reading my blogs! Thanks!

Tuesday, May 05, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 4

Read Part 3: The Ticking Time Bomb

7. Einstein's Light Beam

According to his book Autobiographical Notes, Albert Einstein’s famous work on special relativity was inspired by a thought experiment he conducted when he was only 16 years old. He thought about chasing a beam of light as it traveled through space, and reasoned that if he were able to move next to it at the speed of light, he should be able to observe the light frozen in space as “an electromagnetic field at rest though spatially oscillating.” For Einstein, this thought experiment proved that for his imaginary observer “everything would have to happen according to the same laws as for an observer who, relative to the Earth, was at rest.”

The video below gives a good example of special relativity.



I can't think of how this relates to AI directly, but the lesson here is that simple thought experiments can lead to extraordinary findings. So I'll keep running all kinds of thought experiments in my head, such as whether the properties of intelligence should remain the same regardless of which species is displaying the intelligent behavior.

Read Part 5: The Ship of Theseus


 

I love staring at the starry sky, because that lets me stare right into the past.






BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Sunday, May 03, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 3

Read Part 2: The Gettier Problem


8. The Ticking Time Bomb

Imagine that a bomb or other weapon of mass destruction is hidden in your city, and the timer on it will soon strike zero. You have in your custody a man with knowledge of where the device is planted. Do you resort to torture in order to get him to give up the information?

Sounds like an action movie? It's actually just another thought experiment, called "the ticking time bomb." Like the trolley problem, the ticking time bomb scenario is an ethical problem that forces one to choose between two morally questionable acts.

Thanks to many movies and TV shows, the ticking time bomb scenario is one of the most discussed thought experiments. The US government has laws against torturing prisoners (although we all know how well that law is followed; think Abu Ghraib), but would breaking the law be justified if a large number of lives could be saved? A British news article extended the scenario and asked if one would be willing to resort to torturing the man's wife and children as a means of extracting the information from him. This sounds really scary now!

According to Wikipedia:
In September 2002, when reviewing Alan Dershowitz's book, Why Terrorism Works: Understanding the Threat, Responding to the Challenge, Richard Posner, a judge of the United States Court of Appeals for the Seventh Circuit, wrote in The New Republic, "If torture is the only means of obtaining the information necessary to prevent the detonation of a nuclear bomb in Times Square, torture should be used--and will be used--to obtain the information.... No one who doubts that this is the case should be in a position of responsibility."

Morgan from the Center for American Free Thoughts gave an interesting discussion in the video below (sorry, I can't find a commercial-free version, so this will do).


I don't know which side I should support. It is quite a dilemma. However, I am one hundred percent sure that we shouldn't allow the government to casually slap the label of "terrorist" on innocent citizens and then torture them or take away their basic rights and claim it is "justified".

Read Part 4: Einstein's Light Beam


Picture of the Day:

This spectacular photo looks like it came straight out of a sci-fi movie. It was actually taken by a Russian girl who snuck into a Russian military rocket factory.



BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Saturday, May 02, 2009

Robot of the Day: Air Hogs Gyro X RC Helicopter

Maybe because of the frequent sightings of helicopters flying over our house, my three-year-old son developed a special interest in helicopters and always wanted one of his own. And since my research is related to robotic airplanes, I had a strong urge to own a robotic airplane myself. So when I found out about the Black Friday deal on the Air Hogs Gyro X RC Helicopter, I jumped on it, under the claim that it was really a present for my son's third birthday!

The RC helicopter normally sells for $40, but I was able to get one at half price thanks to Black Friday sales at RadioShack. It is a remote-controlled toy aircraft, so the kit includes a controller that lets the user control the throttle (to fly the helicopter up and down), together with an omnidirectional stick to control the flight direction. The controller also has a wheel on the side that lets you trim the plane (adjust its balance so it doesn't keep rotating in one direction). The controller doubles as a charger: powered by four AA batteries, it has a wire that connects to the helicopter to charge the on-board battery.

This RC toy can actually be categorized as a robotic device because it has a built-in "gyro" electronic stabilization system for smooth flight. The "gyro" sensor can sense the rotation of the plane and then adjust the speed and direction of the small tail propeller to automatically stabilize it. What this means is that a beginner can focus on the throttle of the helicopter (controlling the altitude) and not worry about keeping the helicopter upright. In a sense, the "autopilot" on the tiny plane takes over some of the responsibility for keeping it hovering in the same spot, which is extra nice because now even my three-year-old son can fly this thing around the house.
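For the curious, the stabilization logic boils down to something like this minimal sketch (purely illustrative; this is not the toy's actual firmware, and the gain value is made up): a proportional controller reads the rotation rate from the gyro and commands the tail rotor to counteract it.

```python
# Minimal sketch of gyro stabilization (illustrative only; not Air Hogs firmware).
# The gyro measures how fast the body is rotating; the controller commands the
# tail rotor to spin against that rotation so the helicopter holds its heading.

K_P = 0.8  # proportional gain (hypothetical value)

def stabilize(gyro_rate, pilot_yaw_command=0.0):
    """Return a tail-rotor command that cancels unwanted rotation.

    gyro_rate: measured rotation rate (positive = spinning right)
    pilot_yaw_command: how fast the pilot *wants* to turn
    """
    error = pilot_yaw_command - gyro_rate
    tail_rotor = K_P * error
    # Clamp to the actuator's physical limits.
    return max(-1.0, min(1.0, tail_rotor))

# Example: the body drifts right at 0.5 units/s while the pilot wants to hover.
print(stabilize(0.5))  # negative output -> tail rotor pushes back to the left
```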

In robotics terminology, this type of function is called "shared control". For example, you can direct a ground mobile robot to head in a certain direction, but the robot is capable of going around obstacles autonomously, so you don't have to worry about it. Although in the RC helicopter's case the stabilization autonomy falls pretty low on Tom Sheridan's Levels of Autonomy, it is a start. The robotic planes we use in our research can also stabilize themselves in the air in various wind conditions and maintain a constant speed. And once we load the terrain data into the control station, the UAVs can also maintain their height above ground. With GPS capabilities, the research UAVs can also follow waypoints.
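To make shared control concrete, here is a minimal sketch for the ground robot example (the function, gains, and numbers are all my own invention): the human supplies a desired heading, and the robot autonomously bends that heading away from nearby obstacles.

```python
# Minimal sketch of shared control (illustrative; names and values are hypothetical).
# The human provides a desired heading; the robot blends in an avoidance term
# so it steers around obstacles without the operator having to micromanage.

def shared_control(human_heading, obstacle_bearing, obstacle_distance,
                   safe_distance=2.0, avoid_gain=1.5):
    """Return the heading (radians) the robot actually drives."""
    if obstacle_distance >= safe_distance:
        return human_heading  # nothing nearby: obey the human exactly
    # The closer the obstacle, the harder we steer away from its bearing.
    urgency = (safe_distance - obstacle_distance) / safe_distance
    away = human_heading - obstacle_bearing  # sign tells us which way is "away"
    correction = avoid_gain * urgency * (1.0 if away >= 0 else -1.0)
    return human_heading + correction

# The human wants to drive straight (0.0 rad), but a rock sits slightly to the
# left, only 1 m away: the robot steers right while roughly honoring the command.
print(shared_control(0.0, obstacle_bearing=-0.2, obstacle_distance=1.0))
```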

The $20 Air Hogs RC helicopter, of course, is not that sophisticated. Besides, GPS works terribly in an indoor environment. However, it is entirely possible that I could use a computer vision program to estimate the position of the plane and then send control signals from a computer instead of the RC controller. Then the little plane might display slightly higher intelligence.
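If I ever try that, the vision side might look something like this rough sketch using OpenCV (the color thresholds and camera index are placeholders I made up; a real setup would also need camera calibration to convert pixels into room coordinates):

```python
# Rough sketch of vision-based position estimation with OpenCV (illustrative only;
# the HSV thresholds and camera index below are made-up placeholders).
import cv2

LOWER = (20, 100, 100)   # hypothetical HSV lower bound for the helicopter's color
UPPER = (35, 255, 255)   # hypothetical HSV upper bound

cap = cv2.VideoCapture(0)  # webcam watching the room
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)  # keep only helicopter-colored pixels
    m = cv2.moments(mask)
    if m["m00"] > 0:  # blob found: its centroid is our position estimate
        x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print("helicopter at pixel (%.0f, %.0f)" % (x, y))
        # A control loop would compare (x, y) to a target hover point here
        # and send throttle/yaw corrections to the helicopter.
cap.release()
```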

Another great thing about this helicopter is its durability. You can crash it left and right without worrying about damaging the device (which is a rare thing in the robot world). The biggest downside is that the tiny battery in the plane only supports about 5 minutes of flight on a full charge --- frankly, a bit too short for me, especially since time seems to zoom by when I'm having a great time flying this thing. Then it takes 20-30 minutes to recharge. The upside is that it really teaches my kids that patience is a virtue.


Anyway, this RC helicopter is a great toy for beginner operators and kids. If you want to read a more detailed review of this RC Helicopter, click here.


Picture of the Day:

Staples.com Black Friday Fail! Only a programmer will get a kick out of this!

Friday, May 01, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 2

Read Part 1: The Trolley Problem


9. The Gettier Problem (The Cow in the Field)

One of the major thought experiments in epistemology (the field of philosophy that deals with knowledge) is what is known as “The Cow in the Field.” It concerns a farmer who is worried his prize cow has wandered off. When the milkman comes to the farm, he tells the farmer not to worry, because he’s seen that the cow is in a nearby field. Though he’s nearly sure the man is right, the farmer takes a look for himself, sees the familiar black and white shape of his cow, and is satisfied that he knows the cow is there. Later on, the milkman drops by the field to double-check. The cow is indeed there, but it’s hidden in a grove of trees. There is also a large sheet of black and white paper caught in a tree, and it is obvious that the farmer mistook it for his cow. The question, then: even though the cow was in the field, was the farmer correct when he said he knew it was there?


The Cow in the Field is widely used to illustrate Edmund Gettier's criticism of the popular definition of knowledge as “justified true belief”—that is, that something becomes knowledge when a person believes it; it is factually true; and the person has a verifiable justification for their belief. In the experiment, the farmer's belief that the cow was there was justified by the testimony of the milkman and his own verification of a black and white object sitting in the field. It also happened to be true, as the milkman later confirmed. But despite all this, the farmer did not truly know the cow was there, because his reasoning turned out to be based on false premises. Gettier used examples along these lines as proof of his argument that the definition of knowledge as justified true belief needed to be amended. The video below shows another example of the Gettier Problem.


A robot or an AI agent can acquire knowledge in several distinct ways. The easiest one (at least for the programmer) is to memorize facts. For example: the capital of the United States is Washington D.C., the Earth is a sphere, and a triangle has three sides. These are beliefs we forcefully inject into the agent's brain, and the agent might blindly accept them on faith. AI agents are great at storing facts and can hold vast quantities of them. This is similar (roughly) to how we humans learn in elementary school.
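As a toy illustration (a minimal sketch of my own; the facts and question strings are arbitrary), fact memorization is little more than a lookup table:

```python
# Minimal sketch of knowledge-as-memorized-facts (illustrative only).
# The agent "believes" whatever we inject, with no justification of its own.
knowledge_base = {
    "capital of the United States": "Washington D.C.",
    "shape of the Earth": "sphere",
    "sides of a triangle": 3,
}

def recall(question):
    # The agent can only parrot back what it was told; it cannot verify anything.
    return knowledge_base.get(question, "I don't know")

print(recall("capital of the United States"))  # Washington D.C.
print(recall("color of the sky"))              # I don't know
```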

Another way of acquiring knowledge is to learn rules and then apply them to different problems. For example: don't run into an obstacle. Abstracting and representing rules can be quite challenging for designers; that's why robots today don't have many rules programmed into them. Having too many rules can also exponentially increase the computational complexity and cause internal conflicts, unless the robot is designed to ignore rules at times or apply only the rules that help optimize or maximize certain utilities, like we humans do at our convenience. However, once the rules are implemented, robots are great at executing them (as long as the rules are clearly defined). For example, we already have AI agents that can solve or generate proofs for very complicated math problems, even better than their human counterparts. This is similar (roughly) to how we humans learn in middle school. Learning by demonstration probably falls under this category as well.
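A minimal sketch of what such rule-based behavior with utility-driven conflict resolution might look like (the rules and utility values are entirely made up for illustration): when several rules fire at once, the robot acts on the one with the highest utility.

```python
# Minimal sketch of rule-based control with utility-driven conflict resolution
# (illustrative only; the rules and utility values are invented).

# Each rule: (condition on the world state, proposed action, utility of obeying it)
rules = [
    (lambda s: s["obstacle_ahead"], "turn", 10.0),    # safety rule
    (lambda s: s["battery_low"],    "dock", 8.0),     # self-preservation rule
    (lambda s: True,                "forward", 1.0),  # default behavior
]

def decide(state):
    """Fire all applicable rules, then act on the one with the highest utility."""
    applicable = [(action, utility) for cond, action, utility in rules if cond(state)]
    return max(applicable, key=lambda au: au[1])[0]

# Both the safety and battery rules fire; the higher-utility one wins.
print(decide({"obstacle_ahead": True, "battery_low": True}))  # turn
```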

A third way of acquiring knowledge for robots and AI agents is by means of traditional machine learning over existing data sets. Whether supervised learning (where records in a data set are labeled by humans) or unsupervised learning (no labels), the basic idea is that the agent tries to "rationalize" the data sets, finds some consistent properties, or "insights", in them, and then applies those to new information (generalization). This is similar (roughly) to how we humans learn in college, where we are exposed to a lot of facts but have to come up with a general sense of them and conclude with our own, newly identified rules. Agents are normally bounded by the "features" identified by the humans who provided the data sets; however, a few smart agents can try to come up with "features" of their own, and that falls under the name of "Active Learning".
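As a minimal sketch of the supervised case (a from-scratch nearest-neighbor classifier; the data points and labels are invented), the agent generalizes from human-labeled examples to a point it has never seen:

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbor classifier
# (illustrative only; the feature vectors and labels below are invented).

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, query):
    """Label a new point with the label of its closest labeled example."""
    nearest = min(training_data, key=lambda item: distance(item[0], query))
    return nearest[1]

# Human-labeled examples: (feature vector, label)
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.3), "dog"),
]

# Generalization: a point the agent has never seen before.
print(predict(training_data, (4.5, 5.0)))  # dog
```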

Yet another way of acquiring knowledge for these artificial beings is through Bayesian networks (networks of interconnected probabilistic nodes). Given that a good Bayesian network exists (or one that's pretty good at self-evolving), the agent first has some a priori beliefs about things (e.g., the sky is blue and grass is green), acquired either through the previously mentioned methods or simply non-informative (e.g., a woman is probably just as lazy as a man). Then, through observations, the agent learns from experience and obtains a posteriori knowledge. The new knowledge might be completely opposite to the a priori beliefs, so the agent modifies its beliefs about the existing rules, previous facts, the world, and everything in the universe. You probably already see where I am going: this is similar (roughly) to how we human beings learn in grad school.
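Here is a minimal sketch of that prior-to-posterior updating (a simple beta-binomial update; the door-watching scenario is invented for illustration): the agent starts with a non-informative belief and revises it with every observation.

```python
# Minimal sketch of Bayesian belief updating (beta-binomial; illustrative only).
# The agent starts with a non-informative prior about "how often the door is open"
# and revises its belief with every observation, even when the evidence
# contradicts its initial assumption.

alpha, beta = 1.0, 1.0   # non-informative prior: open and closed equally likely

observations = [True, True, False, True, True, True]  # True = door seen open

for door_open in observations:
    if door_open:
        alpha += 1       # evidence for "open"
    else:
        beta += 1        # evidence for "closed"

posterior_mean = alpha / (alpha + beta)
print("P(door is open) = %.2f" % posterior_mean)  # belief has shifted toward "open"
```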

Not to ridicule things, but by the time the agent becomes really imaginative and starts to question everything simply based on reports from lower-level agents (hmm... grad-school robots?), we make it a professor. (I hope my advisor is not reading this...)

Anyway, back to the original topic. IMHO, we can't always rely on justified true beliefs, but isn't at least trying to justify a belief better than believing blindly? Of course, when it comes to authority, my robots don't have to justify their beliefs, because to them, I am God!

Read Part 3: The Ticking Time Bomb


Video of the Day:

Great examples of optical illusions, showing that we can't always trust what we see with our own eyes. But does this mean we shouldn't trust anything we see?


BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Thursday, April 30, 2009

Random Thoughts: A Few Brain Teasers

Sharing some brain teasers I found interesting. See how many you can figure out yourself:




Problem 1: The Missing Dollar
==========================

Image credit: Canvas

Three guests check into a hotel room. The normal charge for one night is $30, so each guest pays $10. Later, the owner realizes there's a promotion and the bill should only be $25. To rectify this, he gives the bellhop $5 to return to the guests. On the way to the room, the bellhop realizes that he cannot divide the money equally. Since the guests don't know the total of the revised bill, the bellhop decides to just give each guest $1 and keep $2 for himself.

Now that each of the guests has been given $1 back, each has paid $9, bringing the total paid to $27. The bellhop has $2. If the guests originally handed over $30, what happened to the remaining $1?


Problem 2: Cut the Loss
==========================

Image credit: iStockPhoto

One day a customer came to Tom's store. He collected $20 worth of merchandise and then handed Tom a $50 bill. Since Tom didn't have enough change, he went to Charlie's store next door and exchanged the $50 bill for change. Then he gave the customer $30 in change. Shortly after the customer left, Charlie came to Tom and told him that the $50 bill was counterfeit, so Tom immediately gave Charlie a genuine $50 bill in exchange. After all this, how much money did Tom lose from the incident? Was it $50, $80, or $100?


Problem 3: Green Onion Vendor
==========================

Image credit: The Virginian Pilot

A customer came to the green onion vendor and asked about the price of the green onions. The vendor said that the green onions sell for one dollar per pound; since he had a total of 100 pounds, it would cost one hundred dollars to buy them all. The customer then asked if the vendor would consider selling the green onion stems and leaves separately. The vendor told him that the stems would sell for 70 cents per pound and the leaves for 30 cents per pound. The customer then decided to buy 50 pounds of green onion stems and 50 pounds of green onion leaves. The vendor calculated as follows:


50 x 0.7 = 35, 50 x 0.3 = 15, 35 + 15 = 50

So the customer paid $50 and left. Now the green onion vendor was very confused: how come the customer was able to take away his $100 worth of green onions for only $50?


Problem 4: Find the Odd Ping Pong Ball
==========================

Image credit: 123RF.com

You have 12 ping pong balls. One of them is slightly heavier OR lighter than the others. You also have a balance scale. How can you find the odd ball out, and determine whether it is heavier or lighter, in only 3 weighings? Please help!


Problem 5: Silver Utensils Problem
==========================

Image credit: Shutterstock

Bill wanted to buy some silver utensils, so he went to the utensil store, where he found out that he had only enough money to buy 21 forks and 21 spoons, or exactly 28 knives. Since he needs to buy the same number of forks, spoons, and knives to make complete sets, and he would really prefer to use up all the money he brought, can you help him out?


Check out the answers to these problems here.



Video of the Day:

A great review of the year 2011 based on what people searched for on Google: