

Leave me comments so I know people are actually reading my blogs! Thanks!

Friday, May 08, 2009

Robot of the Day: Microswimmer Robot Taking Pictures of Your Intestines

This is a robot I read about recently: a robot the size of a pill that can swim inside your body, powered by the magnetic fields of an external MRI machine, with the task of taking pictures of your intestines.

Credit: Tel Aviv University and Brigham & Women's Hospital
The robot was developed by scientists at Tel Aviv University in Israel and Brigham & Women's Hospital in Boston and introduced in a recent paper published in Biomedical Microdevices. Although still in the early testing stage in a water tank, the tiny robot seems to maneuver well with its 20 mm x 5 mm tail. The eventual objective of the robot is to enable doctors to see the inside of a patient's intestines and detect early stages of gastrointestinal cancer.

Doctors can already have patients swallow a pill-sized camera. Pictures are taken every half second or so until the camera is passed. However, doctors have no control over the camera's movement when they want pictures of a specific part of the body. One challenge with the robot idea is how to power it, because embedding a large power supply would increase the size of the robot and cause other problems. The beauty of the proposed solution is that it uses the magnetic field to control the movement of the robot via the little copper-and-polymer tail. And since MRI machines are already common devices in hospitals, they can come in very handy.

I remember seeing a video in a Discovery Channel documentary of how live bacteria can be controlled by magnetic fields to push around a "nano-robot". In this case, the magnetic field controls the robot directly, which is probably more predictable than trying to control bacteria.

Some people even suggest that we could leave such robots inside us permanently, like parasites. Now imagine having a bunch of these things inside your body, constantly posting images or videos of your insides online like those live webcams.... Now you not only have no privacy outside of your body, you don't even have privacy inside your body.... Well, if lives can be saved, I guess it's okay.

To read more about this, click here.





The good news: the robot is reusable!
The bad news: the robot is reusable!



Thursday, May 07, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 5

Read Part 4: Einstein's Light Beam

6. The Ship of Theseus

One of the oldest of all thought experiments is the paradox known as the Ship of Theseus, which originated in the writings of Plutarch. It describes a ship that remained seaworthy for hundreds of years thanks to constant repairs and replacement parts. As soon as one plank became old and rotted, it would be replaced, and so on until every working part of the ship was no longer original to it. The question is whether this end product is still the same Ship of Theseus, or something completely new and different. If it’s not, at what point did it stop being the same ship? The philosopher Thomas Hobbes would later take the problem even further: if one were to take all the old parts removed from the Ship of Theseus and build a new ship from them, then which of the two vessels is the real Ship of Theseus? For philosophers, the story of the Ship of Theseus is used as a means of exploring the nature of identity, specifically the question of whether objects are more than just the sum of their parts.


I couldn't help but think of the story about Steve Jobs and his Mercedes-Benz. Steve was able to exploit a loophole in California law and rove around Silicon Valley in a Mercedes without license plates:

It turns out there's a provision in California regulations that give one six months to get license plates for a new car, and Jobs took advantage of it. Yes, he leased a silver Mercedes SL55 AMG, said Callas -- and every six months he traded it in for a new one.
So to Steve, the car was still The Car of Jobs, but to the California DMV, the car was a different one.

It might not matter too much if the thing we are talking about is just a physical object like Steve Jobs' car. But what if it is an intangible object, for example, a song? If we shift the pitch of all the notes in the song up or down, is it still the same song? If you think the answer is yes, then what if we shift the pitch into a range where humans can no longer hear the notes?
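To make the pitch question concrete, here is a tiny Python sketch (my own illustration; the note names and numbers are just examples). Transposing a song by n semitones multiplies every frequency by 2^(n/12), and human hearing tops out around 20 kHz, so transpose far enough and the "same" song becomes inaudible:

```python
# A minimal sketch of pitch transposition: shifting by n semitones
# multiplies each note's frequency by 2**(n/12).
AUDIBLE_MAX_HZ = 20000.0  # rough upper limit of human hearing

def transpose(freqs_hz, semitones):
    """Shift every note by the same number of semitones (can be negative)."""
    ratio = 2 ** (semitones / 12.0)
    return [f * ratio for f in freqs_hz]

melody = [261.63, 293.66, 329.63]          # C4, D4, E4
same_song = transpose(melody, 12)          # up one octave: still recognizable
ghost_song = transpose(melody, 84)         # up seven octaves: beyond hearing
print(all(f > AUDIBLE_MAX_HZ for f in ghost_song))  # True: same song, unheard
```

Every interval between the notes is preserved, so by any structural definition it is still the same song; we just can't hear it anymore.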

Now let's think about robots. US soldiers in Afghanistan have been using remotely controlled robots to detonate roadside bombs. As unintelligent as these robots are, many soldiers have developed a close affection for these devices. When damaged robots are sent in for repair, the normal procedure is to simply send back a replacement unit because that is the most time- and cost-efficient. However, many soldiers have demanded that the exact same unit be repaired and sent back, because they have assigned a personal identity to the robot as a teammate and friend.

Let's think one step further. With robots that are more intelligent, especially ones that learn from past experience, it is still possible for us to duplicate the programs and memory of a robot (with the exception of Johnny 5) and then load them into an identical robot. Now we run into a real identity crisis -- both for the robots and for the users. Both robots will think they are the original robot, and to the user, both are the original robot, with the same memory, same logic, and same appearance. (Why do I keep thinking of the movie The 6th Day?) What problems will this create?

I also can't help but think of all the protagonists in reincarnation novels (like the Joy of Life story I am translating, for example). Are they still the same people? Probably not! But why not?

That's enough philosophical discussion for today! Have a good day!


Read part 6: Galileo's Gravity Experiment


Video of the Day:

Enjoy the beautiful song The Velocity of Love and its beautiful video while you struggle with philosophical thought experiments!


BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Wednesday, May 06, 2009

Robot of the Day: CQ-10 Snowgoose Cargo Delivery Glider UAV


CQ-10 Snowgoose is a glider UAV developed by Mist Mobility Integrated Systems Technology (MMIST), a Canadian firm, for pinpoint-precision delivery of small cargo. It is one of the earliest UAVs (Unmanned Aerial Vehicles) used by the US military in the war in Afghanistan.

The Snowgoose UAV uses a parafoil for lift and a "pusher" propeller to help with the glide. A newer version, the CQ-10B, includes an autogyro rotor for lift and is capable of vertical takeoff and landing. The UAV is therefore capable of three types of deployment:
  • ground launching from the back of a truck or Humvee,
  • air launching from the back of a cargo plane,
  • or self launching using the gyro rotor.
The Snowgoose UAV has six modular cargo bays and can be used for leaflet dispensing or for delivering small amounts of ammo or medical supplies. The newer model can carry up to 2400 lbs of cargo and travel up to 93 miles. Operators can upload flight plans to the UAV, which then performs the delivery fully autonomously. This can be very helpful in search and rescue missions, disaster relief efforts, or support of military operations in hostile environments.


In 2003, U.S. Special Operations Command bought five of these for $250,000 each and deployed them to Afghanistan in support of special operations in the tough terrain.

Given the steep price tag, we are unlikely to see such UAVs used in search and rescue missions (with the exception of this story), which is kind of a shame. That's why cheaper and smaller micro-UAVs stand a better chance of actual deployment by local search and rescue teams.


Video of the Day:

A woman floating on a surfboard near Santa Cruz, California, almost ended up on the lunch menu for a humpback whale.

Tuesday, May 05, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 4

Read Part 3: The Ticking Time Bomb

7. Einstein's Light Beam

According to his book Autobiographical Notes, Albert Einstein’s famous work on special relativity was inspired by a thought experiment he conducted when he was only 16 years old. He thought about chasing a beam of light as it traveled through space, and reasoned that if he were able to move next to it at the speed of light, he should be able to observe the light frozen in space as “an electromagnetic field at rest though spatially oscillating.” For Einstein, this thought experiment proved that for his imaginary observer “everything would have to happen according to the same laws as for an observer who, relative to the Earth, was at rest.”
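A quick worked calculation (my own addition; this is just the standard textbook velocity-composition formula) shows why the frozen beam can never actually be caught. Galilean intuition says that chasing a beam at speed v leaves it receding at c - v, so at v = c it would appear frozen. Special relativity replaces simple subtraction with the composition rule, and for a light beam (u = c) the chase gains nothing:

\[
u' = \frac{u - v}{1 - uv/c^2}\bigg|_{u=c} = \frac{c - v}{1 - v/c} = \frac{c(c - v)}{c - v} = c .
\]

No matter how fast the observer flies, the beam still recedes at exactly c, which is why the "electromagnetic field at rest though spatially oscillating" can never be observed.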

The video below gives a good example of special relativity.



I can't think of how this relates to AI, but the lesson here is that simple thought experiments can lead to extraordinary findings. So I'll keep running all kinds of thought experiments in my head, like how the properties of intelligence should remain the same regardless of which species is displaying the intelligent behavior.

Read Part 5: The Ship of Theseus


 

I love staring at the starry sky, because that lets me stare right into the past.






BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Sunday, May 03, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 3

Read Part 2: The Gettier Problem


8. The Ticking Time Bomb

Imagine that a bomb or other weapon of mass destruction is hidden in your city, and the timer on it will soon strike zero. You have in your custody a man with knowledge of where the device is planted. Do you resort to torture in order to get him to give up the information?

Sounds like an action movie? It's just another thought experiment, called the "ticking time bomb". Like the trolley problem, the ticking time bomb scenario is an ethical problem that forces one to choose between two morally questionable acts.

Thanks to many movies and TV shows, the ticking time bomb scenario is one of the most discussed thought experiments. The US government has laws against torturing prisoners (although we all know how well those laws are followed; think Abu Ghraib), but would breaking the law be justified if a large number of lives could be saved? A British news article extended the scenario and asked whether one would be willing to resort to torturing the man's wife and children as a means of extracting the information from him. Now that sounds really scary!

According to Wikipedia:
In September 2002, when reviewing Alan Dershowitz's book, Why Terrorism Works: Understanding the Threat, Responding to the Challenge, Richard Posner, a judge of the United States Court of Appeals for the Seventh Circuit, wrote in The New Republic, "If torture is the only means of obtaining the information necessary to prevent the detonation of a nuclear bomb in Times Square, torture should be used--and will be used--to obtain the information.... No one who doubts that this is the case should be in a position of responsibility."

Morgan from the Center for American Free Thoughts gave an interesting discussion in the video below (sorry, I can't find a commercial-free version, so this will do).


I don't know which side I should support. It is quite a dilemma. However, I am one hundred percent sure that we shouldn't allow the government to casually slap the label of "terrorist" on innocent citizens and then torture them or take away their basic rights and claim it is "justified".

Read Part 4: Einstein's Light Beam


Picture of the Day:

This spectacular photo looks like it came out of a sci-fi movie. It was actually taken by a Russian girl who snuck into a Russian military rocket factory.



BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Saturday, May 02, 2009

Robot of the Day: Air Hogs Gyro X RC Helicopter

Maybe because of the frequent sightings of helicopters flying over our house, my three-year-old son developed a special interest in helicopters and always wanted one. And since my research is related to robotic airplanes, I also had a strong urge to own a robotic aircraft myself. So when I found out about the Black Friday deal on the Air Hogs Gyro X RC Helicopter, I jumped on it, under the claim that the helicopter was really a present for my son's third birthday!

The RC helicopter normally sells for $40, but I was able to get one at half price thanks to the Black Friday sale at RadioShack. It is a remote-controlled toy, so the kit includes a controller that lets the user control the throttle (to fly the helicopter up and down), together with an omnidirectional stick to control the flight direction. The controller also has a wheel on the side that lets you trim the aircraft (adjusting its balance so it doesn't keep rotating in one direction). The controller doubles as a charger: itself powered by four AA batteries, it has a wire that connects to the helicopter to charge the helicopter's battery.

This RC toy can actually be categorized as a robotic device because it has a built-in "gyro" electronic stabilization system for smooth flight. The "gyro" sensor senses unwanted rotation and then adjusts the speed and direction of the small tail propeller to automatically stabilize the aircraft. What this means is that a beginner can focus on the throttle of the helicopter (controlling the altitude) and not worry about keeping it steady. In a sense, the "autopilot" on the tiny aircraft takes over some of the responsibility for keeping it hovering in the same spot, which is extra nice because now even my three-year-old son can fly this thing around the house.
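For fun, here is a minimal Python sketch of the idea as I understand it (my own guess at the principle, not Air Hogs' actual firmware; the gain and numbers are made up). A gyro measures the unwanted yaw rate, and a proportional controller trims the tail-rotor speed to cancel it:

```python
# A toy proportional (P) controller for tail-rotor stabilization.
# Assumption: positive yaw rate means the nose drifts right.
K_P = 0.8  # proportional gain -- hypothetical, tuned by trial and error

def stabilize_tail(pilot_yaw_command, measured_yaw_rate, base_tail_speed):
    """Adjust tail-rotor speed so the helicopter yaws only when commanded."""
    error = pilot_yaw_command - measured_yaw_rate  # desired minus actual
    return base_tail_speed + K_P * error

# One control cycle: the pilot wants no yaw, but the gyro senses a drift.
tail_speed = stabilize_tail(pilot_yaw_command=0.0,   # hold heading
                            measured_yaw_rate=-0.3,  # drifting left (rad/s)
                            base_tail_speed=0.5)     # nominal motor duty
print(tail_speed)  # 0.74: the tail rotor works harder to counter the drift
```

Run this every few milliseconds and the pilot never has to think about heading at all.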

In robotics terminology, this type of function is called "shared control". For example, you can direct a mobile ground robot to head in a certain direction, and the robot is capable of going around obstacles autonomously, so you don't have to worry about them. Although in the RC helicopter case the stabilization autonomy falls pretty low on Tom Sheridan's levels of autonomy, it is a start. The robotic planes we use in our research can also stabilize themselves in the air under various wind conditions and maintain a constant speed. Once we load terrain data into the control station, the UAVs can also maintain their height above ground. With GPS capabilities, the research UAVs can follow waypoints as well.
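Here is a minimal sketch of shared control in Python (my own toy illustration, not from our research code): the operator supplies a desired heading, and the robot overrides it only when an obstacle lies in the way:

```python
# A toy shared-control policy: obey the operator unless an obstacle
# (given here as a bearing in radians) is too close to the commanded path.
def shared_control(operator_heading, obstacle_bearings, avoid_margin=0.5):
    """Return the heading to fly: the operator's, or a safe detour."""
    for obs in obstacle_bearings:
        if abs(obs - operator_heading) < avoid_margin:
            # steer to just outside the margin, away from the obstacle
            if operator_heading >= obs:
                return obs + avoid_margin
            return obs - avoid_margin
    return operator_heading  # path is clear: follow the operator exactly

# Operator says "go straight" (0.0 rad); an obstacle sits slightly right.
print(shared_control(0.0, obstacle_bearings=[0.2]))  # -0.3: nudge left
```

The operator still feels in charge, while the low-level autonomy quietly handles the part of the task humans are bad at.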

The $20 Air Hogs RC helicopter, of course, is not that sophisticated. Besides, GPS works terribly indoors. However, it is entirely possible that I could use a computer vision program to estimate the position of the helicopter and then send control signals from a computer instead of the RC controller. Then the little aircraft might display slightly higher intelligence.
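As a sketch of what that could look like (my own untested guess at an implementation; the HSV color range is hypothetical), a webcam could track a brightly colored marker taped to the helicopter and estimate its pixel position every frame:

```python
# A minimal OpenCV sketch: estimate the helicopter's (x, y) image position
# by tracking a colored marker. A real system would convert this estimate
# into control signals sent through the RC transmitter.
import cv2
import numpy as np

LOWER_HSV = np.array([0, 120, 120])   # hypothetical HSV range for an
UPPER_HSV = np.array([10, 255, 255])  # orange marker on the canopy

def estimate_position(frame):
    """Return the marker's pixel centroid, or None if it is not visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)             # the webcam watching the room
ok, frame = cap.read()
if ok:
    print(estimate_position(frame))   # e.g. pixel coordinates of the marker
cap.release()
```

Close the loop with a controller like the stabilization sketch above, and the little aircraft starts to look a bit more like a robot.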

Another great thing about this helicopter is its durability. You can crash it left and right without worrying about damaging it (a rare thing in the robot world). The biggest downside is that the tiny battery only keeps it flying for about 5 minutes on a full charge --- frankly, a bit too short for me, especially since time seems to zoom by when I'm having a great time flying this thing. It then takes 20-30 minutes to recharge. The upside is that it really teaches my kids that patience is a virtue.


Anyway, this RC helicopter is a great toy for beginner operators and kids. If you want to read a more detailed review of this RC helicopter, click here.


Picture of the Day:

Staples.com Black Friday Fail! Only a programmer will get a kick out of this!

Friday, May 01, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 2

Read Part 1: The Trolley Problem


9. The Gettier Problem (The Cow in the field)

One of the major thought experiments in epistemology (the field of philosophy that deals with knowledge) is what is known as “The Cow in the Field.” It concerns a farmer who is worried his prize cow has wandered off. When the milkman comes to the farm, he tells the farmer not to worry, because he’s seen that the cow is in a nearby field. Though he’s nearly sure the man is right, the farmer takes a look for himself, sees the familiar black and white shape of his cow, and is satisfied that he knows the cow is there. Later on, the milkman drops by the field to double-check. The cow is indeed there, but it’s hidden in a grove of trees. There is also a large sheet of black and white paper caught in a tree, and it is obvious that the farmer mistook it for his cow. The question, then: even though the cow was in the field, was the farmer correct when he said he knew it was there?


The Cow in the Field was first used by Edmund Gettier as a criticism of the popular definition of knowledge as “justified true belief”—that is, that something becomes knowledge when a person believes it; it is factually true; and they have a verifiable justification for their belief. In the experiment, the farmer’s belief that the cow was there was justified by the testimony of the milkman and his own verification of a black and white object sitting in the field. It also happened to be true, as the milkman later confirmed. But despite all this, the farmer did not truly know the cow was there, because his reasoning for believing it turned out to be based on false premises. Gettier used this experiment, along with a few other examples, as proof of his argument that the definition of knowledge as justified true belief needed to be amended. The video below shows another example of the Gettier Problem.


A robot or an AI agent can acquire knowledge in several distinct ways. The easiest one (at least for the programmer) is to memorize facts. For example: the capital of the United States is Washington D.C., the earth is a sphere, and a triangle has three sides. These are beliefs we forcefully inject into the agent's brain, and the agent might blindly take them on faith. AI agents are great at storing facts and can store large quantities of them. This is similar (roughly) to how we humans learn in elementary school.
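In code, this first kind of "knowledge" is nothing more than a lookup table. A minimal Python sketch (my own illustration):

```python
# Facts injected directly into the agent's "brain" as key-value pairs.
facts = {
    ("capital_of", "United States"): "Washington D.C.",
    ("shape_of", "Earth"): "sphere",
    ("sides_of", "triangle"): 3,
}

def recall(relation, subject):
    """The agent 'knows' only what was injected; everything else is a blank."""
    return facts.get((relation, subject), "I don't know")

print(recall("capital_of", "United States"))  # Washington D.C. -- pure faith
print(recall("capital_of", "France"))         # I don't know
```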

Another way of acquiring knowledge is to learn rules and then apply them to different problems. For example: don't run into an obstacle. Abstracting and representing rules can be quite challenging for designers, which is why robots today don't have many rules programmed into them. Having too many rules can also exponentially increase the computational complexity and cause internal conflicts, unless the robot is designed to ignore rules at times or to apply only the rules that help optimize or maximize certain utilities, like we humans do at our convenience. However, once the rules are implemented, robots are great at executing them (as long as the rules are clearly defined). For example, we already have AI agents that can solve or generate proofs for very complicated math problems, even better than their human counterparts. This method is similar (roughly) to how we humans learn in middle school. Learning by demonstration probably falls under this category as well.
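A toy Python sketch of the rule-based approach (my own example; one simple way to dodge the rule-conflict problem is to check prioritized rules in order and let the first match win):

```python
# Prioritized condition-action rules: the first matching rule wins.
RULES = [
    (lambda s: s["obstacle_distance"] < 0.5, "stop"),
    (lambda s: s["battery"] < 0.1,           "return_home"),
    (lambda s: True,                         "continue_mission"),  # default
]

def decide(state):
    """Apply the first rule whose condition matches the current state."""
    for condition, action in RULES:
        if condition(state):
            return action

print(decide({"obstacle_distance": 0.3, "battery": 0.8}))   # stop
print(decide({"obstacle_distance": 2.0, "battery": 0.05}))  # return_home
```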

A third way of acquiring knowledge for robots and AI agents is by means of traditional machine learning using existing data sets. Whether supervised learning (where records in a data set are labeled by humans) or unsupervised learning (no labels), the basic idea is that the agent tries to "rationalize" the data sets, finds some consistent properties, or "insights", in them, and is then able to apply those insights to new information (to generalize). This is similar (roughly) to how we humans learn in college, where we are exposed to a lot of facts but have to form general impressions of them and then draw our own, newly identified rules. Agents are normally bounded by the "features" identified by the humans who provided the data sets. However, a few smart agents can try to come up with "features" of their own, which falls under the name of "active learning".
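A minimal sketch of the supervised flavor (my own illustration using a nearest-centroid classifier, one of the simplest learning algorithms): the agent "rationalizes" the labeled data by averaging each class into a centroid, then generalizes by assigning new points to the nearest one:

```python
# Nearest-centroid learning: average each class, classify by distance.
def train(points, labels):
    """Summarize the labeled data set: one centroid per class."""
    sums, counts = {}, {}
    for (x, y), label in zip(points, labels):
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Generalize: label a new point by its nearest class centroid."""
    return min(centroids, key=lambda l: (centroids[l][0] - point[0]) ** 2 +
                                        (centroids[l][1] - point[1]) ** 2)

model = train([(0, 0), (1, 1), (8, 9), (9, 8)], ["cat", "cat", "dog", "dog"])
print(predict(model, (7, 7)))  # dog -- an "insight" applied to new data
```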

Yet another way of acquiring knowledge for these artificial beings is through Bayesian networks (networks of probabilistic nodes, superficially similar to a neural network). Given that a good Bayesian network exists (or one that's pretty good at self-evolving), the agent first has some a priori beliefs about things (e.g., the sky is blue and grass is green), acquired either through the previously mentioned methods or simply non-informative (e.g., a woman is probably just as lazy as a man). Then, through observations, the agent learns from experience and obtains a posteriori knowledge. The new knowledge might be completely opposite to the a priori beliefs, so the agent modifies its beliefs about the existing rules, previous facts, the world, and everything in the universe. You probably already see where I am going: this is similar (roughly) to how we human beings learn in grad school.
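The belief-revision step is just Bayes' rule applied over and over. A minimal sketch (my own illustration; the two hypotheses and the likelihood numbers are made up):

```python
# Bayesian belief revision: posterior is proportional to prior * likelihood.
prior = {"sky_is_blue": 0.5, "sky_is_green": 0.5}       # non-informative a priori
likelihood = {"sky_is_blue": 0.9, "sky_is_green": 0.1}  # P(observe blue | h)

def update(belief, likelihood):
    """One observation: reweight each hypothesis and renormalize."""
    unnorm = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

belief = prior
for _ in range(3):                     # three observations of a blue sky
    belief = update(belief, likelihood)
print(belief)  # a posteriori: about 99.9% "sky_is_blue"
```

A few observations are enough to overturn an uninformed prior, which is exactly the grad-school experience.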

Not to ridicule things, but by the time the agent becomes really imaginative and starts to question everything simply based on reports from lower-level agents (hmm... grad-school robots?), we make it a professor. (I hope my advisor is not reading this...)

Anyway, back to the original topic. IMHO, we can't always rely on justified true beliefs, but isn't at least trying to justify a belief better than holding blind beliefs? Of course, when it comes to authority, my robots don't have to justify their beliefs, because to them, I am God!

Read Part 3: The Ticking Time Bomb


Video of the Day:

Great examples of illusions. So we can't always trust what we see with our eyes. But does that mean we shouldn't trust anything we see?


BTW: The easiest way to remember my blog address is http://lanny.lannyland.com