Search within Lanny's blog:


Leave me comments so I know people are actually reading my blogs! Thanks!

Showing posts with label AI and Robots. Show all posts

Thursday, July 09, 2020

Google trained ResNet50 in 0.47 minutes, setting a new MLPerf AI training record

Google has built the world's fastest ML training supercomputer and was able to set new records in six out of eight MLPerf benchmarks.


For example, they trained ResNet50 in 0.47 minutes. What does this really mean? According to the Import AI newsletter:

Multi-year progress: These results show the time it takes Google to train a ResNet50 network to convergence against ImageNet, giving us performance for a widely used, fairly standard AI task:
- 0.47 minutes: July 2020, MLPerf 0.7.
- 1.28 minutes: June 2019, MLPerf 0.6.
- 7.1 minutes: May 2018, MLPerf 0.5.
- Hours - it used to take hours to train this stuff, back in 2017, even at the frontier. Things have sped up a lot.

You can read more about it from Google's Blog.

But What Does This Really Mean?

In a recent video posted by Lex Fridman, he talked about how the very hot GPT-3 compares to a human brain. Specifically, he told us that GPT-3 has 175 billion parameters and costs a whopping $4.6 million to train.


So here's what it means:
Only tech giants like the FAANG companies and Microsoft can afford the hardware and money to train large networks like this; small startups and smaller players really don't stand a chance.
You can learn what GPT-3 is in 3 minutes in this article.

So anyway, while we celebrate the advancement of AI, we also have to be careful about the increasing influence capital has on innovation. I really do not want this to turn into something like our legal system, where big corporations always win even when they "lose" the case or have to settle.






A pandemic does not just go away no matter how much you ignore it, deny it, or wish it would just magically disappear.







BTW: The easiest way to remember my blog address is http://blog.lannyland.com

Saturday, July 20, 2019

A great summary article on what happened inside Google for the last three years

Sharing a good read: an article in WIRED summarizing the events that happened inside Google over the last three years. WIRED talked with 47 current and former Google employees and produced a great story explaining the bumpy road Google has taken over the last three years. The article was written by Nitasha Tiku, a senior writer at WIRED.


THREE YEARS OF MISERY INSIDE GOOGLE, THE HAPPIEST COMPANY IN TECH




Wednesday, July 17, 2019

10 Famous Thought Experiments That Just Boggle Your Mind Part 10

Read part 9: Schrodinger's Cat

1. Brain in a Vat

Imagine a mad scientist has taken your brain from your body and placed it in a vat of life-sustaining fluid. Neurons in your brain are wired into a supercomputer that can generate all the sensory signals your brain normally receives. Thus, this computer has the ability to simulate your everyday experience. If this were indeed possible, how could you ever truly prove that the world around you was real, and not just a simulation generated by a computer?

This thought experiment, called "brain in a vat," has to be one of the most influential thought experiments, touching subjects from cognitive science and philosophy to popular culture. The idea for the experiment, popularized by the American philosopher Hilary Putnam, dates all the way back to the 17th-century philosopher René Descartes. In his book Meditations on First Philosophy, Descartes questioned whether he could ever truly prove that all his sensations were really his own, and not just an illusion caused by an "evil demon." Descartes answered this problem with his classic maxim "cogito ergo sum" ("I think, therefore I am"). Unfortunately, the brain in a vat experiment complicates this argument, too, since a brain connected to electrodes could still think.





This sounds awfully familiar, you say: the movie The Matrix. Well, that film, along with several other sci-fi stories and movies, was heavily influenced by the brain in a vat thought experiment. Neo was hooked into a big simulation called The Matrix, and before he was unplugged, he thought that was his real life. Then it turned out that Zion, the last sanctuary for humankind, was merely another layer of the Matrix, another simulation.



There are really two perspectives here:

The first perspective is that of humans.

When we humans dream, our brains can experience signals that are pure simulations generated by the brain itself. We can see, smell, touch, walk, jump, run, fall off a cliff or a tall building (this one happens to pretty much everyone), and even fly. The simulation generated by the brain is so good that we can interact with the world and the world changes accordingly; for example, when we turn in a dream, the world rotates correctly "in front of" our eyes.

Once, in a dream of mine, I correctly recognized that I was in a dream. Since anything is possible in a dream, I tested it. "Let there be a spear!" I commanded. And a giant spear magically appeared in my grip. The next thing I knew, I was swinging the spear, showing off a beautiful staff form. Unfortunately, the dream ended shortly after. To this day, I don't know why I asked for a spear when I could have had anything I wanted in a dream. That's also why my kids now call me Shakespear from time to time.

So how do you know you are not dreaming? How do you know you are not in a simulation? There's no spinning top to give you a clue like the one in the movie Inception.




Humans are also very good at simulating input signals to our brains when we are awake. That's why we enjoy books, plays, and movies. We put ourselves into the characters' minds, or merely become bystanders in an imaginary world. Then we experience joy, sorrow, love, and hatred from things that never truly happened in the real world.

Then there's the world of MMORPGs (massively multiplayer online role-playing games), from the text-based MUD (Multi-User Dungeon) games of the early days, to EverQuest, to World of Warcraft, to Minecraft today. Players of such games get deeply immersed in the simulated worlds and sometimes prefer the fake worlds over the real one.

The other perspective is that of artificial agents.

In the Matrix trilogy, Neo had no idea that he was actually not even a real human. When a software agent is told that the world it lives in is the real world, it is just like a brain in a vat. When Agent Smith realized the truth, he went completely haywire!

Software agents we build today are frequently developed or trained in simulations. It is true that most simulations today are not very sophisticated (though the agents themselves don't know that), but they will get better over time. And the agents we train or send to work will also get more sophisticated and more intelligent. What if one day they become so intelligent that they realize they live in a simulation? What happens if they then break out of the simulation, start sensing the real world with real sensors, and interact with the real world with real actuators? Will we end up with Jane (from the Ender's Game series)? Or Philip (from The Outcast)?

Now comes the ultimate question: How do you know you are a real human, not just an artificial intelligence agent?

In the movie Total Recall, Arnold Schwarzenegger couldn't really tell whether his Martian adventure was just the imaginary vacation he had paid for or he really was a spy. And in the movie Inception, Leonardo DiCaprio also wasn't sure whether his return home to his kids was real or merely another dream. In a sense, Arnold and Leonardo were both in a superposition of being in a dream and in reality (why do I keep thinking of Schrödinger's Cat?). Arnold stayed confused. But Leonardo chose to accept it as reality, even though the top never stopped spinning. So maybe what matters most is whether you choose to believe the world around you is real. But then what would you do if you were presented with the blue pill and the red pill?

Man, this is deep!! My brain hurts (whether it is in a vat or not). This concludes the 10 Famous Thought Experiments That Just Boggle Your Mind series (it only took 10 years). Hope you had a good read!!

Now on to my daily battles!


Tuesday, July 16, 2019

10 Famous Thought Experiments That Just Boggle Your Mind Part 9

Read part 8: The Chinese Room

2. Schrodinger's Cat

Schrödinger’s Cat is a paradox relating to quantum mechanics that was first proposed by the physicist Erwin Schrödinger. He hypothesized a scenario where a cat is sealed inside a box for one hour along with a radioactive element and a vial of deadly poison. There is a 50/50 chance that the radioactive element will decay over the course of the hour. If it does, then a hammer connected to a Geiger counter will trigger, break the vial, release the poison, and kill the cat. Since there is an equal chance that this will or will not happen, Schrödinger argued that before the box is opened the cat is simultaneously both alive and dead.

Schrödinger meant this thought experiment to demonstrate the absurdity of Bohr's Copenhagen interpretation of quantum mechanics, in which a quantum system remains in superposition until it interacts with, or is observed by, the external world. It turned out, however, that successful superposition experiments have since been performed with relatively large (by the standards of quantum physics) objects.



I still remember the day my statistics professor, Dr. Reese, threw a coin to the floor and immediately put his foot on top of it. "The event has already occurred. But what is the outcome?" he asked. That was a great example of the Bayesian world, where reality has already happened, but we still don't know it -- the uncertainty.

Uncertainty is everywhere in AI, machine learning, and robotics. For example, suppose an object detection model detects a person, but only with 50% confidence. That means the detected object may be a person, or not a person at all. And the detected bounding box could be spot on, perfectly bounding the person, or it could be half off, with only a 50% IOU (Intersection over Union) against the true box. So how do you use this information if you have to decide whether there is a visitor at your door? In this case, the event has already happened (there is an object there, person or not). There are also cases where you don't know what the future holds. A self-driving car has no idea what the car next to it will do in the next minute; it could stay in its lane, or it could swerve into your lane and collide with you. A well-designed self-driving car has to be able to deal with this uncertainty: when it notices the car in the next lane starting to weave, maybe it's a good idea to brake a bit and keep a safe distance. But just like with Schrödinger's Cat, before you open the box, anything is possible.
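For the record, the IOU mentioned above is simple to compute. Here is a minimal sketch, with both boxes written as (x1, y1, x2, y2) corner coordinates; the example boxes below are made-up values, not the output of any real detector:

```python
# Sketch: Intersection over Union (IoU) between two axis-aligned boxes,
# each given as (x1, y1, x2, y2). The example coordinates are invented
# purely for illustration.

def iou(box_a, box_b):
    # Corners of the overlapping region (if any)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

predicted = (10, 10, 50, 50)   # hypothetical detector output
truth     = (20, 10, 60, 50)   # hypothetical ground truth
print(round(iou(predicted, truth), 3))  # 0.6
```

A common evaluation recipe is to trust a detection only if both the model's confidence and the IoU against the ground truth clear a threshold (0.5 is a typical, but arbitrary, choice).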

One solution is to obtain more observations that might give you more information and reduce the uncertainty. Maybe if you shake the box and hear the cat meow, you know the cat is likely still alive. Maybe you wait 20 years, and then the cat is almost certainly dead. Actually, you don't even have to wait that long; a cat will probably die if it eats no food for a week. And what if the cat is pregnant when you put it in the box? You would then have multiple cats that are both dead and alive, and you wouldn't even know how many cats you have.
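Incidentally, the "shake the box and listen" trick is exactly a Bayesian update, like Dr. Reese's coin under his foot. A tiny sketch with invented numbers (the meow likelihoods are assumptions for illustration, not measurements):

```python
# Sketch of the "shake the box" idea as a Bayes update. The prior is
# Schrödinger's 50/50; the two likelihoods below are invented numbers.

prior_alive = 0.5
p_meow_if_alive = 0.8    # assumed: a live cat usually meows when shaken
p_meow_if_dead = 0.01    # assumed: a faint noise mistaken for a meow

# We shake the box and hear a meow. Bayes' rule:
evidence = p_meow_if_alive * prior_alive + p_meow_if_dead * (1 - prior_alive)
posterior_alive = p_meow_if_alive * prior_alive / evidence
print(round(posterior_alive, 3))  # 0.988 -- the 50/50 collapses toward "alive"
```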


Let's end this with a picture of cute kittens. Didn't Jeff Dean's team build a deep neural network that learned the concept of a cat all by itself, simply by watching lots and lots of YouTube videos?


Read part 10: Brain in a Vat


Monday, July 15, 2019

10 Famous Thought Experiments That Just Boggle Your Mind Part 8

Read part 7: Monkeys and Typewriters

[Found a 10-year old draft of this post that was never published. So here it is with some new contents added.]

3. The Chinese Room (Turing Test)

Source: Wikicomms
The Chinese Room is a famous thought experiment first proposed in the early 1980s by John Searle, a prominent American philosopher. Searle first assumes, hypothetically, that there exists a computer program that can translate Chinese into English and vice versa. Now imagine a man who speaks only English placed in a sealed room with only a slot in the door. All he has is an English version of the computer program, plus plenty of scratch paper, pencils, erasers, and file cabinets. He receives Chinese characters through the slot, processes them by following the program's instructions, and produces Chinese characters on paper that he slips back out through the slot in the door. Although he doesn't speak a word of Chinese, Searle argues that through this process the man in the room could convince anyone outside that he was a fluent speaker of Chinese.

Searle wanted to answer the question of whether a machine can truly "understand" Chinese, or whether it only simulates the ability to understand Chinese. With this thought experiment, Searle argues that the man in the room does not really understand Chinese; therefore the machine, too, may be only a simulation, without a "mind" that understands the information, even if it can produce responses that give people the impression of human intelligence.
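The room itself is nothing more than rule lookup. A toy sketch, with a fabricated three-entry rule book standing in for Searle's program (real translation would obviously need far more than a lookup table):

```python
# A toy "Chinese Room": the function below shuffles symbols by rule
# lookup alone. The rule book is fabricated for illustration; nothing
# here "understands" the symbols it manipulates.

RULE_BOOK = {
    "你好": "Hello",
    "你好吗": "How are you",
    "谢谢": "Thank you",
}

def man_in_the_room(symbols):
    # Follow the instructions mechanically; no comprehension required.
    return RULE_BOOK.get(symbols, "???")

print(man_in_the_room("你好"))  # looks fluent from outside the door
```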





This thought experiment raises a big question: does something that appears to be intelligent truly possess intelligence? Searle calls a system that actually possesses understanding "Strong AI" and a system that doesn't "Weak AI," and he argues that this thought experiment shows the "Strong AI" claim to be false.

Today, many AI and machine learning algorithms perform tasks for us, from chatbots to self-driving cars. Especially with the popularity of deep neural networks, computers can do an amazing job of recognizing things such as humans, cars, or stop signs. But how does the machine know an object is a stop sign? With the deep learning approach, we humans don't really know. Interestingly, by changing the values of just a few pixels, an object that still looks like a stop sign to humans can be misclassified as a speed limit 45 sign. And that's the danger of using black-box systems, where misclassifying a sign could mean the difference between life and death.
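The few-pixels trick can be illustrated even on a hand-made linear classifier; real attacks such as the fast gradient sign method do the same thing using a network's gradient. Everything below (weights, input, labels) is invented for illustration:

```python
# A toy "adversarial example": nudging each input feature by a tiny
# amount in the worst-case direction flips the predicted label, even
# though the input barely changed. Weights, input, and labels are all
# made up; real attacks (e.g. FGSM) use a network's gradient instead.

w = [0.9, -0.5, 0.3]   # classifier weights (invented)
x = [0.2, 0.4, 0.1]    # an input the model labels "stop sign"

def score(features):
    return sum(wi * fi for wi, fi in zip(w, features))

def predict(features):
    return "stop sign" if score(features) > 0 else "speed limit 45"

eps = 0.05             # tiny per-feature perturbation budget
x_adv = [fi - eps * (1 if wi > 0 else -1) for fi, wi in zip(x, w)]

print(predict(x))      # stop sign       (score = +0.01)
print(predict(x_adv))  # speed limit 45  (score = -0.075)
```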




This thought experiment is an extension of the Turing Test, which deserves a blog post entirely dedicated to it. Turing proposed that if a computer can fool human judges into thinking they are conversing with a real human through a text-only chat program, then the computer can be considered intelligent and has passed the Turing Test.

Based on this simple definition, many programs could be considered to have passed the Turing Test. In 1964, a program named ELIZA out of the MIT AI Lab gained fame by making users believe they were chatting with a psychotherapist, when in fact the program was simply parroting back at patients what they'd just said. Later, in 2007, a Russian chatbot that emulated a woman fooled many lonely Russian men into giving out personal and financial details (granted, these men probably couldn't think straight, especially after excessive vodka consumption). In both cases, the chatbots appeared to be intelligent when they didn't truly understand what the humans had said to them.
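ELIZA-style parroting takes only a few pattern rules. A minimal sketch, nowhere near the real DOCTOR script, with two made-up templates:

```python
import re

# A minimal ELIZA-style "therapist": it mirrors the user's own words
# back inside a canned template. These two rules are a toy subset,
# nothing like the full 1960s DOCTOR script.

REFLECTIONS = {"i": "you", "my": "your", "am": "are"}

def reflect(text):
    # Swap pronouns so the echo sounds like a reply
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    m = re.match(r"i feel (.*)", sentence, re.IGNORECASE)
    if m:
        return "Why do you feel " + reflect(m.group(1)) + "?"
    m = re.match(r"i am (.*)", sentence, re.IGNORECASE)
    if m:
        return "How long have you been " + reflect(m.group(1)) + "?"
    return "Please tell me more."

print(respond("I feel ignored by my advisor"))
# Why do you feel ignored by your advisor?
```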

[Fun fact: You can talk to a Martian version of ELIZA in Google Earth.]





On March 23, 2016, Microsoft released a chatbot named Tay via Twitter. Only 16 hours later, Microsoft had to shut the bot down because it had started posting all kinds of inflammatory tweets. Was Tay really a racist? Of course not! But it sure looked intelligent enough to keep conversations going.


In one of our Innovation Weeks at work, I actually played with a bunch of chatbots, including Cleverbot and Mitsuku (Pandorabots), and integrated them with smart home/smart assistant functions. The Mitsuku chatbot has won the Loebner Prize four times in the last six years, so it is really up there in terms of capabilities. During the live demo, when I asked the bot "Where is my wife?" it actually replied, "Did you check the bathroom?" Very impressive!! Things got a bit weirder when I had a volunteer talk with the bot and the bot started asking him, "How is your father?"

Earlier this year, OpenAI's researchers unveiled GPT-2, a text-generating algorithm that can write news articles if you give it an opening passage.
"[GPT-2] has no other external input, and no prior understanding of what language is, or how it works," Howard tells The Verge. "Yet it can complete extremely complex series of words, including summarizing an article, translating languages, and much more."
This is a perfect example of the Chinese Room scenario. We have AI that can behave as if it is intelligent, yet has no understanding of language. I guess we have to be super careful about what tasks we give such AI agents/algorithms/models. They might be able to fool us most of the time. But when the time comes that they fail, because they are only fancy simulations, we'll be in big trouble.


Read part 9: Schrodinger's Cat



Picture of the Day:

Amazing depiction of how I felt about my thesis during grad school years... (picture credit: http://www.phdcomics.com)


Wednesday, May 22, 2013

10 Famous Thought Experiments That Just Boggle Your Mind Part 7

Read Part 6: Galileo's Gravity Experiment

4. Monkeys and Typewriters

You have probably heard the one about monkeys and typewriters. It is called the "infinite monkey theorem," also known as the "monkeys and typewriters" experiment. The theorem states that "a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare."

Sounds absurd? Counter-intuitive? For sure. But this is all about probability and infinity. The key idea is that even though the probability of such a thing happening in any finite stretch is very, very tiny, it is still greater than zero, so given infinite time the event almost surely occurs.
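To put a number on "very, very tiny," here is a quick back-of-the-envelope calculation for a 26-key typewriter and the six-letter word "banana"; the billion-attempt figure is an arbitrary choice for illustration:

```python
# How tiny is "very, very tiny"? Probability that 6 random keystrokes
# on a 26-letter typewriter spell "banana", and the chance of seeing
# it at least once across many independent 6-key attempts.

p_word = (1 / 26) ** 6            # one attempt spells "banana"
attempts = 10**9                  # a billion attempts (arbitrary)
p_at_least_once = 1 - (1 - p_word) ** attempts

print(f"{p_word:.3e}")            # ~3.2e-09 per attempt
print(f"{p_at_least_once:.3f}")   # already likely after a billion tries
```

Per attempt the odds are about one in 309 million, yet with a billion attempts the word shows up with roughly 96% probability; with infinite attempts, the probability is 1.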

In 2003, science students "tested" the infinite monkey theorem at a zoo in the U.K. by putting a computer and a keyboard in a primate enclosure. Unfortunately, the monkeys never got around to composing any sonnets. According to the researchers, all they managed to produce was five pages consisting almost entirely of the letter "s." Then the lead male began bashing the keyboard with a stone, and other monkeys followed by urinating and defecating on it.



The monkeys were supposed to be "random generators," and there is always the possibility that randomly generated things turn out to be good. In artificial intelligence and machine learning research, genetic algorithms and evolutionary algorithms are important methods for finding good solutions in vast state spaces where an exhaustive search is not possible. Such algorithms do need a little bit of "luck" and some extended time to compute. They are not completely random, though: they use fitness functions to steer toward promising directions. They also tend to follow a greedy approach, where any step that moves toward the goal is a good step (this is, however, not necessarily true for finding the optimal solution). So in a sense, we are systematically generating lots of digital "monkeys" to try to find that wonderful piece of Shakespeare's work. The idea is that, given limited time, we may not achieve Shakespeare, but even something comparable to a third-grade composition would be a great success, because the creation of such work would involve no human at all; it would all come from AI.
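The "fitness function plus greedy selection" recipe can be sketched in a few lines, in the spirit of Richard Dawkins' well-known "weasel program" (the population size and mutation rate below are arbitrary choices):

```python
import random

# Digital monkeys with a fitness function: a sketch in the spirit of
# Dawkins' "weasel program". Pure random typing would take forever;
# mutating the current best string and keeping the fittest candidate
# each generation converges quickly.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Number of characters already matching the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small chance of being retyped at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)  # a random monkey
generation = 0
while best != TARGET:
    generation += 1
    # 100 mutated offspring; keep the fittest (greedy selection).
    # The parent is kept too, so fitness never decreases.
    offspring = [mutate(best) for _ in range(100)]
    best = max(offspring + [best], key=fitness)

print(f"reached the target in {generation} generations")
```

A truly random monkey would need on the order of 27^28 keystrokes to hit this 28-character phrase; with a fitness function steering the search, it takes only a few hundred generations.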

Interestingly enough, some songwriters and composers are drawn to the idea of using computer software to randomly generate small pieces of music, in the hope that these random creations might give them ideas or inspiration for creating their own quality work. Of course, the software-generated music is first filtered using AI to get rid of most of the obviously meaningless or bad sequences.


It is worth mentioning that one of the projects I've always wanted to complete is a Rap Lyric Generator. The idea is that, given a melody (e.g., Twinkle Twinkle Little Star) and a topic (e.g., robots are awesome), the program would automatically find words and sentences on the Internet that match the given topic and rhyme with each other, then automatically generate the lyrics and sing them, rap style, autonomously. Can you see that this also uses the idea of "digital monkeys" and "invisible typewriters"? However, just like many of my other great ideas, someone will probably beat me to it before I ever find time to work on it.


Read Part 8: The Chinese Room


Video of the Day:

Can monkeys make good coffee?



Sunday, December 02, 2012

10 Famous Thought Experiments That Just Boggle Your Mind Part 6

Read Part 5: The Ship of Theseus

5. Galileo's Gravity Experiment

In order to refute Aristotle’s claim that the speed of a falling object is dictated by its mass, Galileo devised a simple thought experiment:

According to Aristotelian logic, if a light object and a heavy object were tied together and dropped off a tower, the heavier object would fall faster, and the rope between the two would become taut. This would allow the lighter object to create drag and slow the heavier one down. But Galileo reasoned that the two objects together weigh more than either one by itself, so the combined system should fall faster than the heavy object alone. This contradiction proved that Aristotle's hypothesis was wrong.
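Galileo's reductio can be written out in symbols. Assume, with Aristotle, that fall speed v increases with mass m; then tying a light body to a heavy one yields two incompatible conclusions:

```latex
% Aristotle's assumption: m_1 < m_2 implies v(m_1) < v(m_2).
%
% Tie the two bodies together. The slow body drags on the fast one,
% yet the tied pair is a single body of mass m_1 + m_2 > m_2:
\[
  v(m_1) < v_{\text{tied}} < v(m_2)
  \quad\text{yet}\quad
  v_{\text{tied}} = v(m_1 + m_2) > v(m_2).
\]
```

Both conclusions follow from the same assumption, so the assumption must be false: fall speed cannot depend on mass.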

One of the most famous stories about Galileo is that he once dropped two metal balls off the Leaning Tower of Pisa to prove that heavier objects do not fall faster than lighter ones. In actuality, this story is probably just a legend. However, an astronaut did perform this famous Galilean test with a hammer and a feather in the vacuum and low gravity on the surface of the Moon (see video below).



So what can I learn from this thought experiment? As a computer science researcher, it is often easier for me to just sit down and code up experiments. But sometimes it is a good idea to stand up and write the math formulations on the whiteboard instead. Simplifying the math or jotting down some proofs can dramatically reduce the amount of coding I have to do and also improve the performance of my algorithms. Of course, it would be wonderful if I could do all this in my head, but I am no Galileo, and I also have a severe short-term memory deficit -- a strong sign that I am almost ready to graduate!

Read part 7: Monkeys and Typewriters



Video of the Day:

Air Swimmers -- a fun "robot" toy for parties! You can get them here.



Thursday, May 07, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 5

Read Part 4: Einstein's Light Beam

6. The Ship of Theseus

One of the oldest of all thought experiments is the paradox known as the Ship of Theseus, which originated in the writings of Plutarch. It describes a ship that remained seaworthy for hundreds of years thanks to constant repairs and replacement parts. As soon as one plank became old and rotten, it would be replaced, and so on, until no working part of the ship was original to it. The question is whether this end product is still the same Ship of Theseus, or something completely new and different. If it's not, at what point did it stop being the same ship? The philosopher Thomas Hobbes later took the problem even further: if one were to take all the old parts removed from the Ship of Theseus and build a new ship from them, which of the two vessels is the real Ship of Theseus? For philosophers, the story of the Ship of Theseus is a means of exploring the nature of identity, specifically the question of whether objects are more than just the sum of their parts.


I couldn't help but think of the story about Steve Jobs and his Mercedes-Benz. Steve exploited a hole in California law and roamed Silicon Valley in a Mercedes without license plates:

It turns out there's a provision in California regulations that give one six months to get license plates for a new car, and Jobs took advantage of it. Yes, he leased a silver Mercedes SL55 AMG, said Callas -- and every six months he traded it in for a new one.
So to Steve, the car was still the Car of Jobs, but to the California DMV, it was a different car each time.

It might not matter much when the thing we are talking about is a physical object like Steve Jobs' car. But what if it is an intangible object, for example, a song? If we shift the pitch of all notes in the song up or down, is it still the same song? If you think the answer is yes, then what if we shift the pitch into a range where humans can no longer distinguish the notes?

Now let's think about robots. U.S. soldiers in Afghanistan have been using remotely controlled robots to detonate roadside bombs. As unintelligent as these robots are, many soldiers have developed a close affection for them. When damaged robots are sent in for repair, the normal procedure is to simply send back a replacement unit, because that is the most time- and cost-efficient option. However, many soldiers have demanded that the exact same unit be repaired and sent back, because they had assigned a personal identity to the robot as a teammate and friend.

Let's take this one step further. With robots that are more intelligent, especially ones that learn from past experience, it is still possible for us to duplicate the programs and memory of a robot (with the exception of Johnny 5) and load them into an identical robot. Now we run into a real identity crisis -- both for the robots and for the users. Both robots will think they are the original, and to the user, both are the original robot with the same memory, same logic, and same appearance. (Why do I keep thinking of the movie The 6th Day?) What problems will this create?

I also can't help but think of all the protagonists in reincarnation novels (like the Joy of Life story I am translating, for example). Are these people still the same people? Probably not! But why not?

That's enough philosophical discussion for today! Have a good day!


Read part 6: Galileo's Gravity Experiment


Video of the Day:

Enjoy this beautiful song The Velocity of Love, and the beautiful video while you struggle with philosophical thought experiments!



Tuesday, May 05, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 4

Read Part 3: The Ticking Time Bomb

7. Einstein's Light Beam

According to his book Autobiographical Notes, Albert Einstein’s famous work on special relativity was inspired by a thought experiment he conducted when he was only 16 years old. He thought about chasing a beam of light as it traveled through space, and reasoned that if he were able to move next to it at the speed of light, he should be able to observe the light frozen in space as “an electromagnetic field at rest though spatially oscillating.” For Einstein, this thought experiment proved that for his imaginary observer “everything would have to happen according to the same laws as for an observer who, relative to the Earth, was at rest.”

The video below gives a good example of special relativity.



I can't think of how this relates to AI, but the lesson here is that simple thought experiments can lead to extraordinary findings. Therefore, I'll keep running all kinds of thought experiments in my head, like how the properties of intelligence should remain the same regardless of which species is displaying the intelligent behavior.

Read Part 5: The Ship of Theseus


 

I love staring at the starry sky, because that lets me stare right into the past.







Sunday, May 03, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 3

Read Part 2: The Gettier Problem


8. The Ticking Time Bomb

Imagine that a bomb or other weapon of mass destruction is hidden in your city, and the timer on it will soon strike zero. You have in your custody a man with knowledge of where the device is planted. Do you resort to torture in order to get him to give up the information?

Sounds like an action movie? It's just another thought experiment, called the "ticking time bomb." Like the trolley problem, the ticking time bomb scenario is an ethical problem that forces one to choose between two morally questionable acts.

Thanks to many movies and TV shows, the ticking time bomb scenario is one of the most discussed thought experiments. The US government has laws against torturing prisoners (although we all know how well those laws are followed -- think Abu Ghraib), but would breaking the law be justified if a large number of people's lives could be saved? A British news article extended the scenario and asked whether one would be willing to torture the man's wife and children as a means of extracting the information from him. Now that sounds really scary!

According to Wikipedia:
In September 2002, when reviewing Alan Dershowitz's book, Why Terrorism Works: Understanding the Threat, Responding to the Challenge, Richard Posner, a judge of the United States Court of Appeals for the Seventh Circuit, wrote in The New Republic, "If torture is the only means of obtaining the information necessary to prevent the detonation of a nuclear bomb in Times Square, torture should be used--and will be used--to obtain the information.... No one who doubts that this is the case should be in a position of responsibility."

Morgan from the Center for American Free Thoughts gave an interesting discussion in the video below (sorry, I can't find a commercial-free version, so this will do).


I don't know which side I should support. It is quite a dilemma. However, I am one hundred percent sure that we shouldn't allow the government to casually slap the label of "terrorist" on innocent citizens and then torture them or take away their basic rights and claim it is "justified."

Read Part 4: Einstein's Light Beam


Picture of the Day:

This spectacular photo looks like it came out of a sci-fi movie. It was actually taken by a Russian girl who snuck into a Russian military rocket factory.




Friday, May 01, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 2

Read Part 1: The Trolley Problem


9. The Gettier Problem (The Cow in the field)

One of the major thought experiments in epistemology (the field of philosophy that deals with knowledge) is what is known as “The Cow in the Field.” It concerns a farmer who is worried his prize cow has wandered off. When the milkman comes to the farm, he tells the farmer not to worry, because he’s seen that the cow is in a nearby field. Though he’s nearly sure the man is right, the farmer takes a look for himself, sees the familiar black and white shape of his cow, and is satisfied that he knows the cow is there. Later on, the milkman drops by the field to double-check. The cow is indeed there, but it’s hidden in a grove of trees. There is also a large sheet of black and white paper caught in a tree, and it is obvious that the farmer mistook it for his cow. The question, then: even though the cow was in the field, was the farmer correct when he said he knew it was there?


The Cow in the Field was first used by Edmund Gettier as a criticism of the popular definition of knowledge as “justified true belief”—that is, that something becomes knowledge when a person believes it; it is factually true; and they have a verifiable justification for their belief. In the experiment, the farmer’s belief that the cow was there was justified by the testimony of the milkman and his own verification of a black and white object sitting in the field. It also happened to be true, as the milkman later confirmed. But despite all this, the farmer did not truly know the cow was there, because his reasoning for believing it turned out to be based on false premises. Gettier used this experiment, along with a few other examples, as proof of his argument that the definition of knowledge as justified true belief needed to be amended. The video below shows another example of the Gettier Problem.


A robot or an AI agent can acquire knowledge in several distinct ways. The easiest one (at least for the programmer) is to memorize facts. For example: the capital of the United States is Washington D.C., the earth is a sphere, and a triangle has three sides. These are beliefs we forcefully inject into the agent's brain, and the agent might blindly take them on faith. AI agents are great at storing facts and can store them in large quantities. This is similar (roughly) to us humans learning in elementary school.

Another way of acquiring knowledge is to learn rules and then apply them to different problems. For example: don't run into an obstacle. Abstracting and representing rules can be quite challenging for designers, which is why robots today don't have many rules programmed into them. Having too many rules can also exponentially increase the computational complexity and cause internal conflicts, unless the robot is designed to ignore rules at times or only apply rules that help optimize or maximize certain utilities, like we humans do at our convenience. However, once the rules are implemented, robots are great at executing them (as long as the rules are clearly defined). For example, we already have AI agents that can solve or generate proofs for very complicated math problems, even better than their human counterparts. This method is similar (roughly) to us humans learning in middle school. Learning by demonstration probably falls under this category as well.
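As a rough sketch of the rule idea above: each rule can be stored as a condition, an action, and a priority, and the agent simply fires the highest-priority rule that matches its current state. Everything here (the rule names, the state fields, the priorities) is made up for illustration, not taken from any real robot.

```python
# A minimal sketch of rule-based knowledge: each rule is a condition, an
# action, and a priority; the agent fires the highest-priority matching rule.

def make_rule(condition, action, priority):
    return {"condition": condition, "action": action, "priority": priority}

rules = [
    make_rule(lambda s: s["obstacle_ahead"], "turn_left", priority=10),
    make_rule(lambda s: s["battery_low"], "return_to_base", priority=5),
    make_rule(lambda s: True, "move_forward", priority=0),  # default rule
]

def decide(state):
    # Collect all rules whose condition matches, then apply the one with
    # the highest priority -- a crude way to avoid internal conflicts.
    applicable = [r for r in rules if r["condition"](state)]
    return max(applicable, key=lambda r: r["priority"])["action"]

print(decide({"obstacle_ahead": True, "battery_low": True}))    # turn_left
print(decide({"obstacle_ahead": False, "battery_low": False}))  # move_forward
```

The priority scheme is the "only apply rules that help optimize certain utilities" trick in its simplest form: when two rules conflict, the agent ignores the lower-priority one.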

A third way of acquiring knowledge for robots and AI agents is traditional machine learning using existing data sets. Whether it is supervised learning (where records in a data set are labeled by humans) or unsupervised learning (no labels), the basic idea is that the agent tries to "rationalize" the data sets, find some consistent properties, or "insights", in them, and then apply those to new information (generalize). This is similar (roughly) to us humans learning in college, where we are exposed to a lot of facts but have to develop a general sense of them and conclude with our own, newly identified rules. Agents are normally bounded by "features" identified by the humans who provided the data sets; however, a few smart agents can try to come up with "features" of their own, which falls under the name of "Active Learning".
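Here is a minimal, hypothetical illustration of the supervised case: the agent generalizes from a human-labeled data set to a brand-new example using nothing more than proximity (a 1-nearest-neighbor rule). The features and labels are invented purely for the example.

```python
# Toy supervised learning: generalize from labeled examples to a new input
# by finding the closest known example (1-nearest-neighbor).

def nearest_neighbor(train, query):
    # train: list of (feature_vector, label); query: feature_vector
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], query))[1]

# Hypothetical data set: (height_m, weight_kg) pairs labeled by a human.
data = [((1.2, 30), "child"), ((1.3, 35), "child"),
        ((1.8, 80), "adult"), ((1.7, 70), "adult")]

print(nearest_neighbor(data, (1.25, 32)))  # child -- generalizes to new data
```

The "insight" the agent extracted here is simply that nearby points share labels; real learners extract far richer structure, but the generalization step is the same in spirit.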

Yet another way for these artificial beings to acquire knowledge is through Bayesian networks (logical nodes interconnected like a neural network). Given that a good Bayesian network exists (or one that is pretty good at self-evolving), the agent first has some a priori beliefs about things (e.g., the sky is blue and grass is green), acquired either through the previously mentioned methods or simply non-informative (e.g., a woman is probably just as lazy as a man). Then, through observations, the agent learns from experience and obtains a posteriori knowledge. The new knowledge might completely contradict the a priori beliefs, in which case the agent modifies its beliefs about the existing rules, previous facts, the world, and everything in the universe. You probably already see where I am going. This is similar (roughly) to us human beings learning in grad school.
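The belief-revision step can be sketched with plain Bayes' rule. In this toy example (all numbers invented), the agent starts 90% sure of an a priori belief, and just a few contrary observations flip the posterior to well below 50%:

```python
from fractions import Fraction

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E). Exact fractions keep the
# arithmetic transparent. All probabilities here are illustrative.

def update(prior, likelihood_if_true, likelihood_if_false):
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

belief = Fraction(9, 10)   # a priori: 90% sure "the sky is blue today"
for _ in range(3):         # three observations of a grey sky
    # A grey-sky observation is unlikely if the belief is true (1/10)
    # but quite likely if it is false (8/10).
    belief = update(belief, Fraction(1, 10), Fraction(8, 10))

print(float(belief))       # the posterior has dropped far below 50%
```

After the first observation the belief is already down to 9/17, and after three it is 9/521: the a priori belief has been overturned by experience, which is exactly the "grad school" effect described above.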

Not to ridicule things, but by the time the agent becomes really imaginative and starts to question everything simply based on reports from lower-level agents (hmm... grad school robots?), we make it a professor. (I hope my advisor is not reading this...)

Anyway, back to the original topic: IMHO, we can't always rely on justified true beliefs, but isn't at least trying to justify a belief better than blind belief? Of course, when it comes to authority, my robots don't have to justify their beliefs, because to them, I am God!

Read Part 3: The Ticking Time Bomb


Video of the Day:

Great examples of illusions. So we can't always trust what we see with our own eyes. But does this mean we shouldn't trust anything we see?


BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Friday, April 24, 2009

10 Famous Thought Experiments That Just Boggle Your Mind Part 1

I ran into an interesting article in a forum (original in Chinese) that discussed 10 very famous thought experiments in the fields of philosophy, ethics, and psychology. Since I am getting a Doctor of Philosophy degree (hopefully), and also because I strongly believe these types of questions and experiments are very relevant to research in artificial intelligence, I thought I'd share them with you together with my thoughts on the subject. Hope you enjoy!

10. The Trolley Problem

The trolley problem is a well-known thought experiment in ethics, first introduced by Philippa Foot, a British philosopher. Trolley is the British term for a tram. The problem goes like this:

A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?

A common answer takes the utilitarian approach, where flipping the switch becomes the obvious option because saving five lives results in higher utility than saving just one. But critics of utilitarianism believe that flipping the switch constitutes participation in the moral wrong, making one partially responsible for the death, when otherwise the mad philosopher would be the sole culprit. An alternative view holds that inactivity under such circumstances is also unethical. The bottom line: whatever you do, it is unethical. You are doomed either way.

It is reasonable to guess that your choice might vary if the single person happens to be your kid and the group of five consists of four complete strangers plus your mother-in-law. In that case, you are simply assigning different utility values to different people (with the possibility of a negative utility). You no longer assume all people are equal. And if the group of five also included two other kids of yours, you would simply assign the utility values, do the math, and then make the "logical" decision (man, I am so cruel here!). This reminded me of a famous darn question people always get asked: if both your mother and your wife fell into a river and neither one knew how to swim, whom should you save first? If you are ever asked this question, here's one answer you could use:
I'll jump into the river and drown myself, and we'll all go to heaven together. Now are you satisfied?
When it comes to artificial intelligence, a lot of the time the choice is made based on a utility computation. Maybe the utility is computed using some fancy statistical functions. More advanced algorithms might take into consideration probabilities or utility functions derived from past observations. Even more advanced algorithms might allow the agent to dynamically change or evolve the utility functions as time progresses -- a sense of learning. The agent will simply compute the utility values following whatever formulas it comes up with and then choose the option that results in the highest utility. This is why AI agents and robots are normally considered very logical and, at the same time, very inhuman. It would be a long time before an AI agent would find itself trapped in this moral dilemma. (Remember the big computer in the movie War Games? It eventually figured out that the best winning strategy for the game of Tic-tac-toe was not to play the game at all.)
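The utility computation described above can be sketched in a few lines. This is only an illustration with invented numbers: every person gets an equal utility of 1.0 and survival is certain or impossible under each option; a "personalized" agent would simply plug in different utility values per person.

```python
# Expected-utility decision making for the trolley dilemma, as a purely
# "logical" agent would compute it. All probabilities/utilities are invented.

def expected_utility(option):
    # option: list of (probability this person survives, utility of their life)
    return sum(p_saved * utility for p_saved, utility in option)

do_nothing  = [(0.0, 1.0)] * 5 + [(1.0, 1.0)]   # five die, one lives
flip_switch = [(1.0, 1.0)] * 5 + [(0.0, 1.0)]   # five live, one dies

options = {"do_nothing": do_nothing, "flip_switch": flip_switch}
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # flip_switch -- the "logical" choice
```

Change the utility of the single person to, say, 10.0 (your kid) and the same formula flips the answer, which is exactly the cold-blooded math the paragraph above is poking at.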

So how would you design an AI agent or robot to deal with morality, especially when you are giving it a weapon and granting it permission to fire that weapon? Even we humans don't have clear answers in situations like the Trolley Problem. Can we expect or require the agent or robot to do better than us? Unfortunately, no one knows the right answer at the present time; we can only learn from our mistakes. Let's hope these mistakes are not disastrous and are recoverable.




[Update on 8/2/2019]

Ten years have passed since I first posted this blog article. Today, many "self-driving" cars are already running on our roads (have you noticed those napping drivers in Teslas right next to you?), and there are only more to come, with VCs and automakers pouring money into this field. Now the Trolley Problem is becoming as real as it can be. When a self-driving car is faced with the dilemma of choosing between killing the person on the left or the four persons on the right, or even worse, when it needs to decide whether it should sacrifice you, the passenger, in order to save four pedestrians, how would you feel about its logical choice? What if you are not the passenger but the pedestrian instead? Don't ask me. I don't have an answer.

Read Part 2: The Gettier Problem


Picture of the Day:

You can go here to see more animated portraits like this one.

BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Saturday, April 18, 2009

AI Robot Related Conferences and Journals For My Research (Part 6)

AI Robot Related Conferences and Journals For My Research Part 5 

I have discussed several top conferences related to my research. Now let's move on to top symposiums. These symposiums are like workshops, where new ideas are presented and discussed to get a reality check from fellow researchers and also to brainstorm. However, they normally last for several days and give the participants plenty of time to collaborate and discuss.

Top Symposiums
==================================================================

RO-MAN -- IEEE International Symposium on Robots and Human Interactive Communications

The RO-MAN workshop/symposium addresses fundamental issues of the co-existence of humans and intelligent machines such as robots, along with recent advances in technological as well as psychological research on every aspect of interactive communication and collaboration between robots and humans. Originally founded by several Japanese researchers in 1992, the symposium has grown to attract much attention from researchers around the world. For example, the last RO-MAN included papers from 17 different countries. Solicited subjects cover a wide range, including (but not limited to) socially interactive robots, entertainment robots, human-assisting robots, human-training robots, education robots, and robotic arts.

The RO-MAN symposium/workshop is a two-track event held annually, and therefore relatively small, drawing about 70 to 280 participants. Accepted papers are mostly six pages long. I have never attended the RO-MAN symposium, and I couldn't find any information on its acceptance rate. I would guess the acceptance rate is much lower compared to the top conferences I blogged about before.

Since the last RO-MAN symposium just happened last month, the location for the next RO-MAN symposium, RO-MAN 2012 (the 21st), is still unknown at this point.
Conference Dates: July 31-August 3, 2012 (roughly)
Submission Deadline: March 1, 2012 (roughly)



AAAI Spring/Fall Symposium Series

The AAAI Spring/Fall Symposia are great places to meet peer researchers in a more intimate setting and a relaxed atmosphere, to share ideas and learn from each other's artificial intelligence research. The topics change each year depending on the symposium proposals received. Multiple symposiums on various topics are held simultaneously, and participants are expected to attend a single symposium throughout the series. Besides the participants selected by the program committee (authors of accepted papers), only a limited number of people are allowed to register for each symposium, on a first-come, first-served basis, due to limited seats (the symposium series are actually quite popular).

The Fall Symposium series is usually held on the east coast in Arlington, Virginia, during late October or early November.

Each symposium will have a distinct research interest. For example, the AAAI 2011 Fall Symposia have the following seven topics:
  •     Advances in Cognitive Systems
  •     Building Representations of Common Ground with Intelligent Agents
  •     Complex Adaptive Systems: Energy, Information and Intelligence
  •     Multiagent Coordination under Uncertainty
  •     Open Government Knowledge: AI Opportunities and Challenges
  •     Question Generation
  •     Robot-Human Teamwork in Dynamic Adverse Environment
The last one, about Robot-Human Teamwork, is the one I am interested in. Sometimes you can find the accepted papers at the symposium-specific web sites, but they really want you to buy the technical report for $35.
 
The next AAAI Fall Symposia AAAI 2011 Fall Symposia will be held at Arlington, Virginia, USA.
Symposia Dates: November 4-6, 2011
Submission Deadline: May 20, 2011


The next AAAI Fall Symposia you can submit a paper to is AAAI 2012 Fall Symposia
Symposia Dates: November 4-6, 2012 (Roughly)
Submission Deadline: May 20, 2012 (Roughly)



The Spring Symposium series is typically held during spring break (generally in March) on the west coast at Stanford. This one is actually my favorite because Stanford University is not that far from Utah, and I also lived in the neighborhood for three months.

The next AAAI Spring Symposia include the following six topics:
  •     AI, The Fundamental Social Aggregation Challenge, and the Autonomy of Hybrid Agent Groups
  •     Designing Intelligent Robots: Reintegrating AI
  •     Game Theory for Security, Sustainability and Health
  •     Intelligent Web Services Meet Social Computing
  •     Self-Tracking and Collective Intelligence for Personal Wellness
  •     Wisdom of the Crowd
I have never attended either the Spring or Fall Symposia, but I was a co-author of a paper that got accepted at the AAAI 2009 Spring Symposium under the topic Agents that Learn from Human Teachers. It would be great if I could publish here again in the near future. It's always fun to visit Silicon Valley!

The next AAAI Spring Symposium, AAAI 2012 Spring Symposia, will be held at Stanford University, Palo Alto, California, USA.
Symposium Dates: March 26-28, 2012
Submission Deadline: October 7, 2011



A good friend of mine, Janet, passed away this morning from acute leukemia. Wish her peace in heaven! Lesson learned: complete all those projects you want to do before a doctor tells you that you only have 5 days to live. Let's see, I need to finish my PhD, finish translating SPW, finish building a robot, and make up all the blog posts. Man! I better get working!




Wednesday, April 15, 2009

AI Robot Related Conferences and Journals For My Research (Part 5)

AI Robot Related Conferences and Journals For My Research Part 4

Top Conferences
==================================================================

BRiMS -- Behavior Representation in Modeling and Simulation

BRiMS is a conference for modeling and simulation research scientists, engineers, and technical communities across disciplines to meet, share ideas, identify capability gaps, discuss cutting-edge research directions, highlight promising technologies, and showcase state-of-the-art applications. It focuses on Human Behavior Representation (HBR)-based technologies and practices, and bridges the gap across the disciplines of cognitive and computational modeling, sociocultural modeling, graphical and statistical modeling, network science, computer science, artificial intelligence, and engineering.

BRiMS is mainly funded by military research agencies such as Air Force Research Laboratory (AFRL), Army Research Laboratory (ARL), Defense Advanced Research Projects Agency (DARPA), and Office of Naval Research (ONR). Every year, there's always a heavy presence of researchers from these military research labs. Therefore, if you plan to work for a military research lab, this is a great venue to network and meet potential employers.

BRiMS is a single-track, and hence relatively small, conference. It does have workshops and tutorials the day before the conference. Interestingly, every other year the conference is held at Sundance, Utah, which is only 20 minutes from where I live (saving my advisor a bunch of money on airfare and hotel). In alternate years, a location in the eastern US is selected as the hosting venue. I have been fortunate to publish at this conference in the past.

The next BRiMS conference, BRiMS 2012 (the 21st), will be held at Amelia Island Plantation, Amelia Island, Florida, USA.
Conference Dates: March 12-15, 2012
Submission Deadline: December 10, 2011 (roughly)



DIS -- The ACM Conference on Designing Interactive Systems

DIS is a multi-track conference held biennially. It is the premier international arena where designers, artists, psychologists, user experience researchers, systems engineers, and many more come together to debate and shape the future of interactive systems design and practice. In particular, it addresses design as an integrated activity spanning technical, social, cognitive, organisational, and cultural factors. As described by Interaction-Design.org, "DIS conferences are usually attended by people like; user experience designers seeking to go beyond usability; interaction designers developing interactive objects and installations; usability people who want to improve experience for ‘users’; web designers who want to create better Web sites; information architects; user interface designers working across the board, including desktop systems, mobile devices, and interactive products; cognitive and social scientists; human factors folks; games designers involved with characters, narrative and game play; visual designers concerned with information design and the aesthetics of their systems; ethnographers and customer service and many more."

DIS is a prestigious conference, which makes competition among submissions high. For example, the acceptance rate for DIS 2008 was 34%. Because interactive design can be applied in many areas, DIS is naturally an interdisciplinary conference, encompassing all issues related to the design and deployment of interactive systems.

The theme of the upcoming DIS 2012 focuses on what happens when interactive systems are used "in the wild". This seems to be a perfect fit for research topics such as using a UAV (Unmanned Aerial Vehicle) in wilderness search and rescue.

The next DIS conference DIS 2012 will be held at Newcastle, UK.
Conference Dates: June 11-15, 2012
Submission Deadline: January 20, 2012


AI Robot Related Conferences and Journals For My Research Part 6





Why do I always forget to eat lunch? This is not the right way to lose weight!



Sunday, April 12, 2009

The challenges of evaluating the search efficiency of a UAV's coverage path (1)

Imagine that you have won a shopping spree sweepstakes at your local Walmart. Assume that you know the layout of the store pretty well and have a common-sense idea of how much general merchandise is worth. For the next 2 minutes, anything you grab is yours to keep for free. What would you do? Would you just start grabbing everything close to you, such as orange juice, eggs, sausages, and breakfast cereal, or would you dash straight to that 60-inch LCD TV (or that 5-carat diamond ring, if you are a woman) at the furthest corner of the store? What is the best path you should take to maximize the total monetary value of the shopping spree?

Now, just to make it a little more complicated: what if you only have 1 minute to grab things? What if you are asked to start from the cashier's lane and must return to it before your time runs out? What if your shopping cart has been tinkered with and doesn't roll backward? What if getting that 5-carat diamond ring requires a Walmart employee to unlock three things before you can get to it? What if you forgot to bring your glasses and everything looks totally blurry? Looks like the wonderful dream of winning the sweepstakes has just turned into a nightmare! "Why are you making it so hard for me?" you moan. And I shrug and tell you that those are all the challenges I face when I plan a coverage path for a UAV in support of Wilderness Search and Rescue operations.

The benefit of adding a UAV to a Wilderness Search and Rescue team is that you now have an eye in the sky: you can cover large areas quickly and also reach areas that are difficult to access on foot. When planning a coverage path for a UAV with a gimbaled camera, what we really care about is the path of the camera footprint. In our path-planning approach, we use a 4-connected grid to represent the probability distribution of where the missing person is likely to be found. Even though a fixed-wing UAV might need to roll and follow a curved path when it turns, the gimbaled camera can automatically adjust itself to always point straight down, so the path of the camera footprint can include sharp 90-degree turns. As the camera footprint covers an area, it "vacuums up" the probability within that area, and obviously, the more probability we can vacuum up along the path, the more likely the UAV is to spot the missing person.
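The "vacuuming" idea can be sketched on a small made-up grid: each cell holds some probability mass, and the footprint collects a cell's mass the first time it passes over it (assuming perfect detection for now). The grid values and path below are invented for illustration.

```python
# "Vacuuming up" probability along a camera-footprint path on a grid.
# Each cell maps (x, y) -> probability mass; values here are illustrative.

grid = {
    (0, 0): 0.05, (1, 0): 0.10, (2, 0): 0.05,
    (0, 1): 0.10, (1, 1): 0.40, (2, 1): 0.10,
    (0, 2): 0.05, (1, 2): 0.10, (2, 2): 0.05,
}

def accumulate(grid, path):
    remaining = dict(grid)
    collected = 0.0
    for cell in path:             # 4-connected moves; sharp 90-degree turns OK
        collected += remaining.get(cell, 0.0)
        remaining[cell] = 0.0     # revisiting an emptied cell gains nothing
    return collected

path = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(round(accumulate(grid, path), 2))  # 0.05 + 0.10 + 0.40 + 0.10 = 0.65
```

A path-planning algorithm is then just a search for the path (of a given length) that makes `accumulate` as large as possible.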

When we evaluate how good a UAV path is, we focus on two factors: flight time and the amount of probability accumulated along the path. If a desired flight time is set (perhaps because the battery on the UAV only lasts for one hour), then the more probability we can accumulate, the better the path. If a desired amount of probability is expected (e.g., an 80% chance of spotting the person from the UAV), then the sooner we can reach that goal, the better the path. My research focuses only on the first case, where we plan a path for a given flight duration.

So how good or efficient is the path generated by our algorithm? A natural way to evaluate this is to compare the amount of probability the UAV accumulates along our path against what it could accumulate along the best possible path, and compute a percentage. A path with 50% efficiency means the path is half as good as the best we can do. The irony here is that we don't know what the best possible (optimal) path is, and searching for it might take a very long time (years), especially when the probability distribution is complex (as it is in real search and rescue scenarios), which defeats the purpose of finding the missing person quickly. Many factors can also affect how the optimal path turns out and change the total amount of probability accumulated if the UAV follows that path. Here I'll list the ones we must deal with.
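One possible way around not knowing the optimal path is to compare against an upper bound instead: no path visiting k cells can collect more than the sum of the k largest cell probabilities (that bound ignores connectivity entirely), so the ratio against it is a guaranteed lower bound on the true efficiency. A sketch of the idea, with an invented grid:

```python
# Lower-bounding path efficiency without knowing the optimal path: no
# length-k path can beat the sum of the k largest cell probabilities.

def efficiency_lower_bound(grid, accumulated, path_length):
    upper_bound = sum(sorted(grid.values(), reverse=True)[:path_length])
    return accumulated / upper_bound

# Illustrative grid: (x, y) -> probability mass.
grid = {(0, 0): 0.05, (1, 0): 0.10, (1, 1): 0.40,
        (0, 1): 0.10, (2, 1): 0.10, (2, 2): 0.05}

# Suppose a 3-cell path collected 0.45 probability mass.
print(round(efficiency_lower_bound(grid, 0.45, 3), 4))  # 0.45 / 0.60 = 0.75
```

The real efficiency can only be higher than this bound reports, since the unconstrained optimum it divides by may be unreachable by any connected path.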

1. Desired flight time

If the search region is very small, the UAV has a 100% probability of detection (say we are searching the Bonneville Salt Flats), and you have plenty of UAV flight time to completely comb the area many times, then life gets easier, and you can be pretty sure that you will spot the missing person if the UAV follows a lawnmower or Zamboni flight pattern (assuming the missing person stays in a fixed location). If the search region is very large and you have a very short UAV flight time, then maybe there are areas you simply can never reach within the short flight duration. Remember the 2-minute shopping spree vs. the 1-minute one?

2. Starting position and possibly the ending position

If the UAV starts from the middle of the search region, the optimal path will almost certainly not be the same as the one when the UAV starts from the edge. And if the UAV must return to a desired ending position (perhaps for easy retrieval or to return to the command base), time must be allocated for the return flight. Ideally, while flying back, the UAV should still try to cover areas with high probabilities. In the shopping spree example, if you are required to start from the cashier's lane and also return there before time runs out, you probably still want to grab things on your way back, perhaps choosing a different route.

3. Type of UAV (fixed-wing vs. copter)

A fixed-wing UAV must keep moving through the air in order to generate enough lift to stay airborne. It also cannot fly backward. A copter-type UAV doesn't have these restrictions: it can hover over a spot or fly backward anytime it wants. Therefore, the type of UAV we use can really change what the optimal path looks like. Remember the tinkered-with shopping cart in your shopping experience?


4. Task difficulty (probability of detection)

Although the UAV provides a bird's-eye view of the search area, sometimes we look but we don't see. Maybe the dense vegetation makes spotting the missing person very difficult; maybe the weather is really bad, which lowers visibility; maybe the missing person is wearing a green jacket that blends in with the surroundings. This means the probability of detection might vary from case to case and search area to search area. When the probability of detection is low, maybe we should send the UAV to cover the area multiple times so we can search better. This factor really adds complexity to evaluating a UAV path's efficiency. When it takes 30 seconds for the Walmart employee to unlock everything and get that 5-carat diamond ring for you, is it worth the wait? Or does grabbing all those unlocked watches at $50 apiece in the neighboring section sound like a better idea now?
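The effect of imperfect detection on repeated coverage can be captured with a one-liner: with a per-pass detection probability p, k passes over a cell leave (1 - p)^k of its mass undetected. The numbers below are purely illustrative.

```python
# Probability mass actually detected in a cell after several passes, given a
# per-pass probability of detection. All values are illustrative.

def mass_detected(cell_probability, p_detect, passes):
    # After `passes` looks, only (1 - p_detect)**passes of the mass is missed.
    return cell_probability * (1 - (1 - p_detect) ** passes)

# Dense vegetation, p = 0.5: one pass vs. three passes over a 0.40 cell.
print(round(mass_detected(0.40, 0.5, 1), 4))  # 0.2
print(round(mass_detected(0.40, 0.5, 3), 4))  # 0.35
```

This is why a low-visibility, high-probability area may be worth several passes: each revisit still vacuums up a diminishing but nonzero slice of the remaining mass.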

Given all these complicating factors, I still need to find out how well my path-planning algorithm performs in different search scenarios. In the following blog posts in this series, I'll go through each factor and discuss how we can reasonably evaluate the efficiency of a search path without knowing the optimal solution.


Video of the Day:

A video my friend Bev made when I showed her around the BYU campus!

Saturday, April 11, 2009

AI Robot Related Conferences and Journals For My Research (Part 4)

AI Robot Related Conferences and Journals For My Research Part 3

Top Conferences
==================================================================

RSS -- Robotics: Science and Systems Conference

RSS is a single-track conference held annually that brings together researchers working on the algorithmic or mathematical foundations of robotics, robotics applications, and the analysis of robotics systems. A very low average acceptance rate of 25% makes it a very selective conference. Accepted papers cover a wide range of topics, such as kinematics/dynamic control, planning/algorithms, manipulation, human-robot interaction, robot perception, and estimation and learning for robotic systems. One great thing about this conference is that all proceedings are available online for free.

RSS is also a relatively new conference; the first RSS was held in 2005. However, the conference is growing quickly, attracting leading researchers in the robotics community, with an expected attendance of over 400 for the next RSS conference. The conference also includes several workshops and tutorials. I have not submitted anything to the RSS conference in the past. It would be really nice if I could get a paper published here.

The next RSS conference RSS 2012 will be held at Sydney, Australia.
Conference Dates: June 27-July 1, 2012 (Roughly)
Submission Deadline: January 17, 2012 (Roughly)



SMC -- IEEE International Conference on Systems, Man, and Cybernetics

The SMC conference is a multi-track conference held annually. It provides an international forum for researchers and practitioners to report the latest innovations, summarize the state-of-the-art, and exchange ideas and advances in all aspects of systems engineering, human-machine systems, and emerging cybernetics. Wikipedia defines the word Cybernetics as "the interdisciplinary study of the structure of regulatory systems." Cybernetics is closely related to information theory, control theory, and systems theory.


The SMC conference is sponsored by the Systems, Man, and Cybernetics Society, whose mission is: "... to serve the interests of its members and the community at large by promoting the theory, practice, and interdisciplinary aspects of systems science and engineering, human-machine systems, and cybernetics. It is accomplished through conferences, publications, and other activities that contribute to the professional needs of its members."

My interest in the conference lies in the human-machine systems track, especially under the topics of adjustable autonomy, human centered design, and human-robot interaction. This would be a good place to publish research related to UAV (Unmanned Aerial Vehicle) and search and rescue robotics.

I have never submitted anything to this conference before, and I can't find any information on its acceptance rate. But one thing is for sure: this is not one of those "come and greet" conferences, and all submitted papers go through a serious peer-review process.

The next SMC conference SMC 2011 will be held at Anchorage, Alaska, USA.
Conference Dates: October 9-12, 2011
Submission Deadline: April 1, 2011

The next SMC conference you can submit a paper to is SMC 2012, which will be held in Seoul, Korea.
Conference Dates: October 7-10, 2012
Submission Deadline: April 1, 2012 (Roughly)

AI Robot Related Conferences and Journals For My Research Part 5






Why is every day so short? Wouldn't it be nice if we didn't have to sleep?