

Leave me comments so I know people are actually reading my blogs! Thanks!

Showing posts with label AI and Robots.

Friday, February 06, 2009

AI and Robots: BYU using computer vision to catch parking violators

In the past, faculty, staff, and students at BYU (Brigham Young University) had to obtain special stickers every semester and place them on their windshields if they wanted to park in designated lots on campus. Starting in Fall 2009, thanks to new computer vision technology adopted by the BYU police, this step is no longer necessary.

There are four types of parking lots at BYU: Faculty and Staff Parking, Graduate Student Parking, Undergraduate Student Parking, and Visitor Parking. Because faculty lots are everywhere on campus, while student lots are relatively far from its center (graduate lots are slightly closer), many students are tempted to park in faculty lots just briefly, for a class period of about an hour. Many used to get away with it because of the limited number of parking officers, but that is probably coming to an end now that the campus police have a better weapon against parking violators.

An automatic license plate recognition system, developed in Israel (I suspect by this company), which made its way into the US through Canada (don't ask me why), has become a very powerful tool for the BYU police to catch parking violators. Cameras installed on top of police cars (as shown in the picture on the left) automatically take pictures of cars in the parking lots. License plates are recognized and matched against a database to quickly determine whether a car is allowed to park in that lot. An alarm sounds when a violator is identified, and with the push of a button, a parking ticket is automatically generated and printed. Parking officers can now quickly drive around campus multiple times each day and get their job done, all from the comfort of their seats.

The picture on the right shows a closer view of the type of camera in use. The same kind of camera is also used at gated areas to automatically raise the gate for eligible cars. Thanks to the high-speed camera and fast algorithms for recognizing numbers and letters, the software can read 60 plates a second and can recognize a license plate on a car going 120 mph. The system also has GPS built in, so images of cars are geo-tagged with GPS locations in case people forget where they parked.

Inside the police car, a very durable tablet PC is mounted on the dash panel so the parking officer can interact with the software using a stylus. A wireless keyboard can also be used to enter license numbers into the system.

Obvious benefits of the system include more efficient patrolling of the many parking lots, the comfort of staying in the car in extreme weather (hot or cold), and automatic alerts for stolen vehicles. However, the technology also has its drawbacks. For example, in heavy snow (which is not so rare in Utah), the license plate might be covered and not visible. Also, since parking officers can now do most of their job without getting out of the car, special spots like the 15-minute ones get less attention and could be abused more frequently. In the past, people who owned multiple vehicles had the option of hanging a badge in one of the cars, which also meant only one car could be parked on campus because there was only one badge. With the new system, since there is no sticker and no badge, all the cars can be parked on campus at the same time. Lastly, privacy is also a concern, because the campus police can now easily track when and where cars are parked each day.

So how does the recognition work? There are two main challenges: 1. Identify the license plate in the picture. 2. Recognize the license number. I don't know the exact algorithms used in the system, but based on techniques learned from my computer vision class, I can certainly come up with some intelligent guesses. Identifying the license plate in a picture probably relies on edge detection combined with detecting high-contrast areas that also have a rectangular or rhombus shape (a coarse-to-fine search is also likely). Recognizing the letters and numbers is relatively easy with machine learning classification algorithms such as decision trees or nearest neighbor.
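
Just to make that guess concrete, here is a minimal sketch of such a two-stage pipeline in Python with OpenCV. To be clear, this is my own illustration, not the commercial system's algorithm; the thresholds, the 20x20 template size, and the function names are all made up for the example.

```python
import cv2
import numpy as np

def find_plate_candidates(bgr_image):
    """Stage 1: return grayscale sub-images that look like license plates."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # high-contrast edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # US plates are roughly 2:1; allow slack for perspective skew.
        if w * h > 2000 and 1.5 < w / float(h) < 6.0:
            candidates.append(gray[y:y + h, x:x + w])
    return candidates

def read_plate(plate_gray, templates):
    """Stage 2: classify each character cell by 1-nearest-neighbor against
    `templates`, a dict mapping each character to a 20x20 binary image."""
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted(cv2.boundingRect(c) for c in contours)  # left to right
    plate_text = []
    for x, y, w, h in boxes:
        if h < plate_gray.shape[0] // 3:
            continue  # skip screws, stickers, and noise
        cell = cv2.resize(binary[y:y + h, x:x + w], (20, 20))
        # Nearest neighbor: the template with the fewest mismatched pixels.
        best = min(templates, key=lambda ch: np.sum(templates[ch] != cell))
        plate_text.append(best)
    return "".join(plate_text)
```

A real system would add perspective correction, plate-format priors, and far more robust character segmentation, but the skeleton is the same: find the rectangle, then classify the characters.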

It is worth mentioning that such license plate recognition systems are already widely used by police forces. The video below shows an example. If you live in California, you have probably heard stories of people getting a traffic ticket in the mail together with a picture of their license plate. A friend of mine told me that he once actually received a ticket in the mail together with a link. Following the link, he was able to view a video of himself making a right turn without coming to a complete stop. How amazing!




A BYU parking officer said the following in an interview:
"With the money we saved in parking sticker costs, we were able to buy the car."

What I probably would add to that is: "With the extra parking tickets we were able to write, I am expecting a much bigger bonus!" Just kidding!

Picture of the Day:

Google Street View uses facial recognition software to detect faces in photos and then blur them for privacy protection. The software dutifully blurred hunger striker Bobby Sands's face in a street portrait in Belfast. (Click the picture to see more!)

Thursday, January 29, 2009

AI and Robots: Rise of the Machines -- Robotics Professors Discussing AI

Yes, they are coming, the robots and the intelligent machines, into every aspect of our lives. Is this good or bad? Are we coming to understand humanity better in the process? Or are we really digging our own graves?

Why are people so fascinated by robots? (I know my answer: I want to build a robot that does all my work! :) Why do humans have such a dystopian view of the future where robots are concerned? These are some of the questions asked in an interview with Noel Sharkey, a Robotics and Artificial Intelligence professor at the University of Sheffield, UK. When asked when the first mass-produced robots would have a serious impact on society, Dr. Sharkey expressed his concerns about the advances of military robots. (Remember the Predator and Reaper robots we talked about recently?)

Every roboticist and AI researcher in the US knows that the majority of research in this field is driven by military funding and initiatives. The biggest player of them all is DARPA, the Defense Advanced Research Projects Agency. Many of you might not be aware that the Internet was actually created in a research program of ARPA (the earlier name for DARPA). As a matter of fact, part of my research is funded by the Army Research Lab. After all, there's not much difference between Search and Rescue and Seek and Destroy.

I, and I believe a great number of other researchers, strongly believe in Asimov's Three Laws of Robotics, especially the first half of the first one:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
On the other hand, I must also admit that military initiatives have driven the advancement of technology, which benefits the entire human race: the Internet, satellites, and cell networks, just to name a few. Therefore, the stance I take on this issue is that robotics and AI technologies developed for military purposes can also be used for ordinary people, and I shall work very hard to help make that a reality.









So how do we make sure we don't create the Terminator scenario? Some people believe we should upgrade ourselves and turn ourselves into cyborgs (half human and half machine), so that we, instead of robots, still dominate the world. A robotics researcher friend of mine at NASA Ames (no naming names) holds this view, and a Robotics and Artificial Intelligence professor at the University of Reading, UK, is another strong believer.










My guess is that robots will become more capable and intelligent, and humans will also become more capable with wearable or implanted devices. There are already robotic suits enabling wearers to carry weights far exceeding human capabilities, and there are robotic hands that connect directly to nerves in people's arms and are controlled by the human brain. I am SERIOUSLY not joking about these things, and you'll be reading more about them in my future blog posts (specifically under the Robot of the Day label).

Whether to rely on robots or to become a cyborg is your own choice, but the time when you have to make that choice might not be very far in the future. At least one thing is clear: we live in a very exciting era, and we should enjoy it!!

Picture of the Day:

I actually have not seen this movie. Is it good?

Wednesday, January 28, 2009

AI and Robots: Insurgents "Hack" U.S. Drones

The Predator series of unmanned drones has been a great weapon for the US government in fighting terrorists and insurgents (see my previous posts). The US Department of Defense actually plans to replace one-third of its military planes with unmanned drones. Any sensible person would assume that these unmanned drones are very sophisticated, advanced technologies, highly classified because of their military and intelligence associations. However, on December 17, 2009, an article in the Wall Street Journal revealed that:
Militants in Iraq have used $26 off-the-shelf software to intercept live video feeds from U.S. Predator drones, potentially providing them with information they need to evade or monitor U.S. military operations.
WHAT???!!! When I first heard this on NPR, I was on my way home, and I almost swerved off the road at the shocking news! This is unbelievable! I know the video downlink from our research UAVs is not encrypted, which means anyone with an antenna, a comm box, and video-capture software could download video feeds from our planes. But come on! We are talking about $12 million drones used by the U.S. Air Force and the CIA to fight real wars! Haven't they ever heard of the word "Encryption"? Even my neighbors' wireless networks are encrypted and cannot be accessed without a password. This is simply beyond my puny understanding! The article used the word "hack" in the title. Did the insurgents really have to hack? The door was wide open.
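
Just to show how low the bar is, here is a toy sketch of what symmetric encryption of a feed looks like with an off-the-shelf Python library (the cryptography package's Fernet API). This is obviously not how a real avionics datalink would be engineered; key distribution and hardened hardware are the hard parts. It only illustrates that the basic tool has been sitting on the shelf all along.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret between drone and ground station
drone_side = Fernet(key)
ground_side = Fernet(key)

def send_frame(frame_bytes):
    """What the drone would transmit instead of the raw video frame."""
    return drone_side.encrypt(frame_bytes)

def receive_frame(ciphertext):
    """Only a holder of the key can recover the frame."""
    return ground_side.decrypt(ciphertext)

frame = b"raw video frame bytes"
packet = send_frame(frame)
assert receive_frame(packet) == frame
# Anyone with SkyGrabber but without the key sees only ciphertext.
```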

Imagine a bored terrorist pulling out his laptop to kill some free time by watching real-time war clips. Then he sees his terrorist friend's bunker in the live video feed, so he calls his friend and says, "Start your SkyGrabber program, man, I think that's your bunker!"
U.S. military personnel in Iraq discovered the problem late last year when they apprehended a Shiite militant whose laptop contained files of intercepted drone video feeds. In July, the U.S. military found pirated drone video feeds on other militant laptops, leading some officials to conclude that militant groups trained and funded by Iran were regularly intercepting feeds.





According to Dan Verton, a cyberterrorism expert, "we thought that this particular enemy was either incapable or not interested in learning how to do this...we've always been wrong on both accounts!" This is simply amazing! Didn't we know about the Internet, where you can find tutorials for anything you want? Amazing!

"It is part of their kit now."


That's got to be the best line of the story! Especially at the thought that maybe they didn't even have to buy multiple copies and simply used pirated copies.

What's the moral of the story? Human stupidity is far more powerful than machine intelligence! As a matter of fact, I'll make that my Tao of the Day!





Human stupidity is far more powerful than machine intelligence!

Friday, January 16, 2009

AI and Robots: StarCraft AI Competition to be held at AIIDE 2010

The Sixth Artificial Intelligence for Interactive Digital Entertainment Conference (AIIDE 2010), one of the conferences organized by the Association for the Advancement of Artificial Intelligence (AAAI), will be held in October 2010 at Stanford University (as always). The organizers have recently announced that they will be hosting a StarCraft AI Competition at the conference. AI researchers all over the world will have the chance to let their AI systems compete on a Real Time Strategy (RTS) platform, and the final matches will be held live at the conference.

The idea of having AI agents compete with each other in gaming environments is nothing new. In fact, in one of the AI classes I took at BYU, we had to program agents to compete against other teams in BZFlag, a Capture the Flag game played with tanks. The winning team got an automatic A for the class. That was certainly a lot of fun. Even though we didn't win the end-of-semester competition (because of a bug that occasionally confused our agents between home base and enemy base, doh!), we, as human players, had a hard time beating the agents we had created ourselves.

In 2007, I went to the AAAI conference held in Vancouver, BC. At that conference, there were two live AI competitions. One was the General Game Playing Competition, where AI agents compete in games they have never played before (all they are given is the game logic at competition time). The winning agent then played a game of Pacman against a real human player and was able to force a tie! The other was the Computer Poker Competition, in which the winning agents challenged two real-world Vegas professional poker players with real money on the table ($50,000). Although the professional poker players narrowly defeated the poker-playing software, the two players felt as if they were playing against real humans.

What makes this StarCraft AI Competition unique is:
  • StarCraft is a very popular game with a commercial rendering engine and beautiful graphics.
  • It is a Real Time Strategy (RTS) game in which the player controls many characters at the same time and has to manage game play strategies at both the macro and micro level.
The following video shows the kind of game play one would expect to see in StarCraft. Make sure you watch the HQ version in full screen mode to really appreciate the beautiful real-time graphic rendering.


Follow this link to get more info about how to use the Broodwar APIs to write bots that work with the StarCraft game engine. If I weren't buried in papers Piled Higher and Deeper, I'd probably be writing some agents just for fun!

There are, of course, other commercial game engines used for AI and robotics research. For example, the engine of the very popular First-Person Shooter Unreal Tournament has been turned into USARSim (Unified System for Automation and Robot Simulation), a high-fidelity simulation of robots and environments.


Now my question is: when will EA Sports ever release APIs for their FIFA 2010 video game, so I can write software agents that play the game of soccer like real professionals (at least graphically)?



Picture of the Day:


 
BYU Computer Science Department Building
(See that big Y on the mountain?)

Tuesday, January 13, 2009

AI and Robots: High School Students Register With Their Faces

In a previous post we discussed challenges to facial recognition apps and what people had to do (or chose to do) to get by (or bypass them). Does that mean the technology is not ready for the real world? Today we'll see a case where it is used in a real-world environment and is actually working quite well.

At the City of Ely Community College in the UK, sixth-form students now check in and out of the school register using their faces. The facial recognition technology is provided by Aurora, and the college is one of the first schools in the UK to trial the new technology with its students.

So how does the technology work? The scanning station is equipped with infra-red lights and a regular video camera. Each infra-red sensor actually has two parts: an emitter and a receiver. The emitter shoots out a series of infra-red signals, and the receiver detects the infra-red light reflected back by objects in front of the sensor (a simple example is the auto-flushing toilet in public restrooms). By analyzing the strength and pattern of the received signals, the sensor can tell how far the object is from it. This allows the scanner to create a range (depth) image of the object in front of it, so the resulting image is a 3D surface, unlike a regular 2D image from a camera.

Combining this 3D surface with the 2D image taken by the video camera, features are extracted from the entire data set, and each set of features is tagged with a student ID (we know which face it is because each student has to be scanned once at the very beginning so the data can be stored in the database). Each scan afterwards is then a simple machine learning classification problem, and I suspect they probably just used nearest neighbor to match features to an individual student. You can click the image below to see a video of this from the original news article.
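
Here is what I imagine that matching step looks like; Aurora's actual algorithm is not public, so treat this as a guess. It assumes enrollment has already reduced each student's combined 2D+3D scan to a numeric feature vector.

```python
# A guessed sketch of 1-nearest-neighbor identification with a reject
# threshold -- not Aurora's real algorithm. Feature values are made up.
import numpy as np

def enroll(database, student_id, feature_vector):
    """Store the feature vector captured at the one-time enrollment scan."""
    database[student_id] = np.asarray(feature_vector, dtype=float)

def identify(database, scan_features, reject_distance=10.0):
    """Return the closest enrolled student, or None if nobody is close
    enough (e.g., a stranger standing at the scanner)."""
    scan = np.asarray(scan_features, dtype=float)
    best_id, best_dist = None, float("inf")
    for student_id, stored in database.items():
        dist = np.linalg.norm(scan - stored)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = student_id, dist
    return best_id if best_dist < reject_distance else None

db = {}
enroll(db, "student_042", [1.2, 0.8, 3.1, 0.5])   # made-up feature values
print(identify(db, [1.25, 0.82, 3.0, 0.55]))      # -> student_042
```

With only a few hundred students enrolled, even this brute-force scan over the whole database is effectively instant.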

Click image to see video.
So how do people like this high-tech face recognition system? Principal Richard Barker said:
"With this new registration technology, we are hoping to free up our teachers' time and allow them to spend it on what they are meant to be doing, which is teaching."

As for the students, they love the idea of taking responsibility for their own registration and using Mission Impossible-style systems.


So why did this specific application turn out to be a success? That's the question we really should be asking. I think we have to attribute the success to the following factors:
  • The system combines a 3D depth image with a 2D image, which allows the creation of many features (and some of them got the job done).
  • The college has a relatively small number of sixth-form students. Classification becomes easier when you don't have to recognize a face out of millions of faces (as in the airport security check case).
  • Each student is also required to enter a PIN, which further improves accuracy. I guess the facial recognition technology is really there to prevent students from signing other people in and out.
  • Most importantly, the consequence of an error is very low. What if a face is not recognized correctly? The worst that could happen is an erroneous record in the register. It's not like the student would be marked as a terrorist at an airport, which could have severe consequences.
I certainly hope to see more and more successful facial recognition applications out there, so people can focus on what they enjoy doing instead of what they have to do.

Picture of the Day:

I think this would make a perfect picture for today.
Here I present: Lanny in 3D





Monday, January 12, 2009

AI and Robots: No Smile Allowed, When Technology Is Not Good Enough.

Since I've been struggling with my hand recognition application, which is a far easier problem than face recognition, I thought I'd discuss facial recognition applications some more.

In a previous post, I talked about how the facial recognition currently built into laptops can easily be hacked. Today we'll talk about another real application of facial recognition, and specifically, what people do when the technology fails.

About 20 states in the US use facial recognition technology with driver's licenses. To fight identity fraud, one standard procedure at DMVs is for a DMV employee to look at the old photo of a person to see if it matches the person seeking a new license. Using facial recognition technology, this step can be automated to improve efficiency, and the technology also, supposedly, allows the detection of facial features that are not easy for humans to recognize, thus improving the accuracy of the check.

The Indiana Bureau of Motor Vehicles recently rolled out a new set of rules governing how people must be photographed for their driver's license photos. Unfortunately, Indiana drivers are no longer allowed to smile. Smiling is now taboo, alongside glasses and hats.

What's going on here? It turns out the new restrictions are in place because, according to BMV officials, smiling can distort the facial features measured by the facial recognition software.

It is very interesting to see this kind of restriction placed on users when the technology should have done the job. Here's something that would surely improve the accuracy of the facial recognition even more: how about requiring all drivers (men and women) to get a crew cut and be clean shaven?

I simply can't resist showing the picture below, which is part of the grooming standard in BYU's Honor Code, which I openly oppose.


Facial recognition technology was also tested at airports in hopes of detecting terrorists, but it failed miserably, as expected.

"According to a story by the Boston Globe, the security firm which conducted the tests was unable to calibrate the equipment without running into one of two rather serious problems. When it's set to a sensitive level, it 'catches' world + dog. When it's set to a looser level, pretty much any idiot can escape detection by tilting his head or wearing eyeglasses."


The most popular facial recognition algorithm used today is the SVM (Support Vector Machine), because of its good performance on real-world data. The video below demonstrates how well the algorithm works (here combined with Gabor wavelets).
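
For the curious, here is roughly how such a recognizer is typically wired together, sketched with OpenCV and scikit-learn: Gabor filter responses as features, a support vector classifier on top. The filter parameters and feature sizes are arbitrary choices of mine, and the system in the video surely differs in its details.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(face_gray):
    """Stack responses of Gabor filters at four orientations."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        # ksize, sigma, theta, lambda, gamma -- illustrative values only
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        filtered = cv2.filter2D(face_gray, cv2.CV_32F, kernel)
        responses.append(cv2.resize(filtered, (16, 16)).flatten())
    return np.concatenate(responses)

def train_recognizer(faces, labels):
    """faces: aligned grayscale face images; labels: person IDs."""
    X = np.array([gabor_features(f) for f in faces])
    clf = SVC(kernel="rbf", C=10.0)  # RBF SVM handles nonlinear boundaries
    clf.fit(X, labels)
    return clf

def predict_person(clf, face_gray):
    return clf.predict([gabor_features(face_gray)])[0]
```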




Anyway, I think there is still a long way to go before facial recognition technology is useful in serious applications. Frankly, I am no good at facial recognition myself: a lot of the time, I rely on hairstyle and glasses to help me remember people's faces. Still, I don't think it is a good idea to impose lots of restrictions on the user just because the technology is not good enough. That's my 2 cents.

Newton Moment: when you do things that are considered silly by normal people simply because you are too focused on thinking about your research.

Exceeding your wife's tolerance threshold for the number of Newton Moments per day can have serious consequences.



Video of the Day:
Try detecting this face!



Friday, January 09, 2009

AI and Robots: Researchers Hack Facial Recognition Authentication

Yesterday we talked about fingerprints. These days, another form of biometric identifier has been gaining more attention: facial recognition. In fact, several laptop manufacturers have included facial recognition as an authentication feature to attract more consumers. The sales pitch was that facial recognition is more convenient for the user than entering a username/password pair. The video below is a fun commercial by Lenovo promoting such technology.




Similarly, Toshiba has also included such technology in its latest product lines.




However, several Vietnamese researchers demonstrated at Black Hat DC 2009 how these facial recognition authentication systems can easily be cracked using photos of the user, or multiple phony facial images, in a kind of "brute-force" attack. The key lies in how the facial recognition algorithms work: they treat the user's face only as a digital image, so by manipulating the lighting conditions and view angles of photos, such authentication systems can easily be fooled, even when the security level is set to high. The researchers also wrote a paper describing their work. Too bad they didn't make a video of the presentation at the conference. There is, however, an interview with Duc Nguyen, the main researcher.




Here's also a link to an article with more details.
These Windows XP and Vista laptops come with built-in webcams that work with the facial-recognition technology. This form of authentication is considered more convenient than fingerprint scans and more secure than traditional passwords. The software scans the user's face and stores the images and facial characteristics. Then the user can log in by scanning his or her face, which is then matched against the image data.


Don't get me wrong here. Facial recognition is a great technology. However, we have to be very careful about how we apply new technologies to real-world problems. Using facial recognition to customize/personalize things (such as seat positions in cars) is great. But using it for security authentication might not be such a good idea, just like RFID tags (see my other post for a discussion of that).





Don't let stupid reviewers ruin your joyful life. There's always another conference out there, waiting for you!

Saturday, January 03, 2009

AI and Robots: The Dark Side of Human-Robot Interaction

After three days of struggle, I finally restored my email server to a working state and got all mailboxes working (except my own, which didn't matter much). I have to admit it was quite a bit of frustration to go through. Now, for those of you admins out there: has a server ever blown up on you? And did you ever feel like you wanted to blow up your server?

Well, the admins at ShopperMagic not only felt so, they actually did so.
We decided to give a web server early retirement in a manner that allowed us to "feel good" because it had kept support staff up for many nights trying to sort it out. The server gets a reprogramming it will never forget by stuffing it with fireworks, lighting the blue touch paper, and retiring to a safe distance...


And if you'd rather avoid fire and smoke as a potential fire hazard, you could also just let nature (gravity) take its course.


When machines don't deliver the performance they are expected to provide, frustration can really build up in the user. And when the level of frustration exceeds a threshold, it sometimes turns into violent behavior. Many of you probably remember this famous video below from several years back:


And of course, who could ever forget this classic scene from the movie "Office Space":


Suppose the machine you are frustrated with is not a computer or printer, but a robot. With the current state of robotic technology and the complexity of the tasks involved, end users are probably even more likely to get frustrated with "intelligent" machines/robots such as this one:


Here's another example of someone getting really frustrated with a robot:


By the way, the robot can be just as frustrated.


So in cases like these, as the frustration builds up, would you still beat up the robot?


Or kick it?


Or blow it up?


With the first few videos, most people probably would find them hilarious despite the violence involved. However, with the last two videos, don't you feel something is not quite right here? Something...well...maybe something immoral that makes you uncomfortable? If so, why is that?

Maybe it's the human form that bothered you? Maybe it's the animal-like behavior? Or the level of intelligence displayed? This reminds me of something I read a long, long time ago. I don't remember who said it, or when and where. It was a conversation about what kind of animals one would eat, and the answer was, "If it talks back to me, then I won't eat it." Here a simple metric of language and communication capability is used to classify whether an animal is intelligent enough, and if it is intelligent enough to talk back, then it would feel immoral to treat it as food. Of course, there are many metrics we can use to judge intelligence. So once we classify a robot as intelligent, would it feel immoral to hit it, or to treat it like a mere lifeless machine?


This sounds like very dangerous territory in robotics research, but it is a problem we'll eventually have to face (maybe sooner than we think). So is there something we can do as designers (not lawyers or legislators) to address such issues? Should we make robots appear/sound very machine-like, or appear/act dumb, to alleviate our moral guilt? Or maybe make them more human-like to amplify it instead? I don't have the answer. Do you?




If you have a hard time falling asleep, try reading a Bayesian statistics textbook.



Monday, June 30, 2008

AI and Robots: What is AI (2)

This is a make-up post! Continuing from the previous post.

Traditionally, AI researchers believed that perception (sensor data) about the world feeds into cognition (the brain of the agent), and cognition then issues commands for action (actuators) that affect the world. This seemed to be the right model to mimic human behavior. However, in the mid-1980s, Rodney Brooks, then a junior faculty member at MIT, proposed a different model in which perception interacts directly with action, while cognition merely observes both. He pursued this idea further and started a new branch of AI called Behavior-Based Robotics.

Rodney believes the only path to creating intelligent creatures is to build actual physical creatures that respond to the complexity of the environments they must navigate, and that all the power of intelligence arises from the coupling of perception and actuation systems (even just simple "muscle reflexes"). The robot shown on the left is an example: it was able to balance, walk, and prowl with simple reactive controls (in other words, it doesn't have a brain). Intelligent, complex behaviors seemed to emerge out of simple reactions. Later biological research confirmed that the balancing skills of cats come directly from the spinal cord, not the brain. As Rodney describes it, intelligence is "in the eye of the beholder".
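
To make the contrast with the traditional perceive-think-act model concrete, here is a toy behavior-based controller in Python. This conveys only the flavor of the idea, not Brooks's actual subsumption implementation; the sensor names and wheel commands are invented for the example.

```python
# Higher-priority behaviors simply subsume (override) lower ones.
# No world model, no planner -- perception couples directly to action.

def avoid_obstacle(sensors):
    """Highest priority: reflexively turn away from nearby obstacles."""
    if sensors["front_distance"] < 0.3:  # meters
        return {"left_wheel": -0.5, "right_wheel": 0.5}  # spin in place
    return None  # no opinion; let a lower layer act

def wander(sensors):
    """Lowest priority: just keep rolling forward."""
    return {"left_wheel": 1.0, "right_wheel": 1.0}

BEHAVIORS = [avoid_obstacle, wander]  # ordered from highest priority down

def control_step(sensors):
    """One sense-act cycle: the first behavior with an opinion wins."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command  # this layer subsumes everything below it

print(control_step({"front_distance": 1.2}))  # wanders forward
print(control_step({"front_distance": 0.1}))  # reflex kicks in
```

Notice there is no "brain" anywhere in the loop: the seemingly intelligent obstacle-avoiding behavior is just a reflex wired from perception to action.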

For example, we all know that a sunflower always turns its "head" toward the sun (heliotropism). This appears to be a somewhat "intelligent" behavior. However, it is simply the result of a chemical reaction: as potassium ions increase, the osmotic potential in the pulvinus cells becomes more negative, and the cells absorb more water and elongate, turning the face of the flower toward the sun.

If you are still not convinced, here's another example (try at your own risk). Gently touch a burning stove top with your finger and then observe. If you ponder it carefully, you might realize that your hand moved away from the stove before you actually felt the burning sensation. The seemingly "intelligent" behavior of moving your hand away happened before your brain could even sense the pain, so how could the brain have issued the command to retract your hand? The answer is that the command didn't come from your brain at all.

That's why I tend to lean toward the last definition of artificial intelligence: systems that act rationally. It doesn't matter much how the intelligence was achieved; what matters most is that the agent made me believe it displayed intelligent behavior.

So where did the name "artificial intelligence" come from? In 1956 (probably the most important year in the history of AI), John McCarthy (shown in the picture on the right), then a researcher at Dartmouth College, convinced Marvin Minsky, Claude Shannon, and Nathaniel Rochester to help him organize a two-month workshop at Dartmouth in the summer of 1956. There were 10 attendees in all, including Trenchard More, Arthur Samuel, Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon. The workshop didn't lead to any new breakthroughs, but it did introduce all the major figures to each other, and for the next 20 years the field would be dominated by these people and their students and colleagues. The most important thing that came out of the workshop was an agreement to adopt McCarthy's new name for the field: "artificial intelligence".

The term "Artificial Intelligence" is an oxymoron. How could something "artificial" be "intelligent"? That's why for most people, there's always a mythical component to it. An artifact appears "intelligent" because it seems to do intelligent things with magical power. However, once the secret of how the artifact was able to perform is revealed, all of a sudden, the sense of "intelligence" is diminished. It's almost like magician tricks. Once you know he hid the rabbit in the hat beforehand, it's not so impressive anymore.

For example, if you presented a music box to a person from 1000 years ago, he would believe the box is magical (in a sense, "intelligent"), but if you let him take it apart and investigate further, he would surely change his mind. The same applies to modern-day artifacts. A car that parallel parks itself might make you go "wow" and appreciate the power of "artificial intelligence". However, once I explain to you that the computer only issued simple driving commands following simple if-then rules based on sensor data, all of a sudden the car doesn't seem so "intelligent" to you anymore.

This is a very interesting phenomenon, and it is also a great challenge/motivation for AI researchers. It seems that all solved AI problems are no longer AI problems, because we already know the secrets/algorithms behind them, and they are simply procedures to follow and conditions to check. Only the unsolved AI problems still contain the mysterious "intelligence" we have yet to identify and create.

This explains why AI is an evolving concept. A purely mechanical device would have been considered "AI" in ancient days, but definitely not in the present. Maybe one day all electronic-computer-related products will "cease" to be "AI", and only biotechnology with cells acting as computers will qualify.

It's also worth noting that "AI" is almost everywhere in our everyday lives, in almost every field you can think of. When you sit in your office, your PDA "intelligently" reminds you of your appointments, your computer "intelligently" auto-fills text for you as you type, and the word processor "intelligently" points out the typos and syntax errors you made. When you pick up the phone to call for some kind of service, the computer telephone agent "intelligently" routes you to different departments based on your needs (or even handles the call for you). When you drive on the street, the street lights "intelligently" change based on the traffic flow, and the security cameras "intelligently" track unusual behaviors (and "intelligently" take a picture of your license plate if you run a red light). When you sit comfortably in front of your home computer, whether browsing the Internet or shopping online, the web site "intelligently" recommends stories or products tailored specifically to you. Even your air conditioning system "intelligently" adjusts the temperature for you while your sprinkler system "intelligently" kicks on to perform its routine task. I guess one great part about this is the abundance of job prospects! ;)

Despite so much confusion and ambiguity, AI remains "the field I would most like to be in" for scientists in many disciplines. As Russell and Norvig describe it: "A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time Einsteins."

Now have I convinced you to become an AI researcher just like me? :)

Additional Information:
  • John McCarthy - Cognitive Scientist. 1971 Turing Award winner. Inventor of the Lisp programming language.

  • Marvin Minsky - Cognitive Scientist. Co-founder of MIT's AI lab.

  • Claude Shannon - Electronic Engineer and Mathematician. "Father of Information Theory."

  • Nathaniel Rochester - Computer Scientist. Designed the IBM 701 and wrote its first assembler.

  • Arthur Samuel - Pioneer in computer gaming. The Samuel Checkers-playing Program appears to be the world's first self-learning program.

  • Ray Solomonoff - Invented the concept of algorithmic probability around 1960.

  • Oliver Selfridge - "Father of Machine Perception."

  • Allen Newell - Researcher in computer science and cognitive psychology. 1975 Turing Award winner. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General Problem Solver (1957) (with Herbert Simon).

  • Herbert Simon - Political Scientist. One of the most influential social scientists of the 20th century. 1975 Turing Award winner. 1978 Nobel Prize in Economics winner.




Intelligence is "in the eye of the beholder" -- Rodney Brooks

Friday, June 27, 2008

AI and Robots: What is AI (1)

This is a make-up post!

The image you see here is from the famous 1997 chess match between Garry Kasparov, a world champion, and Deep Blue, a supercomputer built by IBM. The computer was able to "beat" the world champion. Despite arguments that the match was not "fair" to Kasparov, there's at least one thing we can all agree on: Deep Blue certainly showed signs of Artificial Intelligence. But what really is Artificial Intelligence? Is it simply intelligence created artificially?

In order to understand what Artificial Intelligence is, we might first have to define intelligence. But what is intelligence? Do the bacteria on the keyboard I am typing on have intelligence? Are the rose bushes in my garden (which add to my yard work load) intelligent? How about the butterfly sitting on the rose petal? How about the planet called Earth we all live on? Different people might have different answers. I don't want to get too philosophical here, so I'll simply define intelligence as the ability to reason and learn. I know this is still rather vague; under my definition, all the things I mentioned above could still be categorized as intelligent beings. If you are not satisfied, you are also welcome to read (and modify) the Wikipedia page on intelligence.

So how should we define Artificial Intelligence then? Again, many people would give you very different definitions. Russell and Norvig summarized the different definitions into four categories in their book "Artificial Intelligence: A Modern Approach":





Systems that think like humans | Systems that think rationally
Systems that act like humans   | Systems that act rationally

The top two focus on the ability to reason, while the bottom two emphasize behavior. The left two measure AI against human performance, while the right two measure rationality. All these categories have their merits, but I personally lean toward the last one: systems that act rationally.

When we try to create Artificial Intelligence, it is tempting to model it after human beings. Why? First of all, human beings are intelligent beings (although, arguably, there are stupid people too). Furthermore, we certainly understand ourselves more easily than, say, the white mice used in scientific experiments. We try to understand the reasoning and logic behind our thinking and behaviors, then apply the same ideas to an AI agent and have the agent mimic us. There are certainly still many things we don't know about ourselves, and AI research is actually a great way to try to understand ourselves better (individually or socially).

It is necessary here to mention the famous "Turing Test". Alan Turing (shown in the picture on the right) is often considered the father of modern computer science, and the "Turing Award" named after him is considered the Nobel Prize of computing. In a 1950 paper he proposed an operational definition of machine intelligence: the computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not (Russell & Norvig). You can think of this in terms of a chat window: if you think the party you are chatting with is human, but the other entity is in fact a computer program, then this program has passed the "Turing Test". (You can check out this chatbot if you know a little bit of Chinese.) Interestingly, some "flirting chatbots" have reportedly fooled lonely Russians into giving out their financial information. Can we say these AI agents passed the "Turing Test"?

There are other intelligent species on earth too, and many times we learn from them because they do better in certain areas. Many AI researchers are also inspired by biological beings and develop AI algorithms accordingly to solve human-related problems. In my opinion, AI is really the study and expansion of human intelligence.

However, humans also do stupid things. We pollute the world we live in, we destroy forests, and people get killed in wars and genocides. Sometimes we are also irrational; we let emotion take over and affect our judgment. Therefore, the rationality approach to the definition of AI has good reasons behind it. So what is thinking and behaving rationally? Let me give you two examples, and then you can decide for yourself.

The first example comes from Russell and Norvig's book. You see someone you know across the street. You look to the left and to the right, make sure there is no traffic nearby, and proceed to cross the street. Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner and flattens you before you make it to the other side. Have you acted rationally?

The second example comes from the novel "I, Robot" by Isaac Asimov (made into a film in 2004 starring Will Smith). In this story, the robots come to the conclusion that, in order to protect humans from self-destruction, it is necessary for robots to take over. Have these robots acted rationally?

[To be continued...]


[Quote from the I, Robot movie (2004)]

Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece?
Sonny: Can *you*?