Traditionally, AI researchers believed that perception (sensor data about the world) feeds into cognition (the brain of the agent), and cognition then issues commands for action (actuators) that affect the world. This seemed to be the right model, since it mimics human behavior. However, in the mid-1980s, Rodney Brooks, then a junior faculty member at Stanford, proposed a different model in which perception interacts directly with action, and cognition merely observes the two. He pursued this idea further and started a new branch of AI called Behavior-based Robotics.
Brooks believed that the only path to creating intelligent creatures was to build actual physical creatures that respond to the complexity of the environment they must navigate, and that all the power of intelligence arises from the coupling of perception and actuation systems (even just simple "muscle reflexes"). The robot shown on the left is an example. It was able to balance, walk, and prowl with simple reactive controls (in other words, it doesn't have a brain). Intelligent, complex behaviors seemed to emerge out of simple reactions. Later biology research confirmed that the balancing skills of cats come directly from the spinal cord, not the brain. As Brooks describes it, intelligence is "in the eye of the beholder".
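The coupling of perception and action described above can be sketched as a tiny layered reactive controller. This is only an illustration of the idea, not code from any real robot: the sensor names, behaviors, and thresholds are all invented, and real subsumption-style systems are far richer.

```python
# A minimal sketch of reactive, layered control: perception maps directly
# to action, with no world model and no planner. All names here are
# illustrative assumptions, not a real robot API.

def avoid(sensors):
    # Higher-priority layer: a reflex that backs away from near obstacles.
    if sensors["obstacle_distance"] < 0.2:
        return "reverse"
    return None  # defer to lower layers

def wander(sensors):
    # Lowest-priority layer: the default behavior when nothing else fires.
    return "forward"

# Higher layers come first and subsume (override) the ones below them.
LAYERS = [avoid, wander]

def control(sensors):
    """Pick the action of the highest-priority layer that responds."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"obstacle_distance": 0.1}))  # -> reverse
print(control({"obstacle_distance": 1.0}))  # -> forward
```

Note that nothing in this loop resembles "thinking": complex-looking behavior comes entirely from simple condition-action rules stacked by priority.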
For example, we all know that a sunflower always turns its "head" toward the sun (heliotropism). This appears to be a somewhat "intelligent" behavior. However, it is simply the result of a chemical reaction: as potassium ions accumulate, the osmotic potential in the pulvinus cells becomes more negative, so the cells absorb more water and elongate, turning the face of the flower toward the sun.
If you are still not convinced, here's another example (try at your own risk). Gently touch a burning stove top with your finger and then observe. If you ponder it carefully, you might realize that your hand moved away from the stove top before you actually felt the burning sensation. The seemingly "intelligent" behavior of moving your hand away happened before your brain could even sense the pain, so how could it issue a command to retract your hand? The answer is that the command didn't come from your brain at all; it came from a reflex arc through your spinal cord.
That's why I tend to lean toward the last definition of artificial intelligence: systems that act rationally. It doesn't matter much how the intelligence is achieved; what matters most is that the agent made me believe it was displaying intelligent behavior.
So where did the name "artificial intelligence" come from? In 1956 (probably the most important year in the history of AI), John McCarthy (shown in the picture on the right), then a researcher at Dartmouth College, convinced Marvin Minsky, Claude Shannon, and Nathaniel Rochester to help him organize a two-month workshop at Dartmouth in the summer of 1956. There were 10 attendees in all, including Trenchard More, Arthur Samuel, Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon. The workshop didn't lead to any new breakthroughs, but it did introduce all the major figures to each other, and for the next 20 years the field would be dominated by these people and their students and colleagues. The most important thing that came out of the workshop was an agreement to adopt McCarthy's new name for the field: "artificial intelligence".
The term "Artificial Intelligence" is an oxymoron. How could something "artificial" be "intelligent"? That's why, for most people, there's always a mythical component to it. An artifact appears "intelligent" because it seems to do intelligent things with magical power. However, once the secret of how the artifact performs is revealed, the sense of "intelligence" suddenly diminishes. It's almost like a magician's trick: once you know the rabbit was hidden in the hat beforehand, it's not so impressive anymore.
For example, if you presented a music box to a person from 1,000 years ago, he would believe the box was magical (in a sense, "intelligent"), but if you let him take it apart and investigate, he would surely change his mind. The same applies to modern-day artifacts. A car that parallel parks itself might make you go "wow" and appreciate the power of "artificial intelligence". However, once I explain to you that the computer only issues simple driving commands following if-then rules based on sensor data, all of a sudden the car doesn't seem so "intelligent" to you anymore.
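The kind of if-then parking logic just described might look like the sketch below. To be clear, this is a made-up caricature for illustration: the sensor readings, thresholds, and command names are all hypothetical, and a real self-parking system is considerably more involved.

```python
# A caricature of rule-based parallel parking: one if-then decision per
# control step, driven purely by (hypothetical) sensor readings.

def parking_step(state):
    """Return the next driving command given the current sensor state."""
    if not state["aligned_with_gap"]:
        # Not yet next to the open spot: keep creeping forward.
        return "drive_forward"
    if state["rear_clearance_m"] > 0.5:
        # Plenty of room behind: back in while steering toward the curb.
        return "reverse_steer_right"
    if state["angle_deg"] > 5:
        # Close to the car behind but still angled: straighten out.
        return "reverse_steer_left"
    # Aligned, close, and straight: done.
    return "stop"

print(parking_step({"aligned_with_gap": True,
                    "rear_clearance_m": 0.3,
                    "angle_deg": 2}))  # -> stop
```

Each command is chosen by checking a handful of conditions in order; once you see the rules, the "magic" is gone, which is exactly the point of the paragraph above.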
This is a very interesting phenomenon, and also a great challenge and motivation for AI researchers. It seems that solved AI problems are no longer AI problems, because we already know the secrets and algorithms behind them: they are simply procedures to follow and conditions to check. Only unsolved AI problems still hold the mysterious "intelligence" we have yet to identify and create.
This explains why AI is an evolving concept. A purely mechanical device might have been considered "AI" in ancient days, but definitely not today. Maybe one day all electronic-computer-based products will "cease" to be "AI", and only biotechnology, with cells acting as computers, will qualify.
It's also worth noting that "AI" is almost everywhere in our everyday lives, in almost every field you can think of. When you sit in your office, your PDA "intelligently" reminds you of your appointments, your computer "intelligently" auto-fills text for you as you type, and the word processor "intelligently" points out the typos and syntax errors you made. When you pick up the phone to call for some kind of service, the computerized telephone agent "intelligently" routes you to different departments based on your needs (or even handles them for you). When you drive on the street, the street lights "intelligently" change based on the traffic flow, and the security cameras "intelligently" track unusual behavior (and "intelligently" take a picture of your license plate if you run the red light). When you sit comfortably in front of your home computer, whether browsing the Internet or shopping online, the website will "intelligently" recommend stories or products tailored specifically to you. Even your air conditioning system "intelligently" adjusts the temperature for you while your sprinkler system "intelligently" kicks on to perform its routine task. I guess one great part about this is the abundance of job prospects! ;)
Despite so much confusion and ambiguity, AI remains the "field I would most like to be in" for scientists in many disciplines. As Russell and Norvig put it: "A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time Einsteins."
Now have I convinced you to become an AI researcher just like me? :)
- John McCarthy - Computer Scientist. 1971 Turing Award winner. Inventor of the Lisp programming language.
- Marvin Minsky - Cognitive Scientist. Co-founder of MIT's AI lab.
- Claude Shannon - Electronic Engineer and Mathematician. "Father of Information Theory."
- Nathaniel Rochester - Computer Scientist. Designed the IBM 701 and wrote the first assembler.
- Arthur Samuel - Pioneer in computer gaming. The Samuel Checkers-playing Program appears to be the world's first self-learning program.
- Ray Solomonoff - Invented the concept of algorithmic probability around 1960.
- Oliver Selfridge - "Father of Machine Perception."
- Allen Newell - Researcher in computer science and cognitive psychology. 1975 Turing Award winner. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General Problem Solver (1957) (with Herbert Simon).
- Herbert Simon - Political Scientist. One of the most influential social scientists of the 20th century. 1975 Turing Award winner. 1978 Nobel Prize in Economics winner.
Intelligence is "in the eye of the beholder" -- Rodney Brooks