

Leave me comments so I know people are actually reading my blogs! Thanks!

Friday, April 10, 2009

How to find all the modes of a 3D probability distribution surface

A 3D probability distribution surface can represent the likelihood of certain events in a specific region: a higher point on the surface means the event is more likely to happen at that location. For example, a 3D probability distribution surface created for a Wilderness Search and Rescue (WiSAR) operation, whether generated systematically, manually, or with a hybrid approach, can show the searchers the areas where the missing person is more likely to be found. The distribution map can be used to better allocate search resources and to generate flight paths for an Unmanned Aerial Vehicle (UAV).
An example 3D probability distribution surface
Because different path-planning algorithms may be better suited for different probability distributions (I appeal to the No-Free-Lunch theorem), identifying the type of distribution beforehand can help us decide what algorithm to use for the path-planning task. In our decision process, we particularly care about how many modes the probability distribution has. So how can we automatically identify all the modes in a 3D probability distribution surface? Here I'll describe the algorithm we used.

In our case, the 3D probability distribution surface is represented by a matrix/table where each value is the height of that point. You can think of this distribution as a gray-scale image where the gray value of each pixel represents the height of the point. We use a Local Hill Climbing type algorithm with 8-connected neighbors.

1. Downsample the distribution
If the distribution map is very large, it might be a good idea to downsample it to improve the algorithm's speed. We assume the surface is noise-free; if it is noisy, we can also smooth it with a Gaussian filter first (think image processing).
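
A minimal Python sketch of this preprocessing step, assuming the surface is stored as a NumPy array (the function name and parameters here are my own, purely for illustration):

from scipy.ndimage import gaussian_filter

def preprocess_surface(surface, step=2, sigma=None):
    # Optionally Gaussian-smooth a noisy surface, then downsample it.
    if sigma is not None:
        surface = gaussian_filter(surface, sigma=sigma)  # smooth out noise, like blurring an image
    return surface[::step, ::step]  # keep every step-th row and column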

2. Check for a uniform distribution (a flat surface)
It is a good idea to check whether the probability distribution is a uniform distribution. Just check if all values in the matrix are identical. If a uniform distribution is identified, we know the distribution has 0 modes and we are done.
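
This check is a one-liner once the surface is a matrix (a minimal sketch, again assuming a NumPy array; the helper name is mine):

import numpy as np

def is_uniform(surface):
    # A flat surface has every cell equal to the first cell, so it has 0 modes.
    return bool(np.all(surface == surface.flat[0]))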

3. Local Hill Climbing with Memory
Start from a point on the surface and check its neighbors (8-connected). As soon as a neighbor with the same or better value is found, we "climb" to that point. The process is repeated until we reach a point (hilltop) where all neighbors have smaller values. As we "climb" and check neighbors, we mark all the points we visit along the way, and when we check neighbors, we only check points we have not visited before. This way we avoid finding a mode we have already found. Once we find a "mode", we start from another unvisited point on the surface and do another Local Hill Climbing. I use quotes around the word mode here because we are not yet sure the "mode" we found is a real mode.
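
The climbing step might look roughly like the sketch below (same NumPy-array assumption; the names and the tie-handling details are my own simplification, not our exact code):

# 8-connected neighbor offsets
NEIGHBORS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def hill_climb(surface, start, visited):
    # Climb from `start` to a candidate "mode", marking every cell we step on as visited.
    rows, cols = surface.shape
    r, c = start
    visited[r, c] = True
    while True:
        moved = False
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]
                    and surface[nr, nc] >= surface[r, c]):
                r, c = nr, nc            # climb to an equal-or-better unvisited neighbor
                visited[r, c] = True
                moved = True
                break
        if not moved:
            return (r, c)                # no unvisited neighbor is as high: a candidate "mode"

The outer loop of the algorithm then simply repeats this from any cell that is still unvisited until every cell has been visited.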

4. Make sure the "mode" we found is a real mode
An Even-Height Great Wall
The "mode" we found using Local Hill Climbing might not actually be a real mode. It might be right next to a mode previously found and have a lower value (because we only checked unvisited neighbors in the previous step). It might also be part of another flat-surface mode where the mode consists of multiple points with identical values (think of a hilltop that looks like a plateau or think of a ridge). Things get even more complicated with special distributions such as this one on the right. And the "mode" point we found might be connected to a previously found mode through other points with the same value (e.g, the "mode" point is the end point of the short branch in the middle of the image.

Therefore, we need to keep track of all points leading to the final "mode" point that have identical values and check all the visited neighbors of these points, making sure this flat region is not part of a previously found mode. If these points make up a real new mode, we mark them with a unique mode id (e.g., mode 3). If they are only part of a previously found mode, we mark them with that mode's id (e.g., mode 2). If one of them is right next to a previously found mode but has a lower value, we mark these points as non-mode points. This step is almost like performing a Connected-Component Labeling operation in Computer Vision.
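
One way to sketch this verification step is a flood fill over the equal-height cells around the candidate point, checking what the flat region touches (again, the names and the exact bookkeeping below are my own illustration, not the code we actually used):

from collections import deque

NEIGHBORS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def classify_candidate(surface, mode_map, peak, next_mode_id):
    # mode_map entries: 0 = unlabeled, -1 = non-mode, k > 0 = cell belongs to mode k.
    # Flood-fill the flat region containing `peak`, then label the whole region at once.
    rows, cols = surface.shape
    height = surface[peak]
    region, frontier, seen = [peak], deque([peak]), {peak}
    touches_mode, has_taller_neighbor = set(), False
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in seen:
                continue
            if surface[nr, nc] > height:        # a taller neighbor: this flat region is no hilltop
                has_taller_neighbor = True
            elif surface[nr, nc] == height:
                if mode_map[nr, nc] > 0:        # same height, already part of an earlier mode
                    touches_mode.add(mode_map[nr, nc])
                elif mode_map[nr, nc] == 0:     # unlabeled flat cell: keep flooding
                    seen.add((nr, nc))
                    region.append((nr, nc))
                    frontier.append((nr, nc))
    if has_taller_neighbor:
        label = -1                              # lower shoulder next to a higher region: not a mode
    elif touches_mode:
        label = touches_mode.pop()              # just an extension of a previously found mode
    else:
        label = next_mode_id                    # a genuinely new mode
        next_mode_id += 1
    for r, c in region:
        mode_map[r, c] = label
    return next_mode_id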

At the end of the algorithm run, we will have a count of how many modes the probability distribution has, and also a map with all the mode points marked. With the Even-Height Great Wall distribution, the map would look just like the image (white pixels marking mode points) with 1 mode. And within milliseconds, the algorithm can identify the 4 modes in the example 3D surface above.

That's it! If you ever need to do this for your projects, you now know how!








Recursive functions work great for local hill climbing until you get a stack overflow.

Thursday, April 09, 2009

AI Robot Related Conferences and Journals For My Research (Part 3)

AI Robot Related Conferences and Journals For My Research Part 2

Top Conferences
==================================================================

IROS -- IEEE/RSJ International Conference on Intelligent Robots and Systems

IROS is a multi-track conference held annually. It is a premier international conference in robotics and intelligent systems, bringing together an international community of researchers, educators, and practitioners to discuss the latest advancements in the field.

Every year, thousands of people from all over the world attend IROS, and a large number of papers are published there. For example, IROS 2011 received 2541 papers and proposals and accepted 790 papers, a 32% acceptance rate; however, the acceptance rate for IROS is normally much higher, at around 55%. Every year IROS has a different theme. The theme for IROS 2011 is Human-Centered Robotics, and the theme for IROS 2012 is Robotics for Quality of Life and Sustainable Development. However, people generally ignore the theme and submit whatever they have. I was fortunate enough to attend IROS in 2009 and published a paper on UAV path planning there.

The next IROS conference, IROS 2011, will be held in San Francisco, California, USA.
Conference Dates: September 25-30, 2011
Submission Deadline: March 28, 2011

The next IROS conference you can submit a paper to is IROS 2012. It will be held in Vilamoura, Algarve, Portugal.
Conference Dates: October 7-10, 2012
Submission Deadline: March 10, 2012



ICRA -- IEEE International Conference on Robotics and Automation

ICRA is a multi-track premier conference in the robotics community, held annually. It is in the same league as IROS and is also a major international conference with large attendance. The ICRA 2011 conference, held in Shanghai, China, welcomed more than 1,500 people from around the world. The acceptance rate for ICRA is about 45%.

ICRA also has yearly themes. The ICRA 2011 conference's theme was "Better Robots, Better Life". The ICRA 2012 conference theme will be "Robots and Automation: Innovation for Tomorrow's Needs". Again, if you are thinking about submitting something to ICRA, don't worry about the themes. Just submit what you have on whatever topic, as long as it is related to robots or automation.

I have submitted a paper to ICRA before, but very unfortunately, the paper fell into the hands of several electrical engineering reviewers because I picked the wrong keywords. They seem to hold grudges against computer science researchers. The same paper was accepted at IROS without any major revision. It is likely that I'll be submitting to ICRA again in the future, but I will be super careful about which keywords to use this time!!

The next ICRA conference, ICRA 2012, will be held in St. Paul, Minnesota, USA.
Conference Dates: May 14-18, 2012
Submission Deadline: September 16, 2011

AI Robot Related Conferences and Journals For My Research Part 4


Video of the Day:

This is why the back of my jersey has Messi's name on it!

Wednesday, April 08, 2009

Robot of the Day: MABEL, the Two-Legged Running Robot

MABEL (not an acronym) is the name of a bipedal "humanoid" robot created by researchers at the University of Michigan. It recently gained fame because it can run with a human-like gait at speeds up to 3.06 meters per second. That is 6.8 miles per hour, the world record for a bipedal robot with knees.

MABEL was originally built in collaboration with Jonathan Hurst, then a doctoral student at the Robotics Institute at Carnegie Mellon University. Researchers at U-M then spent years improving the feedback system used to train MABEL. MABEL was intentionally built to look like a human, with a heavier torso and light, flexible legs. When running, MABEL is in the air for 40 percent of each stride, much like a real runner. The robot can balance itself in real time with closed-loop control and switch gaits autonomously on command. It can even transition from a completely flat surface to uneven ground. The video below shows MABEL running.


The researchers envision that two-legged robots can travel over rough terrain and function better in places built for humans. They could be used to enable wheelchair-bound people to walk again, or serve as robot rescuers that can step over small obstacles. Biped robots certainly have an advantage over wheeled robots when it comes to bumpy surfaces or stairs; however, one important factor is that biped robots look more human-like compared to, say, a three-legged or six-legged robot. The truth is that many four-legged animals run much faster than us two-legged humans, and multi-legged insects handle uneven surfaces much better than two-legged ones. But would you rather have a two-legged humanoid robot serving you a drink, or an eight-legged spider-looking one? Actually, even the same MABEL robot walking backward looks more like a bird and seems weird (see video below).


I could envision multi-legged robots being very useful in space colonization. Most likely the surface terrain on another planet would not be flat, and multi-legged robots could easily transport goods for humans and explore the planet's surface more efficiently.

Back to the MABEL robot. It's hard to describe the feeling I had watching the robot run with large strides. There's certainly some uncanny valley effect there, but there's also something beautiful about the strides, because they look natural. Now we just have to put a soccer ball in front of it and then teach it to dribble and shoot...

To find out more about the MABEL robot, visit the researcher's project page here.


Video of the Day:

An interesting Logitech commercial showing a biped humanoid blending into the human environment. Enjoy!

Tuesday, April 07, 2009

AI Robot Related Conferences and Journals For My Research (Part 2)

AI Robot Related Conferences and Journals For My Research Part 1

Top Conferences
==================================================================

HRI -- ACM/IEEE International Conference on Human-Robot Interaction

HRI is a single-track, highly selective annual international conference that seeks to showcase the very best interdisciplinary and multidisciplinary research in human-robot interaction, with roots in social psychology, cognitive science, HCI, human factors, artificial intelligence, robotics, organizational behavior, anthropology, and many more. HRI is a relatively new and small conference because the HRI field itself is relatively new. The 1st HRI conference was actually held here in Salt Lake City, Utah, in March 2006, and my advisor, Dr. Michael A. Goodrich, was the General Chair for the conference. It is very unfortunate that I only started grad school two months after the 1st HRI conference and missed this great opportunity. *Sigh* HRI has been growing rapidly and gaining attention from many research groups and institutions. The last HRI conference (the 6th) had attendance exceeding 300 participants. HRI is also a top-tier conference with an acceptance rate between 19% and 25%. As the conference becomes more and more popular, researchers from many disciplines (e.g., human factors, cognitive science, psychology, linguistics, etc.) have begun participating in the conference.

The venue of the HRI conference rotates among North America, Europe, and Asia. I have been lucky enough to attend the conference twice, once in 2010 and once in 2011. In 2010, I attended the HRI Young Pioneer Workshop. The workshop is a great event because you not only get to make friends with a bunch of young researchers in the HRI field before the conference starts, you also get to see what other young researchers are working on. Besides, NSF is generous enough to cover a good portion of the airfare, which is a great help for poor grad students. I liked the workshop so much that I joined the organizing committee for the next year's HRI Young Pioneer Workshop and also hosted the panel discussion at the workshop. That was also the reason why I was able to attend HRI 2011. In both HRI 2010 and HRI 2011, I also guarded my advisor's late-breaking report poster sessions because he couldn't make it.

I have never submitted anything to the main HRI conference. Since this is the top conference in my research field, I'd like to publish something before I graduate.

The next HRI conference, HRI 2012 (the 7th), will be held in Boston, Massachusetts, USA.
Conference Dates: March 5-8, 2012
Submission Deadline: September 9, 2011



CHI -- ACM/SIGCHI Human Factors in Computing Systems - the CHI Conference

CHI is considered the most prestigious conference in the field of human-computer interaction. It is a multi-track conference held annually. Because of the heavy interest and involvement from industry, large tech companies such as Microsoft, Google, and Apple are frequent participants and organizers of the conference. CHI is a top-tier conference with an acceptance rate between 20% and 25%.

Human-Computer Interaction is a broad field that includes both software and hardware. The goal of HCI is to develop methods, interfaces, and interaction techniques that make computing devices more usable and receptive to the user's needs. These days computing devices include a wide variety of things such as cell phones, tablets, game consoles, and gadgets. Thanks to the advancement of sensor technologies, a whole set of rich interaction techniques has emerged to work with gestures, device orientation, and device motion.

Many HCI design principles, interface designs, and interaction techniques are relevant to Human-Robot Interaction. After all, a robot must have some kind of computer embedded (whether tiny or full-size, whether one or multiple). In many HRI tasks, the human user could very well be interacting with the robot through a regular computer or a touch-screen device (think telepresence, for example). I have never attended the CHI conference before, but I have heard a lot about it from Dr. Dan Olsen at BYU, because he was always some kind of chair on the CHI organizing committee. In fact, he'll be the paper chair for the next CHI conference.

The next CHI conference, CHI 2012, will be held in Austin, Texas, USA.
Conference Dates: May 5-10, 2012
Submission Deadline: September 23, 2011

AI Robot Related Conferences and Journals For My Research Part 3





Every time you clip your fingernails, think about what you have achieved since you last clipped them. If you can remember what you did, then you have not wasted your life.



Monday, April 06, 2009

Seven Weapons - Longevity Sword: Chapter 1 (3)

THREE

Miao Shaotian walked at the very end, gripping the pair of golden rings tightly in his hands, so tight that the blue veins on the back of his hands almost popped out.
He shouldn’t have come, but he had to.
That merchandise seemed to be emitting a strange field of attraction, sucking him close one step after another. He was not going to give it up until the last moment.
Two statue-like guards stood at the entrance of the underground tunnel. Then for every dozen steps forward, two more guards stood along the way, their faces as grim as the green stones in the walls.
A rampant green dragon was carved onto the stone walls.
It was said that the Green Dragon Clan had three hundred and sixty-five secret branches. This was undoubtedly one of them.
At the end of the underground tunnel stood a gate made of very thick iron railings.
Gongsun Jing took out a large chain of keys from his waistband and opened three locks with three of them. Only then did the two guards behind the iron bars pull the gate open.
But this was still not the last gate.
“I know many people can get in here. The guards here are not difficult to deal with. But the path from here onward would prove to be an arduous task,” Gongsun Jing explained.
“Why’s that?” Young Master Zhu asked.
“Between here and that stone door over there, there are a total of thirteen hidden traps. I can guarantee that there are no more than seven people in the entire world who could successfully get through all thirteen traps.”
“Luckily I am definitely not one of those seven people,” Young Master Zhu heaved a sigh.
“Why don’t you give it a try?” Gongsun Jing smiled even more politely.
“Perhaps I’ll give it a try at a later time, but not right now,” Young Master Zhu said.
“Why not now?” asked Gongsun Jing.
“Because I am perfectly happy staying alive,” replied Young Master Zhu.
The distance between the iron bars and the stone door was actually not far, but after hearing Gongsun Jing’s words, the path seemed to be ten times farther, and the stone door seemed to be even heavier.
Gongsun Jing used another three keys to unlock the door.
Behind the two-foot thick stone door was a nine-foot wide stone cell.
The room was ghastly and chilly, as if it were the center of an ancient emperor’s tomb, except that a giant iron chest sat in the spot where the coffin would be.
Opening the iron chest of course required another three keys, but that was not the end of it because there was a small iron chest inside the giant one.
“Such maximum security perhaps deserves a higher price from us,” Young Master Zhu said with a sigh.
“Young Master Zhu is very clever indeed,” said Gongsun Jing with a big smile.
Taking out the small iron chest, he unlocked it and opened the lid, but all of a sudden, his affable smile disappeared and his face looked as if someone had just shoved a rotten tomato down his throat.
The small iron chest turned out to be empty except for a single piece of paper.
The paper only showed nine words, “Thank you! You are such a nice guy!”
Now support the translator Lanny by following my blog and leaving comments! :)

Video of the Day:

Excerpt from the Shanghai World Expo Closing Ceremony Concert - Fusion of Art and Music. Enjoy!

Sunday, April 05, 2009

Predictive Policing and Wilderness Search and Rescue -- Can AI Help?

Santa Cruz police officers arresting a woman at a location
flagged by a computer program as high risk for car burglaries.
(Photo Credit: ERICA GOODE)
I came across an article recently about how the Santa Cruz police department has been testing a new method where they use computer programs to predict when and where crimes are likely to happen and then send cops to those areas for proactive policing. Although the movie "Minority Report" starring Tom Cruise immediately came to my mind, this is actually not the same thing. The computer program was developed by a group of researchers consisting of two mathematicians, an anthropologist, and a criminologist. The program uses a mathematical model to read in crime data from the same area for the past 8 years and then predicts the time and location of areas with a high probability of certain types of crime. The program apparently can read in new data daily. This kind of program is attracting interest from law enforcement agencies because the agencies are getting a lot more calls for service while staffing is much lower due to the poor economy, which requires them to deploy resources more effectively. The article did not disclose much detail about the mathematical model (because it is a news article, not a research paper, duh), but it is probably safe to assume the model tries to identify patterns from past crime data and then assigns probabilities to each grid cell (500 feet by 500 feet).

A multi-modal probability distribution predicting likely places
to find the missing person and a UAV path generated by algorithm.
I found this article especially interesting because in my research I am solving a similar problem with a similar approach. My research focuses on how an Unmanned Aerial Vehicle (UAV) can be used more efficiently and effectively by searchers and rescuers in wilderness search and rescue operations. One part of the problem is trying to predict the likely places where the lost person might be found. Another part of the problem is to generate an efficient path for the UAV so it covers those high-probability areas well within a limited flight time. I've also developed a mathematical model (a Bayesian approach) that uses terrain features to predict the lost person's movement behavior and also incorporates human behavior patterns from past data in the form of GPS track logs.

In both cases, the problems arise because of limited resources.

It is very important to remember that no one has a real crystal ball, so predictions cannot be 100% correct. Also, the prediction is in the form of a probability distribution, meaning that in the long run (over many cases) the predictions are likely correct a good percentage of the time, but for each individual case the prediction could very possibly be wrong. This applies to both predictive policing and wilderness search and rescue.

Another important question to ask is how you know your model is good and useful. This is difficult to answer because, again, we don't know the future. It is possible to pretend part of our past data is actually from "the future," but there are many metrics: what if the model performs well with respect to one metric but terribly with respect to another? Which metric to use might depend on the individual case. For example, should the number of arrests be used to measure the effectiveness of the system? Maybe sending police officers to certain areas would scare off criminals and actually result in a reduction in the number of arrests.
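
To make the backtesting idea concrete, here is a small sketch (the metric choices and function names are my own, purely illustrative): hold out some past incidents as "the future" and score a predicted probability map with two different metrics; the two metrics can easily rank two models differently.

import numpy as np

def log_likelihood_score(prob_map, incidents):
    # Average log-probability the map assigned to the cells where held-out incidents actually occurred.
    probs = np.array([prob_map[r, c] for r, c in incidents])
    return float(np.mean(np.log(probs + 1e-12)))

def top_coverage_score(prob_map, incidents, fraction=0.1):
    # Fraction of held-out incidents that fall inside the top `fraction` highest-probability cells.
    threshold = np.quantile(prob_map, 1.0 - fraction)
    hits = sum(1 for r, c in incidents if prob_map[r, c] >= threshold)
    return hits / len(incidents)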

The predictive policing problem probably holds an advantage over the wilderness search and rescue problem because many more crimes are committed than people getting lost in the wilderness, resulting in a much richer dataset. Also, path planning for police officers is a multi-agent problem, while we only give the searchers and rescuers one UAV.

One problem with such predictive systems is that users might grow to rely fully on the system. This is an issue of Trust in Automation. Under-trust might waste resources, but over-trust might also lead to bad consequences. One thing to remember is that no matter how complicated the mathematical model is, it is still a simplified version of reality. Therefore, calibrating the user's trust becomes an important issue, and the user really needs to know both the strengths and the weaknesses of the AI system. The product of the AI system should be complementary to the user, reducing the user's work and reminding the user of places that might be overlooked. The user should also be able to incorporate his/her domain knowledge and experience into the AI system to manage the autonomy. In my research, I am actually designing tools that allow users to take advantage of their expertise and manage autonomy at three different scales. I'll probably talk more about that in another blog post.

Anyway, it's good to see Artificial Intelligence used in yet another real-life application!







You shall not carry brass knuckles in Texas because they are considered an illegal weapon (but in California you'll be just fine). Don't you love the complexity of the US legal system, which, by the way, serves big corporations really well?





Saturday, April 04, 2009

Robot of the Day: Mars Pathfinder and the Sojourner Rover

On July 4th, 1997, which also happened to be Independence Day in the United States, the Mars Pathfinder successfully landed on Mars. It made history because it was:
  • The third lander (after the two Vikings) to successfully land on Mars.
  • The first time a bouncing-airbag landing mechanism was used for a lander.
  • The first time a robotic rover was successfully deployed on Mars.
  • The first time a space mission was broadcast live on the Internet.
After the successful landing, images of the mysterious red planet taken from the planet's surface were broadcast "live" on the Internet. This event had a profound and extraordinary impact on public interest in space exploration, robotics technology, and web technologies, and it inspired a generation of potential roboticists.

The Mars Pathfinder consisted of a lander and a lightweight wheeled robotic rover named Sojourner (after Sojourner Truth, a nineteenth-century black feminist and campaigner for the abolition of slavery). It was wrapped in large airbags. After entering the Martian atmosphere, a parachute was first deployed to slow the falling capsule. Then a self-inflating airbag system in the shape of a tetrahedron was released, which "soft" landed on the Martian surface and rolled and bounced up and down all over the place. After the tetrahedron finally stopped rolling, the airbags were deflated and the lander unfolded itself, releasing the robotic rover. It is simply mind-boggling to see how the lander and the rover survived such vigorous movements, especially when one would have expected the scientific equipment on board to be very delicate. The video below shows some animations and footage of the landing process.


The main objective of the mission was to demonstrate that it is possible to perform extraterrestrial exploration at low cost. As an added benefit, the Mars Pathfinder also conducted scientific experiments with cameras, atmospheric structure instruments, and a spectrometer on the rover. The rover had six independently controlled wheels and performed rock analysis as it roved about not far from the lander. The video below shows some footage of the rover moving about.


Roughly three months later, mission control lost contact with the Pathfinder, but the mission had already exceeded its goals during the first month. Although still visible from the Mars Reconnaissance Orbiter high in the Martian sky, the robot (system) had become fully autonomous and just wandered about like a lonely ghost. Just as its name suggests, it had finally broken free from its human masters and become a free, uh, robot!

When I interned at NASA Ames in California in 2009, I was very fortunate to spot a prototype of the Sojourner Rover at the Intelligent Robotics Group (see pic on the left). I am a strong believer in space colonization, because we must "spread the seeds of human civilization" before we totally destroy our planet Earth. And to make space colonization possible, we totally need robots that can build habitats for us. I wish the government would spend more on robotics and space exploration instead of sending troops to other countries to torture their citizens under the name of spreading "democracy" and "freedom".


Anyway, if you want to find out more about the Mars Pathfinder, you can watch "The Pathfinders" Documentary on YouTube.



Picture of the Day:

Photo of a meteor taken by an astronaut on the International Space Station.

Friday, April 03, 2009

AI Robot Related Conferences and Journals For My Research (Part 1)

Since my dissertation will be a paper-based dissertation, I need to publish a bunch of papers. My advisor has asked me to think about a schedule and a plan for where to submit my papers. There are many AI robot related conferences and journals out there. However, only some of them are quality ones. In this blog post I'll list some of the top ones, discuss what each conference is about, and identify paper submission deadlines. So if you are also thinking about publishing papers in the AI robot field, look no further. I've already done the homework for you.

Top Conferences
==================================================================

AAAI -- Association for the Advancement of Artificial Intelligence

AAAI is a top-tier multi-track conference held yearly. It is a prestigious conference with an acceptance rate roughly between 25% and 30%. A very wide range of AI topics is covered at the conference, including multi-agent systems, machine learning, computer vision, knowledge representation and reasoning, natural language processing, search and planning, integrated intelligence, robotics, etc. The conference also includes many tutorials, workshops, a doctoral consortium, exhibitions, and competitions.

The AAAI conference is devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. It also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.

I have been fortunate enough to attend the AAAI conference twice (2007 in Vancouver, BC, and 2010 in Atlanta, GA, USA) and published a paper at the 2010 conference under the Integrated Intelligence track. I was also invited to present a poster for the same paper. At that same conference, I watched and blogged about the Robot Chess Competition.

The next AAAI conference, AAAI-12 (the 26th), will be held in Toronto, Ontario, Canada.
Conference Dates: July 22-26, 2012
Submission Deadline: February 8, 2012 (roughly)


IJCAI -- International Joint Conference on Artificial Intelligence

IJCAI is also a top-tier multi-track conference, held biennially in odd-numbered years. The acceptance rate for this conference is roughly between 20% and 26%, which makes it even more selective than many AI journals. It also covers a wide range of AI topics such as multi-agent systems, uncertainty in AI, and robotics and vision. One difference between AAAI and IJCAI is that IJCAI is more of a truly international conference, with paper submissions from all over the world.

One great thing about this conference is that the full proceedings are freely available to everyone from its website. They also provide video recordings of each session at the conference, so even if you can't be present at the conference, you can still watch all the presentations as if you were really there. This is very rare for AI robotics conferences as far as I know, and it is a wonderful service they are providing!!

I have not had a chance to submit anything to IJCAI. If possible, I'd really like to give it a try for the next one, and if you read the next line, you'll know why.

The next IJCAI conference, IJCAI-13 (the 23rd), will be held in Beijing, China.
Conference Dates: August 3-9, 2013
Submission Deadline: January 25, 2013 (roughly)

AI Robot Related Conferences and Journals For My Research Part 2


Picture of the Day:

We found this very friendly black poodle wandering on the street, so we took her home while we looked for her owner. She was such a wonderful little thing and our entire family loved her. Luckily the owner saw our posts online and contacted us. My daughter was very sad to see her go, but we are very happy that she could finally go home! So if you have a pet, make sure you have a tag with your address and phone number. Could be real handy at times!

Thursday, April 02, 2009

Thrun and Norvig Offer Free "Intro to AI" Course at Stanford

If you have studied Artificial Intelligence before, you probably already know who Thrun and Norvig are. For those who have no idea, let me first introduce the two prominent researchers in the AI community.

Sebastian Thrun is a Computer Science professor at Stanford University. He also works for Google at the same time. His most famous achievement was leading the team that won the 2005 DARPA Grand Challenge, a competition where unmanned cars had to cross the Nevada desert fully autonomously. He also helped develop the Google self-driving car, which has been quietly driving all over California freeways and local streets (I am not really sure if this is legal or not). He is also a co-author of a wonderful book, Probabilistic Robotics, which should be, in my opinion, a must-read for robotics researchers. I have had the pleasure of meeting him briefly at an AAAI conference.

Peter Norvig, who previously worked for Sun and NASA, is now Director of Research at Google. He is a co-author of the most famous AI textbook, Artificial Intelligence: A Modern Approach, a book any AI researcher should have in his/her collection. This book was also my textbook when I took the Intro to AI course at BYU. I was fortunate enough to attend a presentation Dr. Norvig gave at Carnegie Mellon's campus at NASA Ames and asked him many questions (maybe too many). His current research interest lies in data-driven approaches to solving AI problems, which should come as no surprise since he works for Google.

So here's the great news!! The two will be teaching the course Introduction to Artificial Intelligence at Stanford during the Fall Semester of 2011 and have decided to open the course to everyone for free. This means that not only will all the course materials be publicly available, including videos of the lectures in 15-minute chunks for your convenience (each lecture runs 75 minutes long), but you can also do all the homework, course assignments, and quizzes and take exams just like a real Stanford student. They will be graded, and if you pass the course, you get a certificate of completion from the instructors. You can also compare your grade to the grades of the real Stanford students.

Anyone can sign up for the course for free at this web page until September 10th. Many of my friends have already signed up. The Intro to AI class I took at BYU is one of my favorite classes ever. I am sure the one offered at Stanford will be just as fun. I, however, don't plan to do any of the assignments or take any of the exams :) and will only enjoy the videos of the lectures.


I think it is wonderful that people are offering their teaching and knowledge to the entire world for free, out of their love and passion for the subjects. In the past, MIT has offered free courses online, and there is, of course, the famous and wonderful Khan Academy. If we all contribute a little to the world without thinking about what we get in return, we can make the world a better place every day!

Picture of the Day:

I was at the little restaurant called The Italian Place enjoying lunch when I found this on my computer. Is this because of the Italian connection?

Wednesday, April 01, 2009

Robot of the Day: Clocky, an R2-D2-like Alarm Clock on Steroids

Clocky is a robot alarm clock created by Gauri Nanda, a graduate of the MIT Media Lab. Unlike other stationary alarm clocks, this R2-D2- or Droid-like little robot rolls off your nightstand when it's time to wake up and rolls all over your room while making cute noises. This also means that, in order to stop the alarm, you can't just keep hitting the snooze button; you have to really get up and chase Clocky down to turn it off. What a great idea!! In fact, this idea is so great that after graduation, Nanda started her own company, Nanda Home, to commercialize the product and has already made millions.

This is yet another one of those innovative ideas of using simple robots to solve real-world problems. Clocky is not a complicated robot at all. In fact, it doesn't really have any navigational capabilities or make any intelligent decisions. It just runs around randomly, looks cute, and annoys the hell out of you. But you have to admit, it does get the job done. The only possible downside is that you might be so mad after waking up that you want to throw it out of the window. Although it's built to withstand falls from tables or nightstands, it probably won't survive a free fall from anything higher than the second floor. The robot sells for around $49, and you can find it in many different colors from places such as Amazon. I must admit, this little robot makes a great gift idea. So if you ever plan to send me a gift...cough...you know...


Last year, the company gave Clocky a cousin named Tocky. Tocky rolls around like a ball, can also play MP3 files, and is $20 more expensive. So pick the one that's easier for you to catch.


I really wonder if someone will make another robot that chases Clocky down and shuts it off, so we can go back to our sweet dreams uninterrupted. I can imagine how hard it would be to develop such a robot, because you'd probably have to understand concepts such as the Kalman Filter or the Particle Filter.






If it has a wiki page, it's worth something! (Now someone please create a wiki page for me! :)




Video of the Day:

Check out the Flying Alarm Clock. At first I thought it was Clocky moving in 3D. But I was wrong!