

Leave me comments so I know people are actually reading my blogs! Thanks!

Showing posts with label AI and Robots. Show all posts

Friday, April 10, 2009

How to find all the modes of a 3D probability distribution surface

A 3D probability distribution surface can represent the likelihood of certain events in a specific region, where a higher point on the surface means the event is more likely to happen at that location. For example, a 3D probability distribution surface created for a Wilderness Search and Rescue (WiSAR) operation, whether generated systematically, manually, or with a hybrid approach, can show searchers the areas where the missing person is more likely to be found. The distribution map can be used to better allocate search resources and to generate flight paths for an Unmanned Aerial Vehicle (UAV).
An example 3D probability distribution surface
Because different path-planning algorithms may be better suited for different probability distributions (I appeal to the No-Free-Lunch theorem), identifying the type of distribution beforehand can help us decide what algorithm to use for the path-planning task. In our decision process, we particularly care about how many modes the probability distribution has. So how can we automatically identify all the modes in a 3D probability distribution surface? Here I'll describe the algorithm we used.

In our case, the 3D probability distribution surface is represented by a matrix/table where each value represents the height of a point. You can think of this distribution as a gray-scale image where the gray value of each pixel represents the height of that point. We use a Local Hill Climbing type algorithm with 8-connected neighbors.

1. Down sample the distribution
If the distribution map is very large, it might be a good idea to down sample the distribution to improve algorithm speed. We assume the surface is noise-free. If the surface is noisy, we can also smooth it with a Gaussian filter (think image processing).
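Here is a minimal Python sketch of this step (just an illustration; the down-sampling factor, filter width, and function names are my own choices, not necessarily what we used):

```python
# Step 1 sketch: shrink the distribution map and optionally smooth it.
# Assumes `dist` is a 2D NumPy array of probabilities; parameter values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def preprocess(dist, factor=0.25, sigma=1.0):
    small = zoom(dist, factor, order=1)       # bilinear down-sampling
    smooth = gaussian_filter(small, sigma)    # smoothing, only needed if the surface is noisy
    return smooth / smooth.sum()              # renormalize so it remains a probability distribution
```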

2. Check for a uniform distribution (a flat surface)
It is a good idea to check whether the probability distribution is a uniform distribution. Just check to see if all values in the matrix are identical. If a uniform distribution is identified, we know the distribution has 0 modes and we are done.
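In code this check is tiny (an illustrative sketch; the tolerance is my own choice):

```python
import numpy as np

def is_uniform(dist, tol=1e-12):
    # A flat surface has every cell equal to the first cell, i.e. zero modes.
    return bool(np.all(np.abs(dist - dist.flat[0]) < tol))
```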

3. Local Hill Climbing with Memory
Start from a point on the surface and then check its neighbors (8-connected). As soon as a neighbor with the same or better value is found, we "climb" to that point. The process is repeated until we reach a point (hilltop) where all neighbors have smaller values. As we "climb" and check neighbors, we mark all the points we visited along the way. And when we check neighbors, we only check points we have not visited before. This way we avoid finding a mode we had found before. Once we find a "mode", we can start from another unvisited point on the surface and do another Local Hill Climbing. Here I use quotes around the word mode because we are not yet sure whether the "mode" we found is a real mode.
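Below is a small Python sketch of one way to read this step. The exact bookkeeping (which cells count as "visited") is a simplification of my own, so treat it as an illustration of the idea rather than our exact implementation:

```python
import numpy as np

# 8-connected neighborhood offsets
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1),
             (0, -1),           (0, 1),
             (1, -1),  (1, 0),  (1, 1)]

def climb(dist, start, visited):
    """Climb from `start`, marking every cell we step on, until no unvisited
    neighbor is at least as high. Returns the candidate "mode" cell."""
    r, c = start
    visited[r, c] = True
    moved = True
    while moved:
        moved = False
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < dist.shape[0] and 0 <= nc < dist.shape[1]
                    and not visited[nr, nc] and dist[nr, nc] >= dist[r, c]):
                r, c = nr, nc                 # climb to the equal-or-higher neighbor
                visited[r, c] = True
                moved = True
                break
    return (r, c)

def find_candidate_modes(dist):
    """Run Local Hill Climbing with Memory from every still-unvisited cell."""
    visited = np.zeros(dist.shape, dtype=bool)
    candidates = []
    for r in range(dist.shape[0]):
        for c in range(dist.shape[1]):
            if not visited[r, c]:
                candidates.append(climb(dist, (r, c), visited))
    return candidates
```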

4. Make sure the "mode" we found is a real mode
An Even-Height Great Wall
The "mode" we found using Local Hill Climbing might not actually be a real mode. It might be right next to a mode previously found and have a lower value (because we only checked unvisited neighbors in the previous step). It might also be part of another flat-surface mode where the mode consists of multiple points with identical values (think of a hilltop that looks like a plateau or think of a ridge). Things get even more complicated with special distributions such as this one on the right. And the "mode" point we found might be connected to a previously found mode through other points with the same value (e.g, the "mode" point is the end point of the short branch in the middle of the image.

Therefore, we need to keep track of all points leading to the final "mode" point that have identical values and check all the visited neighbors of these points, making sure this flat surface is not part of a previously found mode. If these points make up a real new mode, we mark them with a unique mode id (e.g., mode 3). If they are only part of a previously found mode, we mark them as such (e.g., mode 2). If one of them is right next to a previously found mode but has a lower value, we mark these points as non-mode points. This step is almost like performing a Connected-Component Labeling operation in Computer Vision.
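Here is a simplified Python sketch of this verification step. Instead of only tracking the flat points that led to the candidate, it flood-fills the entire equal-height region around the candidate and then decides how to label it, so it is an approximation of the idea rather than the exact procedure we used:

```python
from collections import deque
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def classify_candidate(dist, peak, mode_map, next_mode_id):
    """Flood-fill the equal-height region containing a candidate peak and decide whether
    it is a new mode, part of an already-labelled mode, or not a mode at all.
    `mode_map` starts at 0 everywhere; modes get positive ids, non-mode cells get -1."""
    h = dist[peak]
    plateau, frontier, seen = [], deque([peak]), {peak}
    existing_id, has_higher_neighbor = 0, False
    rows, cols = dist.shape
    while frontier:
        r, c = frontier.popleft()
        plateau.append((r, c))
        if mode_map[r, c] > 0:
            existing_id = mode_map[r, c]        # the flat region runs into a previously found mode
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if dist[nr, nc] == h and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
            elif dist[nr, nc] > h:
                has_higher_neighbor = True      # a taller neighbor means this flat patch is a shoulder
    if has_higher_neighbor:
        label = -1                              # non-mode points
    elif existing_id:
        label = existing_id                     # part of a previously found mode
    else:
        label = next_mode_id                    # a genuinely new mode
    for r, c in plateau:
        mode_map[r, c] = label
    return label
```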

At the end of the algorithm run, we will have a count of how many modes the probability distribution has and also a map with all the mode points marked. With the Even-Height Great Wall distribution, the map would look just like the image (white pixels marking mode points), with 1 mode. And within milliseconds, the algorithm can identify the 4 modes in the example 3D surface above.

That's it! If you ever need to do this for your projects, you now know how!








Recursive functions work great for local hill climbing until you get a stack overflow.

Thursday, April 09, 2009

AI Robot Related Conferences and Journals For My Research (Part 3)

AI Robot Related Conferences and Journals For My Research Part 2

Top Conferences
==================================================================

IROS -- IEEE/RSJ International Conference on Intelligent Robots and Systems

IROS is a multi-track conference held annually. It is a premier international conference in robotics and intelligent systems, bringing together an international community of researchers, educators, and practitioners to discuss the latest advancements in the field.

Every year, thousands of people from all over the world attend the IROS conference and a large number of papers are published there. For example, the IROS 2011 conference received 2541 papers and proposals and accepted 790 papers, a 32% acceptance rate. However, the acceptance rate for IROS is normally much higher, at around 55%. Every year IROS has a different theme. The theme for IROS 2011 is Human-Centered Robotics, and the theme for IROS 2012 is Robotics for Quality of Life and Sustainable Development. However, people generally ignore the theme and submit whatever they have. I was fortunate enough to attend the IROS conference in 2009 and published a paper on UAV path planning there.

The next IROS conference, IROS 2011, will be held in San Francisco, California, USA.
Conference Dates: September 25-30, 2011
Submission Deadline: March 28, 2011

The next IROS conference you can submit a paper to is IROS 2012. It will be held in Vilamoura, Algarve, Portugal.
Conference Dates: October 7-10, 2012
Submission Deadline: March 10, 2012



ICRA -- IEEE International Conference on Robotics and Automation

ICRA is a multi-track premier conference in the robotics community held annually. It is in the same league as IROS and is also a major international conference with large attendance. The ICRA 2011 conference held in Shanghai, China welcomed more than 1,500 people from around the world. The acceptance rate for ICRA is about 45%.

ICRA also has yearly themes. The ICRA 2011 conference's theme was "Better Robots, Better Life". The ICRA 2012 conference theme will be "Robots and Automation: Innovation for Tomorrow's Needs". Again, if you are thinking about submitting something to ICRA, don't worry about the themes. Just submit what you have on whatever topic, as long as it is related to robots or automation.

I have submitted a paper to ICRA before, but very unfortunately, the paper fell into the hands of several electrical engineering reviewers because I picked the wrong keywords. They seem to hold grudges against computer science researchers. The same paper was accepted at IROS without any major revision. It is likely that I'll be submitting to ICRA again in the future, but I will be super careful about which keywords to use this time!!

The next ICRA conference, ICRA 2012, will be held in St. Paul, Minnesota, USA.
Conference Dates: May 14-18, 2012
Submission Deadline: September 16, 2011

AI Robot Related Conferences and Journals For My Research Part 4


Video of the Day:

This is why the back of my jersey has Messi's name on it!

Tuesday, April 07, 2009

AI Robot Related Conferences and Journals For My Research (Part 2)

AI Robot Related Conferences and Journals For My Research Part 1

Top Conferences
==================================================================

HRI -- ACM/IEEE International Conference on Human-Robot Interaction

HRI is a single-track, highly selective annual international conference that seeks to showcase the very best interdisciplinary and multidisciplinary research in human-robot interaction, with roots in social psychology, cognitive science, HCI, human factors, artificial intelligence, robotics, organizational behavior, anthropology, and many more fields. HRI is a relatively new and small conference because the HRI field itself is relatively new. The 1st HRI conference was actually held here in Salt Lake City, Utah, in March 2006, and my advisor, Dr. Michael A. Goodrich, was the General Chair for the conference. It is very unfortunate that I only started grad school two months after the 1st HRI conference and missed this great opportunity. *Sigh* HRI has been growing rapidly and gaining attention from many research groups and institutions. The last HRI conference (the 6th) had attendance exceeding 300 participants. HRI is also a top-tier conference with an acceptance rate between 19% and 25%. As the conference becomes more and more popular, researchers from many disciplines (e.g., human factors, cognitive science, psychology, linguistics, etc.) have begun participating in the conference.

The venue of the HRI conference rotates among North America, Europe, and Asia. I have been lucky enough to attend the conference twice, once in 2010 and once in 2011. In 2010, I attended the HRI Young Pioneer Workshop. The workshop is a great event because you not only get to make friends with a bunch of young researchers in the HRI field before the conference starts, you also get to see what other young researchers are working on. Besides, NSF is generous enough to cover a good portion of the airfare, which is a great help for poor grad students. I liked the workshop so much that I joined the organizing committee for the next year's HRI Young Pioneer Workshop and also hosted the panel discussion at the workshop. That was also the reason why I was able to attend HRI 2011. At both HRI 2010 and HRI 2011, I also guarded my advisor's late-breaking report poster sessions because he couldn't make it.

I have never submitted anything to the main HRI conference. Since this is the top conference in my research field, I'd like to publish something before I graduate.

The next HRI conference, HRI 2012 (the 7th), will be held in Boston, Massachusetts, USA.
Conference Dates: March 5-8, 2012
Submission Deadline: September 9, 2011



CHI -- ACM/SIGCHI Human Factors in Computing Systems - the CHI Conference

CHI is considered the most prestigious conference in the field of human-computer interaction. It is a multi-track conference held annually. Because of the heavy interest and involvement from industry, large tech companies such as Microsoft, Google, and Apple are frequent participants and organizers of the conference. CHI is a top-tier conference with an acceptance rate between 20% and 25%.

Human-Computer Interaction is a broad field that includes both software and hardware. The goal of HCI is to develop methods, interfaces, interaction techniques to make computer devices more usable and receptive to the user's needs. These days computer devices could include a wide variety of things such as cell phones, tablets, game consoles, or gadgets. Thanks to the advancement of sensor technologies, a whole set of rich interaction techniques have emerged to work with gestures, device orientations, and motion of the device.

Many of the HCI design principles, interface designs, and interaction techniques are relevant in Human-Robot Interaction. After all, a robot must have some kind of computer embedded (whether tiny or full-size, whether one or multiple). In many HRI tasks, the human user could very well be interacting with the robot through a regular computer or a touch screen device (think tele-presence, for example). I have never attended the CHI conference before, but I have heard a lot about it from Dr. Dan Olsen at BYU because he was always some kind of chair in the CHI organizing committee. In fact, he'll be the paper chair in the next CHI conference.

The next CHI conference, CHI 2012, will be held in Austin, Texas, USA.
Conference Dates: May 5-10, 2012
Submission Deadline: September 23, 2011

AI Robot Related Conferences and Journals For My Research Part 3





Every time you clip your fingernails, think about what you have achieved since the last time you clipped them. If you can remember what you did, then you have not wasted your life.



Sunday, April 05, 2009

Predictive Policing and Wilderness Search and Rescue -- Can AI Help?

Santa Cruz police officers arresting a woman at a location
flagged by a computer program as high risk for car burglaries.
(Photo Credit: ERICA GOODE)
I came across an article recently about how the Santa Cruz police department has been testing a new method where they use computer programs to predict when and where crimes are likely to happen and then send cops to that area for proactive policing. Although the movie "Minority Report" starring Tom Cruise immediately came to my mind, this is actually not the same thing. The computer program was developed by a group of researchers consisting of two mathematicians, an anthropologist, and a criminologist. The program uses a mathematical model to read in crime data from the same area for the past 8 years and then predict the times and locations of areas with high probability for certain types of crime. The program apparently can read in new data daily. This kind of program is attracting interest from law enforcement agencies because the agencies are getting a lot more calls for service while staffing has shrunk due to the poor economy, which requires them to deploy resources more effectively. The article did not disclose much detail about the mathematical model (because it is a news article, not a research paper, duh), but it is probably safe to assume the model tries to identify patterns from past crime data and then assign probabilities to each grid cell (500 feet by 500 feet).
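Just to make the idea concrete, here is a toy Python sketch of turning past incident locations into a gridded probability surface. This is purely my own illustration (the cell size, recency weighting, and inputs are all assumptions), not the researchers' actual model:

```python
# Toy sketch: bin past crime locations into 500 ft x 500 ft cells, weight recent
# incidents more heavily, and normalize the counts into a probability surface.
import numpy as np

def crime_probability_grid(xs, ys, days_ago, x_max, y_max, cell=500.0, half_life=180.0):
    nx, ny = int(np.ceil(x_max / cell)), int(np.ceil(y_max / cell))
    grid = np.zeros((ny, nx))
    weights = 0.5 ** (np.asarray(days_ago, dtype=float) / half_life)  # older incidents count less
    for x, y, w in zip(xs, ys, weights):
        gx = min(int(x // cell), nx - 1)
        gy = min(int(y // cell), ny - 1)
        grid[gy, gx] += w
    return grid / grid.sum()   # each cell: relative likelihood of a future incident there
```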

A multi-modal probability distribution predicting likely places
to find the missing person, and a UAV path generated by an algorithm.
I found this article especially interesting because in my research I am solving a similar problem with a similar approach. My research focuses on how an Unmanned Aerial Vehicle (UAV) can be used more efficiently and effectively by searchers and rescuers in wilderness search and rescue operations. One part of the problem is predicting where the lost person is likely to be found. Another part of the problem is generating an efficient path for the UAV so it will cover those high-probability areas well with limited flight time. I've also developed a mathematical model (a Bayesian approach) that uses terrain features to predict the lost person's movement behavior and incorporates human behavior patterns from past data in the form of GPS track logs.

In both cases, the problems arise because of limited resources.

It is very important to remember that no one has a real crystal ball, so predictions cannot be 100% correct. Also, the prediction is in the form of a probability distribution, meaning that in the long run (with many cases) the predictions are likely correct a good percentage of the time, but for each individual case, the prediction could very possibly be wrong. This applies to both predictive policing and wilderness search and rescue.

Another important question to ask is how you know your model is good and useful. This is difficult to answer because, again, we don't know the future. It is possible to pretend part of our past data is actually from "the future," but there are many metrics: what if the model performed well with respect to one metric, but terribly with respect to another? Which metric to use might depend on the individual case. For example, should the number of arrests be used to measure the effectiveness of the system? Maybe sending police officers to certain areas would scare off criminals and actually result in a reduction in the number of arrests.

The predictive policing problem probably holds an advantage over the wilderness search and rescue problem because many more crimes are committed than people getting lost in the wilderness, resulting in a much richer dataset. Also, path planning for police officers is a multiagent problem, while we only give the searchers and rescuers one UAV.

One problem with such predictive systems is that users might grow to rely fully on the system. This is an issue of Trust in Automation. Under-trust might waste resources, but over-trust might also lead to bad consequences. One thing to remember is that no matter how complicated the mathematical model is, it is still a simplified version of reality. Therefore, calibrating the user's trust becomes an important issue, and the user really needs to know the strengths and also the weaknesses of the AI system. The product of the AI system should be complementary to the user, reducing the user's work and reminding the user of places that might be overlooked. The user also should be able to incorporate his/her domain knowledge and experience into the AI system to manage the autonomy. In my research, I am actually designing tools that allow users to take advantage of their expertise and manage autonomy at three different scales. I'll probably talk more about that in another blog post.

Anyway, it's good to see Artificial Intelligence used in yet another real-life application!







You shall not carry brass knuckles in Texas because they are considered an illegal weapon (but in California you'll be just fine). Don't you love the complexity of the US legal system, which, by the way, serves big corporations really well.





Friday, April 03, 2009

AI Robot Related Conferences and Journals For My Research (Part 1)

Since my dissertation will be a paper-based dissertation, I need to publish a bunch of papers. My advisor has asked me to think about a schedule and a plan for where to submit my papers. There are many AI robot related conferences and journals out there. However, only some of them are quality ones. In this blog post I'll list some of the top ones, discuss what each conference is about, and identify paper submission deadlines. So if you are also thinking about publishing papers in the AI robot field, look no further. I've already done the homework for you.

Top Conferences
==================================================================

AAAI -- Association for the Advancement of Artificial Intelligence

AAAI is a top-tier multi-track conference held yearly. It is a prestigious conference with an acceptance rate roughly between 25% and 30%. A very wide range of AI topics is covered at the conference, including multi-agent systems, machine learning, computer vision, knowledge representation and reasoning, natural language processing, search and planning, integrated intelligence, robotics, etc. The conference also includes many tutorials, workshops, consortia, exhibitions, and competitions.

The AAAI conference is devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. It also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.

I have been fortunate enough to attend the AAAI conference twice (2007 in Vancouver, BC and 2010 in Atlanta, GA, USA) and published a paper at the 2010 conference under the Integrated Intelligence track. I was also invited to present a poster for the same paper. In that same conference, I watched and blogged about the Robot Chess Competition.

The next AAAI conference, AAAI-12 (the 26th), will be held in Toronto, Ontario, Canada.
Conference Dates: July 22-26, 2012
Submission Deadline: February 8, 2012 (roughly)


IJCAI -- International Joint Conference on Artificial Intelligence

IJCAI is also a top-tier multi-track conference, held biennially in odd-numbered years. The acceptance rate for this conference is roughly between 20% and 26%, which makes it even more selective than many AI journals. It also covers a wide range of AI topics such as multiagent systems, uncertainty in AI, and robotics and vision. One difference between AAAI and IJCAI is that IJCAI is more of a truly international conference, with paper submissions from all over the world.

One great thing about this conference is that the full proceedings are freely available to everyone on its website. They also provide video recordings of each session at the conference, so you don't have to be present at the conference and can still watch all the presentations as if you were really there. This is very rare for AI and robotics conferences as far as I know, and it is a wonderful service they are providing!!

I have not had a chance to submit anything to IJCAI. If possible, I'd really like to give it a try for the next one, and if you read the next line, you'll know why.

The next IJCAI conference, IJCAI-13 (the 23rd), will be held in Beijing, China.
Conference Dates: August 3-9, 2013
Submission Deadline: January 25, 2013 (roughly)

AI Robot Related Conferences and Journals For My Research Part 2


Picture of the Day:

We found this very friendly black poodle wandering on the street, so we took her home while we looked for her owner. She was such a wonderful little thing and our entire family loved her. Luckily the owner saw our posts online and contacted us. My daughter was very sad to see her go, but we are very happy that she could finally go home! So if you have a pet, make sure you have a tag with your address and phone number. Could be real handy at times!

Thursday, April 02, 2009

Thrun and Norvig Offer Free "Intro to AI" Course at Stanford

If you have studied Artificial Intelligence before, you probably already know who Thrun and Norvig are. For those who have no idea, let me first introduce these two prominent researchers in the AI community.

Sebastian Thrun is a Computer Science professor at Stanford University. He also works for Google at the same time. His most famous achievement was leading the team that won the 2005 DARPA Grand Challenge, a competition where unmanned cars had to cross the Nevada desert fully autonomously. He also helped develop the Google self-driving car, which has been secretly driving all over California freeways and local streets (I am not really sure whether this is legal or not). He is also the co-author of a wonderful book, Probabilistic Robotics, which should be, in my opinion, a must-read for robotics researchers. I have had the pleasure of meeting him briefly at an AAAI conference.

Peter Norvig, who had worked for Sun and NASA before, is now director of research at Google. He is the co-author of the most famous AI textbook, Artificial Intelligence: A Modern Approach, a book any AI researcher should have in his/her collection. This book was also my textbook when I took the Intro to AI course at BYU. I was fortunate enough to attend a presentation Dr. Norvig gave at Carnegie Mellon's campus at NASA Ames and asked him many questions (maybe too many). His current research interest lies in data-driven approaches to solving AI problems, which should come as no surprise since he works for Google.

So here's the great news!! The two will be teaching the course Introduction to Artificial Intelligence at Stanford during the Fall Semester of 2011 and have decided to open the course to everyone for free. This means that not only will all the course materials be publicly available, including videos of the lectures in 15-minute chunks for your convenience (each lecture runs 75 minutes long), but you can also do all the homework, course assignments, and quizzes and take exams just like a real Stanford student. They will be graded, and if you pass the course, you get a certificate of completion from the instructors. You can also compare your grade to the grades of the real Stanford students.

Anyone can sign up for the course for free at this web page up to September 10th. Many of my friends have already signed up. The Intro to AI class I took at BYU is one of my favorite classes ever in my entire life. I am sure the one offered at Stanford will be just as fun. I, however, don't plan to do any of the assignments or take any of the exams, :) and will only enjoy the videos of the lectures.


I think it is wonderful that people are offering their teaching and knowledge to the entire world for free, because of their love and passion for the subjects. In the past, MIT has offered free courses online, and there is, of course, the famous and wonderful Khan Academy. If we all contribute a little to the world without thinking about what we get in return, we can make the world a better place every day!

Picture of the Day:

I was at the little restaurant called The Italian Place enjoying lunch when I found this on my computer. Is this because of the Italian connection?

Sunday, March 29, 2009

Obama Announces National Robotics Initiative of $70 Million Per Year

With the possibility of graduation actually on the horizon, I thought it might be a good idea to start a new topic in my blog: Robotics Jobs. This will help me research what kinds of robotics jobs are out there, since I don't plan to be a professor and stay in Academia. It is encouraging to see more and more robotics jobs in the industry emerging and starting to make a difference in people's lives, although most of them are at small start-up companies. Hopefully this series of blog posts will be interesting and helpful for other people who are also searching for the right robotics-related jobs. I see a great new era of robotics applications just about to knock on our doors, and it is great to be a part of this effort to transform new technologies developed at university research labs into the real world and change the world! Good luck to me and all robotics job hunters out there! And I'll start the series with the positive news that U.S. President Obama is allocating funding to create more robotics jobs!

Obama Giving a Speech at Carnegie Mellon
(Credit: White House)
In a recent visit to Carnegie Mellon University's National Robotics Engineering Center, Obama announced a new National Robotics Initiative seeking to advance "next generation robotics." The new initiative will provide $70 million per year to fund new robotics projects, focusing on robots that can work closely with humans. The funding will be squeezed out from the National Science Foundation, the National Institutes of Health, NASA, and the Department of Agriculture.

Obama Meeting a Japanese Android (Credit: AIST)

Obama loves robots (see photo on the left) and has a strong belief that advancing technology makes US companies more competitive and creates more jobs. But in order to help the economy grow, one important factor is how technology can be transitioned from research labs (Academia) into the business world (industry). Therefore, it is likely the money will be spent on research projects that are more applied than fundamental, and private companies working in collaboration with university research labs will also have access to this funding. The money can be spent on developing real commercial robots -- really creating more robotics jobs!!

In the past, private robotics companies have had opportunities to get funding from the government mostly through military agencies to develop robotic weapons. Some also get a bit of money through the program called Small Business Innovation Research (SBIR). The new initiative focuses on how humans and robots can work as a team, where humans can supervise and advise robots using human expert knowledge. This is very different from programming an industrial robot to perform dull, repetitive tasks that require precision and speed, for example, in a food processing plant. This is especially good news for me because my research focuses on how humans can better manage AI/robot autonomy by leveraging their rich experience and domain expertise in dynamic tasks and environments. The program solicitation states:
This theme recognizes the emerging mechanical, electrical and software technologies that will make the next generation of robotic systems able to safely co-exist in close proximity to humans in the pursuit of mundane, dangerous, precise or expensive tasks. Co-robots will need to establish a symbiotic relationship with their human partners, each leveraging their relative strengths in the planning and performance of a task. This means, among other things, that for broad diffusion, access, and use (and hence, to achieve societal impacts), co-robots must be relatively cheap, easy to use, and available anywhere. As the US population ages and becomes more culturally and linguistically diverse, these co-robots may serve to increase the efficiency, productivity and safety of individuals in all activities and phases of life, and their ubiquitous deployment has the potential to measurably improve the state of national health, education and learning, personal and public safety, security, the character and composition of a heterogeneous workforce, and the economy, more generally.
I applaud Obama's effort in advancing robotics technology and creating more robotics jobs! The funding is still very small compared to, for example, the $20 billion per year the government is spending on air conditioning for troops in Iraq and Afghanistan, especially since robots are expensive (e.g., a Honda UAV copter costs $300K and a humanoid robot costs $300K to millions). But it is certainly a good start. Let's hope whoever gets elected as the next president will keep such initiatives alive!



Obama's speech about robots and technology.

You can listen to Obama's entire speech (above) if you are bored. You can also check out the IEEE Spectrum article for more details. The speech mentioned Obama's visit to a local company called RedZone Robotics, which makes robots to explore water and sewer pipes. Guess I'll have to check this company out and then post a blog about it next time. Enough for this one. Ciao!

Video of the Day:


A funny robot video from the Portal 2 game.

Saturday, March 21, 2009

Machine Learning and Chicken Fighting

When chickens attack, we have AI
(photo credit: Olena Istomina/iStockphoto)
Does machine learning have anything to do with chicken fighting? It turns out they can certainly be related, according to this article I ran across at IEEE Spectrum.

A casual lunch conversation between robotics engineer Stephen Roberts and animal welfare specialist Marian Dawkins sparked an idea of using pattern-recognition technology to help identify misbehaving hens.

Chicken fighting is a difficult and costly problem for poultry farmers. Deprived of space to forage and peck (birds have the instinct to peck the ground in search of food), chickens are likely to peck at other things, like one another, sometimes to death. A couple of tense chickens might escalate into an all-out fight. But if a farmer can identify these stressed-out chickens early, the problem can be reduced. However, trained experts cannot watch the flocks constantly.

Stephen has been researching human crowd movement using a machine-vision system based on optical flow -- a technique that measures how the pixels "flow" from one frame to the next in a video in order to identify object movements. When this technology was applied to chicken monitoring, especially to observing how they walk, it worked great -- after processing footage of more than 300,000 commercial free-range chickens, the flocks the machine flagged as troublemakers matched the flocks that had the most feather damage.
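For the curious, here is a rough Python/OpenCV sketch of the optical-flow idea itself. It is my own illustration, not their system; the file name and parameter values are placeholders:

```python
# Estimate per-pixel motion between consecutive frames with Farneback optical flow
# and summarize the overall motion magnitude, frame by frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("flock.mp4")                 # hypothetical input video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)        # per-pixel speed of apparent motion
    print("mean motion this frame:", magnitude.mean())
    prev_gray = gray
cap.release()
```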

Here's an example video of the optical flow technique.

In order for machine learning algorithms to work on such a problem, the designer must first select features to track. A feature could be the average value of a group of pixels, or a moving pattern. Depending on the machine learning algorithm used, the number of features can span a large range (because some algorithms are smart enough to pick the "better" features to use). Chicken experts then label each data record (one clip of a chicken walking) in the training set (say a few hundred records), identifying whether it shows a stressed-out chicken or not. The machine learning algorithm is then run against the training set, and it may evaluate its own performance and try to improve against a held-out set. The learning result is like a dividing surface in a multi-dimensional space (for example, a line divides a plane and a plane divides a 3D space): points landing on one side of the dividing surface indicate troublesome chickens while points on the other side indicate well-behaved chickens. Stephen used Hidden Markov Chains (HMC) in his machine learning algorithm, according to the IEEE article.
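To make the "dividing surface" idea concrete, here is a toy Python sketch that trains a plain SVM on made-up flow features. It is emphatically not the researchers' hidden-Markov approach, just an illustration of learning a decision boundary from labeled examples:

```python
# Each row of X is a feature vector summarizing one clip's optical flow; y is the expert label.
# The data here are synthetic stand-ins, not real chicken footage.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                      # e.g. mean/variance of flow magnitude and direction
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # stand-in labels from the "chicken experts"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)      # learns a dividing surface in feature space
print("held-out accuracy:", clf.score(X_test, y_test))
```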

The biggest benefit of this approach is that a computer can take on the laborious constant monitoring work in place of humans and only warn farmers when suspect chickens are identified. Even if there is a fair number of false positives, the human workload can still be dramatically reduced.

I am very glad to see another real application of AI techniques. Just one more thought: would the chickens behave better if we played ambient music to them all the time?

Video of the Day:

The biggest water balloon fight recorded in human history (4000+ people) right here at BYU in July 2010 during the shooting of Kyle Andrews' official music video "You Always Make Me Smile."

Friday, March 20, 2009

iPad -- a new tool to help treat autism?

Leo has plenty of toys, including this circular balance
beam, but nothing tops the iPad. (Photo by Kelly Nicolaisen)
The iPad is probably the most popular high-tech toy on the market right now. But can it be more than just a toy and be useful in other ways, such as helping treat children with autism? The answer is a definite yes.

You might wonder why I am all of a sudden interested in the subject of treating children with autism. Actually, I have been a member of the TiLAR research group working on using assistive robotics technology to help treat children with autism. TiLAR stands for Therapist-in-the-Loop Assistive Robotics.

Recently, I came across an article named iHelp for Autism from SF Weekly. The article told multiple stories of how an iPad helped improve the behaviors of Leo, a 9-year-old autistic child, the son of Shannon Rosa. Shannon had won the iPad in a school raffle, and in the following months, she was pleasantly surprised again and again by how the iPad changed Leo's life and her own, for the better, of course.

It is important to note that the word autism could mean a spectrum of psychological conditions, that's why the more formal name Autism Spectrum Disorders (ASD) is frequently used. Despite the wide spectrum, autistic children display common symptoms of deficiency in social interactions and communication and severely restricted interests and highly repetitive behavior.

So what are the benefits of using an iPad? Here I'll list some just off the top of my head:
  • It looks slick and cool (remember it won't be like that forever).
  • It's really a platform, so you can run all kinds of things on it like songs, movies, games, maps, etc.
  • You can select things, move things, draw things with your fingers -- a multitouch interface -- and you have plenty of space to do it (unlike an iPhone). The interface is also relatively simple and intuitive, so it doesn't take a lot of time to learn or explore.
  • It's relatively lightweight and is battery powered, so you can carry it with you everywhere you go.
Of course an iPad also has disadvantages:
  • It's expensive.
  • It's fragile (especially the screen).
  • Battery life is not great.
  • Apps have to be approved by Apple (but there are always iPad-like devices running Android).
Now when we give an iPad to an autistic child, what could be good?
  • Because an iPad is a fashionable item (at least for now), it would encourage the child to participate in more social activities while holding an iPad -- confirmed by a study.
  • The intuitive and simple finger-controlled interface is attractive to autistic children because they can easily identify things and things are predictable. The finger-touch interface is also great for encouraging autistic children to practice manipulating things with their fingers, improving motor skills that are normally problematic for autistic children.
  • Because an iPad can run many games and apps, autistic children are more prone to play educational games on an iPad, so there's more opportunity to learn while feeling good about it.
  • The portability of the iPad allows the child to use it anywhere the child wants, and its movie-playing capability can let the child watch instructional demonstrations -- termed Video Modeling -- frequently and in various locations.
  • An iPad can also act as a communication tool. For example, an autistic girl used an iPad to tell her mother where she'd like to go shopping.
So what could go wrong when we give iPad to an autistic child?
  • One bad tantrum, there goes the screen, and it's expensive!
  • If the child is over-reliant on the iPad as a communication tool, once the battery is dead, the child might go berserk.
As I mentioned earlier, autistic children tend to behave differently from social norms and have problems communicating with others. It seems that we ought to look at three different dimensions when we think about treatment. The first dimension is deviation from social norms. The child can behave more like social norms after treatment or deviate further away. The second dimension is the ability to communicate. A treatment might help the child to communicate better or make it worse. The third dimension is the ease for caregiver. A treatment can change the child's behavior so it's easier for the caregiver to take care of the child. It might also make the care-giving more demanding/challenging.

Ideally, we'd like the treatment to move the child positively on all three dimensions, but that might not always be true depending on the kind of treatment we provide and the kind of tool we use. Some people might also want to settle at different spots in this three-dimensional space. They'd accept a solution that improves the child's ability to communicate and makes the caregiver's life easier while the child's behavior might deviate further away from social norms. Therefore, one important question to ask is: Where in the 3D chart do you want the autistic child to be? For example, if an autistic child always uses the iPad to tell you what he wants, is that what you want? If the answer is no, then the iPad might actually have done harm instead of good.

Are there other creative ways of using an iPad to help support treating children with autism? The article mentioned that some researchers are actually using the iPad to collect physiological data from the autistic child on the go and maybe play a soothing sound or music if they get tense. I'll throw out some ideas of my own just for brainstorming purposes. If you can think of any, feel free to tell me in the comments section.
  • Since we use a robot to assist the therapist in clinic sessions, the iPad can be used by the therapist before the session to program robot behaviors.
  • We can put a virtual character in the iPad to encourage the child to imitate the character's moves. The virtual character can also encourage the child for certain behaviors such as turn taking with the therapist and the robot. The therapist can act as the sensor and use wireless devices to "inform" the virtual character if the child has performed the desired behavior.
  • Maybe let the child use the iPad to choose what games to play with the robot and/or the therapist?
  • The child can also use the iPad to tell the robot what to do (different sequences of moves). Later the child will be required to not only choose buttons on the iPad, but also speak out the request, for the robot to actually perform the moves. Either the robot will try to recognize the speech, or the therapist can be the sensor/processor and issue the approval command instead.
  • The robot might also touch the iPad to do things. We just have to make sure the robot doesn't damage the delicate screen.


Picture of the Day:

TiLAR research group's robot Troy in the middle of a clinical
session with a therapist and a kid.

Thursday, March 19, 2009

Chess Playing Robots at the AAAI-10

Many people are probably aware of the world-famous chess match between Garry Kasparov, a world champion, and Deep Blue, a supercomputer built by IBM, that took place in 1997. Deep Blue won the match, but only with the help of humans, because it couldn't really move the chess pieces without an arm. That is no longer the case, especially at the Small Scale Manipulation Competition held at the 24th AAAI (Association for the Advancement of Artificial Intelligence) conference in Atlanta in July 2010, where four robots from different universities paired up against each other and moved all the chess pieces themselves. "Small Scale" here means robots that are smaller than a human, and the goal was not to beat the opponent in a game of chess, but to manipulate chess pieces adeptly and accurately. Extra points could be earned by showing the ability to recognize the pieces on the fly. The competition was one of the many great treats at the conference. As a conference attendee, I was fortunate enough to observe the real duels with my own eyes.

Gambit -- University of Washington Intel Lab
"Gambit" is a robotic arm built by the University of Washington Intel Lab using funding from Intel, and interestingly, one of the two main builders of the robot is actually an old acquaintance I had met at the HRI conference earlier this year in Japan. Her name is Cynthia, and with this connection, I was able to dig out quite some information about how the robot works. Gambit is equipped with both a depth camera and a regular video camera. It uses SIFT features for recognizing pieces on the board and also uses dead reckoning to remember the positions of them. The robot is even smart enough to line up the opponent's pieces it had captured neatly by the side of the board. The gripper has tactile sensors built-in, but according to Cynthia, they aren't very useful, and she had to spend a good amount of time picking the right kind of material for the gripper so it can grab onto a chess piece firmly. One special trick the robot has is the ability to call for help whenever it gets stuck or couldn't reach certain positions. The cost of the robotic arm is relatively cheap (a few thousand dollars) and the university is actually promoting it as a research platform to other researchers.


Chiara -- Carnegie Mellon University Tekkotsu Lab
The strange scorpion-lookalike robot on the right is "Chiara", a robot built by the Carnegie Mellon University Tekkotsu Lab. It is also an open-source hardware/software platform promoted by the lab, priced at around several thousand dollars. Before making a move, Chiara would first walk to align itself at the right location, then raise itself high and use its gripper-stinger to pick up the right chess piece. The mobility of the robot seemed really cool, and I thought it was needed because otherwise the robot wouldn't be able to reach pieces on the other half of the board. It turned out the mobility was only there because it was part of the platform. The robot is actually only able to play half of the board. But because the competition only focused on the first ten moves of each robot, they were able to get away with the limitation. This robot is a vision-only robot, meaning it doesn't use range sensors such as infrared, sonar, or laser. Due to the special pattern of the chess board, recognizing the board is not a very difficult task, and the robot performed relatively well during the competition.


Georgia Tech's robot is a massive, expensive-looking arm. I would have guessed the price range of the robot to be somewhere around $100K. A Swissranger depth camera was held above the chess board (on a tripod by the side), separate from the robot, in order to read the board and chess piece positions. Ironically, on the first move of the game, the arm misbehaved and made a big swing to the side, almost knocking over the camera-holding tripod. That totally messed up the camera calibration, and the Georgia Tech team got heavily penalized by the judges because they had to reposition the camera and recalibrate everything in order for the robot to work correctly.



The robot built by the University of Alabama is definitely the champion with respect to cost. The designer of the robot proudly told me that the entire robot cost less than $700. It uses an iRobot Create robot platform as the mobile base. A hobby robotics arm kit is used for the arm (our lab has a very similar robotic arm that cost around $400). An Android phone with a built-in video camera is held above the chess board in order to recognize the board and pieces. A netbook running Ubuntu is then used to control and process data from each of those three parts separately through a wireless network. This robot is also a mobile one. It moves around the table in order to align itself to the necessary positions. In the first game, though, the members of the team got quite frustrated because it would take forever for the netbook to download data, which had not been the case during testing the previous day. It turned out they were using the shared conference wi-fi network, which became quite congested at the time of the competition and slowed everything to a crawl. On the second day of the competition, they used their own wireless network, and the situation improved dramatically.

Each robot played against all the other robots, and the total points were tallied to identify a winner. Eventually, Gambit from the University of Washington won the championship with flying colors (or is it really flying arms?). The video below was made by the winning team in celebration of their triumph. VERY INTERESTINGLY, part of me and my voice were captured in this video as well, proving that I was actually there!! So here's your challenge of the day: see if you can find me in the video! The video also shows a match between a kid (rumored to be a world-ranked player) and Gambit toward the end, but I don't know who actually won.

Gambit's journey to Championship

In each and every AAAI conference (at least for the last few years), there's always a robot competition and the competition is always great fun! I am so looking forward to next year's competition. Now I just have to write a good paper before the submission deadline and hope it gets accepted....

Picture of the Day:

Beautiful night landscape of Atlanta (taken from the 56th floor of the Peachtree Plaza Hotel in downtown)

Friday, March 06, 2009

Robots Used to Help Fight the BP Gulf of Mexico Oil Spill

The Deepwater Horizon drilling rig burns as oil
pours into the Gulf of Mexico.[The Canadian Press]
If you follow the news at all, then you have probably heard about the oil spill in the Gulf of Mexico. The BP-owned well has been leaking for nearly two months, causing great disaster to the environment and ocean life. In the process of fighting the oil leak, many new robotics technologies were put to the test and proved to be very handy tools for gulf researchers.

iRobot Seaglider Robot
One robot used is the iRobot Seaglider, a deep-diving UUV (Unmanned Underwater Vehicle) used to detect and map leaked oil in the gulf. Seaglider was originally developed by the University of Washington and later acquired by iRobot in June 2008. The robot is powered by changes in buoyancy and does not need a traditional propeller, which allows it to go on missions that last many months. Sensors on the robot can detect levels of dissolved oxygen in the water, temperature, salinity, other ocean properties, and the presence of oil all the way down to 1,000 meters. The robot has been used to locate and monitor clouds of dispersed oil droplets. Data can be uploaded to a satellite and then distributed via the web to any web-capable device around the world.

Scripps Glider: Spray
Another robot used is the Spray robot, which is also a submarine-type underwater glider, developed under ONR (Office of Naval Research) support by Scripps Institution of Oceanography and Woods Hole Oceanographic Institution scientists. The robot is "fitted with a sensor similar to the one used to measure chlorophyll, which is essential for the growth of plants." Scientists hope that the sensor can also pick up the spreading oil leak. The glider can dive up to 1,500 feet deep, and then periodically surface to relay data back to the scientists. It is also possible to use an iPhone to command the glider, such as telling it to go up or down, turn, or turn its sensors on/off.

BP engineers also used ROV (Remotely Operated Vehicle) type robots to wield clamps and haul machinery in slow motion below the surface of the gulf to cap the renegade well. Video feeds showed the robots using a circular-saw-like device to cut small pipes around the leaking riser. I cannot find more information about what kinds of ROV robots were used, but from the video below (shot from the perspective of the ROV) you can see that the ROV had arms and grippers tele-operated by human operators to perform manipulation tasks.


Isn't it amazing that we see robots everywhere these days? They are in the air, under the sea, and even in people's homes (not to mention the ones on other planets). I sure have picked the right career! :)

Picture of the Day:

Giant Isopod that hitched a ride on an ROV in the Gulf of Mexico. The subject of Isopocalypse 2010.

Monday, March 02, 2009

AI and Robots: Who Gave the Robot a Knife?

A few words first: To make it easier to find posts that interest you, I've added a search box to my blog (you probably have noticed it right above my post) that searches through all my blog posts but not anything outside of my blog. On the right side of each post, you can also click on different blog labels to read my posts by category. At the end of each blog post, I've also included icons you can click to share the post with your friends using your favorite social network tools. Spread the good word if there's something you really enjoy! Okay, the real post starts below.
================================================================

 
Robot arm stabbing a human volunteer with a knife 
(Photo credit: IEEE Spectrum)
At the ICRA 2010 conference (IEEE International Conference on Robotics and Automation), currently ongoing in Anchorage, Alaska, some German researchers presented their latest research on the biomechanics of soft-tissue injury caused by a knife-wielding robot. The paper is titled "Soft-tissue Injury in Robotics." In other words, they wanted to find out what will happen if a robot holding a sharp knife erroneously stabs a person. And, no, I am not joking. The robot arm in the picture on the right is really holding a knife, and it really stabbed the guy's arm with it.

These researchers are from the Institute of Robotics and Mechatronics, part of DLR, the German aerospace agency, in Wessling, Germany, and they share the same dream as me --- that one day robots will be smart enough to take over kitchen duties and free us from the laborious duty of cooking. This task of course requires the robot to be able to handle a knife appropriately, so it can cut, chop, slice, or dice during the course of preparing a meal. But what if it accidentally struck a human? With that question in mind, these researchers performed a series of experiments to investigate the severity of possible injuries and also designed a collision-detection system to minimize the damage.

Various knives used in the experiments  
(Photo credit: IEEE Spectrum)
They mounted various sharp things, from knives to scissors to screwdrivers (why does this somehow remind me of GTA San Andreas? Shudder!), on a DLR Lightweight Robot III, or LWRIII, a 7-degree-of-freedom robot arm, and then tested the strikes on a block of silicone, a pig's leg, and eventually, on the bare arm of a human volunteer. The collision-detection system turned out to be very successful, because the volunteer still has his arm.
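Just to illustrate the flavor of collision detection (this is NOT the DLR team's actual method, whose details I don't know), here is a toy Python sketch: watch the estimated external force each control cycle and flag a collision as soon as it grows too large or jumps too abruptly. All the numbers are invented:

```python
FORCE_THRESHOLD_N = 15.0   # maximum tolerated external force (newtons); purely illustrative
JUMP_THRESHOLD_N = 5.0     # maximum tolerated jump between consecutive readings; also invented

def collision_detected(previous_reading, current_reading):
    """Flag a collision if the estimated external force is too large or rises too abruptly."""
    return (current_reading > FORCE_THRESHOLD_N
            or current_reading - previous_reading > JUMP_THRESHOLD_N)

# In a real control loop you would read the arm's force/torque estimate every cycle and
# command an immediate stop (and perhaps a small retraction) as soon as this returns True.
readings = [0.4, 0.6, 0.5, 9.8, 22.0]   # made-up sensor trace: the spike is the "stab"
for prev, cur in zip(readings, readings[1:]):
    if collision_detected(prev, cur):
        print("collision detected, stopping the arm")
        break
```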

The video below shows how the experiments were performed and how the robot arm behaved differently with and without the collision-detection system (the real excitement is at the end of the video). As a researcher in Human-Robot Interaction myself, I couldn't help but imagine this poster in my head that reads, "Volunteer needed for a user study: Get Paid to be Stabbed by a Robot!"


But I am a little bit confused. Once the collision-detection system is turned on, the robot will stop cutting/stabbing the human. The human is safe now, and so is that piece of steak! Three hours later, I'd be shouting in a starved voice, "Where's my steak dinner?"

Note that the idea of a robot holding a knife would never be allowed in US universities. It would never get approval from the IRB (Institutional Review Board). See, we do things very differently here in the US, instead of knives, we give robots machine guns and missiles!! And there will be no danger to US citizens, because we send these robots to other countries! LOL!

MQ-9 Reaper Predator UAV
SWORD Robot
I think I am a bit off topic now, so let's get back to these German researchers. If I remember correctly, I actually saw a video from HRI 2007 made by the same guy demonstrating how he would let a powerful robot arm punch him in the head (sorry, I am having a hard time finding this video now). The robot arm would VERY QUICKLY slow down when it detected the collision, thus sparing the guy's life. Well, hats off to the guy!! Compared to him, I am a coward, because I would never put myself under such conditions --- because I am a terrible programmer, and I have lots of bugs in my code. And my admiration for him went sky-high when I realized they also performed the following experiments. Ouch!!


Anyway, I think it will be a long time before we actually have knife-wielding robots that roam our homes. When I program my robots, I actually intentionally make it not touch things such as knives, gas stoves, and explosives. But I bet you this day will eventually come, and a lot of lawyers are going to get rich.





Drinking excessive amounts of Mountain Dew and staying up till 4am can lead to severe stomach cramping, internal bleeding, and many days of lost productivity.

Monday, February 23, 2009

AI and Robots: VEX Robotics Competition World Championship

Two days from today, between April 22 and 24, the 2009-2010 VEX Robotics Competition World Championship will be held at the Dallas Convention Center, where over 3000 contestants from 14 countries around the world will meet and fight their guts out (correction: fight their robots' guts out).

It is interesting that I only heard about this competition a few days ago from my wife because she is actually working on arranging hotel and travel for the Chinese team. Therefore, I looked it up and hence, today's blog post. :)

The main sponsor of the competition is a company called VEX Robotics, which makes and sells the VEX Robotics Design System, a line of robotics kits for hobbyists and young students. At the beginning of each season, the organizer announces a new challenge, and students around the world can then form teams to compete in this worldwide competition using robots built from, of course, the VEX robotics kit. Contestants are mostly middle school and high school students; however, even elementary school students can compete. These teams compete against each other at the local and regional level until finalists are determined, who then compete in the world championship. The competition is presently in its third season. The challenge for the 2008-2009 season was called the Elevation Challenge, and the new one for the 2009-2010 season is called the Clean Sweep Challenge.

The video below is from last year's world championship, also held at the Dallas Convention Center.


This year's challenge is the Clean Sweep Challenge, where two teams, each using two robots, are divided into two courts, and the goal is to rack up as many points as possible in a fixed time by pushing, shoveling, throwing, and dumping balls out of the team's own court and into the opponent's court. In the first 20 seconds of the game, the robots play autonomously by running programs written by the contestants. For the remaining duration of the game, each robot is teleoperated by contestants using a remote control. Each team is free to design its robots any way it likes, and the only constraint is that the size of each robot cannot exceed a certain limit (read the detailed description of the rules). The video below is the game animation describing the game in detail.


Since each team has to fight all the way from local to international, there are plenty of videos of games played in different cities and regions. The video below shows a game played by team number 8888 from La Salle High School in the semifinals (probably at the country level). You can probably see that during the first 20 seconds, the robots looked very dumb and didn't really do much. This is probably due to the difficulty for pre-college-level students of mastering and implementing advanced AI algorithms and techniques. However, the students still have to put in a lot of effort designing and implementing these robots from a mechanical engineering perspective. Still, it would be so nice to see people designing fully autonomous robots (or robots with supervisory control) to compete in such interesting games.


I wish all the contestants the best of luck in the upcoming world competition. I am sure they will all have a ton of fun, and hopefully many of them will grow up to be sincere roboticists.

Video of the Day:

The street magician!

Wednesday, February 11, 2009

AI and Robots: Hybrid Video Display of Visible and Infrared Might Help Search and Rescue

A recent article in New Scientist discussed a research project performed at Brigham Young University where the effect of combining visible light and infrared on real-time video footage in Wilderness Search and Rescue was evaluated.

The research project was directed by Dr. Bryan Morse (who is also one of my committee members) and implemented by Nathan Rasmussen (a friend of mine, who successfully received his MS from this project and graduated in 2009). It is one of the many projects in the WiSAR research group at BYU that works on how to use mini-UAVs (Unmanned Aerial Vehicles) to support Wilderness Search and Rescue. The picture on the right shows Nathan throw-launching a UAV in a field trial at Elberta, Utah.

This research focuses on the human-robot interaction aspect and tries to determine which method of display works better for human operators: displaying visible-light video side by side with infrared video, or combining both into a hybrid display.

The UAV used in the experiments can already carry both a visual-spectrum camera and an infrared camera (BTW: very expensive). Visible-light video footage can be useful in spotting objects of unnatural/irregular shapes and colors (top view). Infrared video footage, on the other hand, can be helpful in detecting objects with heat signatures distinct from the surrounding environment (especially in early mornings, evenings, and nights, or in cold weather where heat signatures are more distinct).

In order to align footage from both sensors, a calibration grid was created with black wires on a white background. To allow the infrared camera to "see" the grid, an electric current was sent down the wires to heat them up. An algorithm is then used to align the vertices of the two grids to compensate for the slightly different viewing angles.
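For illustration, here is a minimal Python/OpenCV sketch of how such an alignment could work once matching grid vertices are known, using a homography. The point coordinates and file names are placeholders of mine, not the actual BYU setup:

```python
# Warp the infrared frame onto the visible frame using corresponding calibration-grid points,
# then blend the two into a simple hybrid view.
import cv2
import numpy as np

visible_pts = np.float32([[102, 80], [410, 78], [408, 395], [105, 398]])   # grid corners in the visible image
infrared_pts = np.float32([[95, 70], [402, 74], [399, 388], [98, 392]])    # the same corners seen in IR

H, _ = cv2.findHomography(infrared_pts, visible_pts)           # maps IR pixels onto visible pixels
ir_frame = cv2.imread("ir_frame.png")                          # hypothetical file names
vis_frame = cv2.imread("vis_frame.png")
ir_aligned = cv2.warpPerspective(ir_frame, H, (vis_frame.shape[1], vis_frame.shape[0]))
hybrid = cv2.addWeighted(vis_frame, 0.6, ir_aligned, 0.4, 0)   # simple blended hybrid display
cv2.imwrite("hybrid.png", hybrid)
```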
Once the hybrid view was possible, a user study was performed where students were used as test subjects to watch UAV videos with both display methods and tried to identify suspicious objects while listening to audio signals (counting beeping sounds as a secondary task in order to measure mental workload). I happened to be one of the test subjects, and my hard work earned me some delicious chocolates.

Experiment results show that people who viewed the hybrid display performed much better in the secondary task of counting beeps. This suggests that the hybrid video is easier to interpret (requiring less mental work) and would allow the searcher to focus more on identifying objects from the fast moving video stream.

The research was presented at the Applications of Computer Vision conference in Snowbird, Utah, in December 2009. If you are interested in more details about this research, you can read Nathan's thesis (warning: 22.4MB).

Picture of the Day:


Beautiful dusk sunshine mountain view from my house!