



Saturday, April 18, 2009

AI Robot Related Conferences and Journals For My Research (Part 6)

AI Robot Related Conferences and Journals For My Research Part 5 

I have discussed several top conferences related to my research. Now let's move on to top symposiums. These symposiums are like workshops where new ideas are presented and discussed, both to get a reality check from fellow researchers and to brainstorm. Unlike most workshops, however, they normally last several days and give the participants plenty of time to collaborate and discuss.

Top Symposiums
==================================================================

RO-MAN -- IEEE International Symposium on Robot and Human Interactive Communication

The RO-MAN workshop/symposium addresses fundamental issues in the co-existence of humans and intelligent machines such as robots, along with recent technological and psychological research on every aspect of interactive communication and collaboration between robots and humans. Originally founded by several Japanese researchers in 1992, the symposium has grown to attract attention from researchers around the world; the last RO-MAN, for example, included papers from 17 different countries. Solicited subjects cover a wide range including (but not limited to) socially interactive robots, entertainment robots, human-assisting robots, human-training robots, education robots, and robotic arts.

The RO-MAN symposium/workshop is a two-track event held annually and is therefore relatively small, drawing roughly 70 to 280 participants. Accepted papers are mostly six pages long. I have never attended the RO-MAN symposium, and I couldn't find any information on its acceptance rate, but I would guess it is much higher than those of the top conferences I blogged about before, since symposiums are generally less selective.

Since the last RO-MAN symposium happened just last month, the location of the next one, RO-MAN 2012 (the 21st), is still unknown at this point.
Conference Dates: July 31-August 3, 2012 (roughly)
Submission Deadline: March 1, 2012 (roughly)



AAAI Spring/Fall Symposium Series

The AAAI Spring/Fall Symposia are great places to meet peer researchers in a more intimate setting and a relaxed atmosphere to share ideas and learn from each other's artificial intelligence research. The topics change each year depending on the symposium proposals received. Multiple symposia on various topics are held simultaneously, and participants are expected to attend a single symposium throughout the symposium series. Besides the participants selected by the program committee (authors of accepted papers), only a limited number of people are allowed to register for each symposium, on a first-come, first-served basis, because seats are limited (the symposium series is actually quite popular).

The Fall Symposium series is usually held on the east coast in Arlington, Virginia, during late October or early November.

Each symposium will have a distinct research interest. For example, the AAAI 2011 Fall Symposia have the following seven topics:
  •     Advances in Cognitive Systems
  •     Building Representations of Common Ground with Intelligent Agents
  •     Complex Adaptive Systems: Energy, Information and Intelligence
  •     Multiagent Coordination under Uncertainty
  •     Open Government Knowledge: AI Opportunities and Challenges
  •     Question Generation
  •     Robot-Human Teamwork in Dynamic Adverse Environment
The last one, about Robot-Human Teamwork, is the one I am interested in. Sometimes you can find the accepted papers on the symposium-specific web sites, but they really want you to buy the technical report for $35.
 
The next AAAI Fall Symposia, the AAAI 2011 Fall Symposia, will be held in Arlington, Virginia, USA.
Symposia Dates: November 4-6, 2011
Submission Deadline: May 20, 2011


The next AAAI Fall Symposia you can submit a paper to is the AAAI 2012 Fall Symposia.
Symposia Dates: November 4-6, 2012 (Roughly)
Submission Deadline: May 20, 2012 (Roughly)



The Spring Symposium series is typically held during spring break (generally in March) on the west coast at Stanford. This one is actually my preferred one, because Stanford University is not that far from Utah and I also lived in the neighborhood for three months.

The next AAAI Spring Symposia include the following six topics:
  •     AI, The Fundamental Social Aggregation Challenge, and the Autonomy of Hybrid Agent Groups
  •     Designing Intelligent Robots: Reintegrating AI
  •     Game Theory for Security, Sustainability and Health
  •     Intelligent Web Services Meet Social Computing
  •     Self-Tracking and Collective Intelligence for Personal Wellness
  •     Wisdom of the Crowd
I have never attended either the Spring or Fall Symposia, but I was a co-author of a paper accepted at the AAAI 2009 Spring Symposium under the topic Agents that Learn from Human Teachers. It would be great if I could publish here again in the near future. It's always fun to visit Silicon Valley!

The next AAAI Spring Symposia, the AAAI 2012 Spring Symposia, will be held at Stanford University, Palo Alto, California, USA.
Symposium Dates: March 26-28, 2012
Submission Deadline: October 7, 2011



A good friend of mine, Janet, passed away this morning from acute leukemia. Wish her peace in heaven! Lesson learned: complete all those projects you want to do before a doctor tells you that you only have 5 days to live. Let's see, I need to finish my PhD, finish translating SPW, finish building a robot, and catch up on all the blog posts. Man! I better get working!




Wednesday, April 15, 2009

AI Robot Related Conferences and Journals For My Research (Part 5)

AI Robot Related Conferences and Journals For My Research Part 4

Top Conferences
==================================================================

BRiMS -- Behavior Representation in Modeling and Simulation

BRiMS is a conference where modeling and simulation research scientists, engineers, and technical communities across disciplines meet, share ideas, identify capability gaps, discuss cutting-edge research directions, highlight promising technologies, and showcase the state-of-the-art in applications. It focuses on Human Behavior Representation (HBR)-based technologies and practices and bridges the gap across the disciplines of cognitive and computational modeling, sociocultural modeling, graphical and statistical modeling, network science, computer science, artificial intelligence, and engineering.

BRiMS is mainly funded by military research agencies such as the Air Force Research Laboratory (AFRL), the Army Research Laboratory (ARL), the Defense Advanced Research Projects Agency (DARPA), and the Office of Naval Research (ONR). Every year there is a heavy presence of researchers from these military research labs, so if you plan to work for one, this is a great venue to network and meet potential employers.

BRiMS is a single-track, and hence relatively small, conference. It does have workshops and tutorials the day before the conference. Interestingly, every other year the conference is held at Sundance, Utah, which is only 20 minutes from where I live (saving my advisor a bunch of money on airfare and hotels). In the alternating years, a location in the eastern US is selected as the hosting venue. I have been fortunate to publish at this conference in the past.

The next BRiMS conference, BRiMS 2012 (the 21st), will be held at Amelia Island Plantation, Amelia Island, Florida, USA.
Conference Dates: March 12-15, 2012
Submission Deadline: December 10, 2011 (roughly)



DIS -- The ACM Conference on Designing Interactive Systems

DIS is a multi-track conference held biennially. It is the premier international arena where designers, artists, psychologists, user experience researchers, systems engineers, and many more come together to debate and shape the future of interactive systems design and practice. In particular, it addresses design as an integrated activity spanning technical, social, cognitive, organisational, and cultural factors. As described by Interaction-Design.org, "DIS conferences are usually attended by people like; user experience designers seeking to go beyond usability; interaction designers developing interactive objects and installations; usability people who want to improve experience for ‘users’; web designers who want to create better Web sites; information architects; user interface designers working across the board, including desktop systems, mobile devices, and interactive products; cognitive and social scientists; human factors folks; games designers involved with characters, narrative and game play; visual designers concerned with information design and the aesthetics of their systems; ethnographers and customer service and many more."

DIS is a prestigious conference, which makes the competition between submissions high; the acceptance rate for DIS 2008, for example, was 34%. Because interaction design touches so many areas, DIS is naturally an interdisciplinary conference, encompassing all issues related to the design and deployment of interactive systems.

The theme of the upcoming DIS 2012 focuses on what happens when interactive systems are used "in the wild". This seems to be a perfect fit for research topics such as using a UAV (Unmanned Aerial Vehicle) in wilderness search and rescue.

The next DIS conference, DIS 2012, will be held in Newcastle, UK.
Conference Dates: June 11-15, 2012
Submission Deadline: January 20, 2012


AI Robot Related Conferences and Journals For My Research Part 6





Why do I always forget to eat lunch? This is not the right way to lose weight!



Sunday, April 12, 2009

The challenges of evaluating the search efficiency of a UAV's coverage path (1)

Imagine that you have won a shopping spree sweepstakes at the local Walmart. Assume that you know the layout of the store pretty well and have a common-sense idea of how much general merchandise is worth. For the next 2 minutes, anything you grab is yours to keep for free. What would you do? Would you just start grabbing everything close to you, such as orange juice, eggs, sausages, and breakfast cereals, or would you dash straight to that 60-inch LCD TV (or that 5-carat diamond ring, if you are a woman) at the furthest corner of the store? What is the best path to take to maximize the total monetary value of the shopping spree?

Now, just to make it a little more complicated: what if you only have 1 minute to grab things? What if you are asked to start from the cashier's lane and must return to it before your time runs out? What if your shopping cart was tampered with and doesn't roll backward? What if getting that 5-carat diamond ring requires a Walmart employee to unlock three things before you can get to it? What if you forgot to bring your glasses and everything looks totally blurry? It looks like the wonderful dream of winning the sweepstakes has just turned into a nightmare! "Why are you making it so hard for me?" you moan. And I shrug and tell you that these are all the challenges I face when I plan a coverage path for a UAV in support of Wilderness Search and Rescue operations.

The benefit of adding a UAV to a Wilderness Search and Rescue team is that you now have an eye in the sky: you can cover large areas quickly and reach areas that are difficult for humans on foot to get to. When planning a coverage path for a UAV with a gimbaled camera, what we really care about is the path of the camera footprint. In our path-planning approach, we use a 4-connected grid to represent the probability distribution of where the missing person is likely to be found. Even though a fixed-wing UAV might need to roll and follow a curvy path when it turns, the gimbaled camera can automatically adjust itself to always point straight down, so the path of the camera footprint can include sharp 90-degree turns. As the camera footprint covers an area, it "vacuums up" the probability within that area, and obviously the more probability we can vacuum up along the path, the more likely the UAV is to spot the missing person.

When we evaluate how good a UAV path is, we focus on two factors: flight time and the amount of probability accumulated along the path. If a desired flight time is set (perhaps because the battery on the UAV only lasts one hour), then the more probability we can accumulate, the better the path. If a desired amount of probability is expected (e.g., an 80% chance of spotting the person from the UAV), then the sooner we can reach that goal, the better the path. My research focuses only on the first type of case, where we plan a path for a given flight duration.
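
To make that first type of evaluation concrete, here is a minimal Python sketch of how probability gets "vacuumed up" along a camera-footprint path on the grid. The function name and the 100% detection assumption are mine for illustration; this is not our actual implementation.

    import numpy as np

    def accumulated_probability(prob_grid, path):
        # prob_grid: 2D numpy array whose entries sum to 1
        # path: list of (row, col) cells visited by the camera footprint
        grid = prob_grid.copy()
        total = 0.0
        for r, c in path:
            total += grid[r, c]   # "vacuum up" this cell's probability
            grid[r, c] = 0.0      # a revisit adds nothing (assumes 100% detection)
        return total

Dividing this total by what the (unknown) optimal path of the same length would accumulate gives the efficiency percentage discussed next.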

So how good or efficient is the path generated by our algorithm? A natural way to evaluate this is to compare the amount of probability the UAV accumulates along our path against what it could accumulate along the best possible path, and compute a percentage: a path with 50% efficiency is half as good as the best we can do. The catch is that we don't know what the best possible (optimal) path is, and searching for it might take a very long time (years) when the probability distribution is complex (as in real search and rescue scenarios), which defeats the purpose of finding the missing person quickly. Many factors can also affect what the optimal path looks like and how much probability the UAV accumulates by following it. Here I'll list the ones we must deal with.

1. Desired flight time

If the search region is very small, the UAV has a 100% probability of detection (say we are searching the Bonneville Salt Flats), and you have plenty of flight time to completely comb the area many times, then life gets easier: you can be pretty sure you will spot the missing person if the UAV follows a lawnmower or Zamboni flight pattern (assuming the missing person stays in a fixed location). If the search region is very large and the flight time very short, then there may be areas you simply can never reach. Remember the 2-minute shopping spree vs. the 1-minute one?

2. Starting position and possibly the ending position

If the UAV starts from the middle of the search region, the optimal path will almost certainly not be the same as when the UAV starts from the edge. And if the UAV must return to a desired ending position (maybe for easy retrieval, or to return to the command base), time must be allocated for the return flight. Ideally, while flying back, the UAV should still try to cover areas with high probabilities. In the shopping spree example, if you are required to start from the cashier's lane and also return there before time runs out, you probably still want to grab things on your way back, maybe choosing a different route.

3. Type of UAV (fixed-wing vs. copter)

A fixed-wing UAV must keep moving through the air to generate enough lift to stay airborne, and it cannot fly backward. A copter-type UAV doesn't have these restrictions: it can hover over a spot or fly backward anytime it wants. Therefore, the type of UAV we use can really change what the optimal path looks like. Remember the tampered-with shopping cart from your shopping experience?


4. Task difficulty (probability of detection)

Although the UAV provides a bird's-eye view of the search area, sometimes we look but we don't see. Maybe dense vegetation makes spotting the missing person a very difficult task; maybe the weather is really bad, which lowers visibility; maybe the missing person is wearing a green jacket that blends in with the surroundings. This means the probability of detection can vary from case to case and from search area to search area. When the probability of detection is low, maybe we should send the UAV to cover the area multiple times so we can search better. This factor really adds complexity to evaluating the efficiency of a UAV path. When it takes 30 seconds for the Walmart employee to unlock everything and get that 5-carat diamond ring for you, is it worth the wait? Or does grabbing all those unlocked watches at $50 a piece in the neighboring section sound like a better idea now?

Given all these complicating factors, I still need to find out how well my path-planning algorithm performs in different search scenarios. In the following blog posts in this series, I'll go through each factor and discuss how we can reasonably evaluate the efficiency of a search path without knowing the optimal solution.


Video of the Day:

A video my friend Bev made when I showed her around the BYU campus!

Saturday, April 11, 2009

AI Robot Related Conferences and Journals For My Research (Part 4)

AI Robot Related Conferences and Journals For My Research Part 3

Top Conferences
==================================================================

RSS -- Robotics: Science and Systems Conference

RSS is a single-track conference held annually that brings together researchers working on the algorithmic and mathematical foundations of robotics, robotics applications, and the analysis of robotics systems. The very low average acceptance rate of 25% makes the conference a very selective one. Accepted papers cover a wide range of topics such as kinematics/dynamics and control, planning/algorithms, manipulation, human-robot interaction, robot perception, and estimation and learning for robotic systems. One great thing about this conference is that all proceedings are available online for free.

RSS is also a relatively new conference; the first RSS was held in 2005. However, the conference is growing quickly, attracting leading researchers in the robotics community, with an expected attendance of over 400 for the next RSS conference. The conference also includes several workshops and tutorials. I have not submitted anything to the RSS conference in the past. It would be really nice if I could get a paper published here.

The next RSS conference, RSS 2012, will be held in Sydney, Australia.
Conference Dates: June 27-July 1, 2012 (Roughly)
Submission Deadline: January 17, 2012 (Roughly)



SMC -- IEEE International Conference on Systems, Man, and Cybernetics

The SMC conference is a multi-track conference held annually. It provides an international forum for researchers and practitioners to report the latest innovations, summarize the state-of-the-art, and exchange ideas and advances in all aspects of systems engineering, human-machine systems, and emerging cybernetics. Wikipedia defines the word Cybernetics as "the interdisciplinary study of the structure of regulatory systems." Cybernetics is closely related to information theory, control theory, and systems theory.


The SMC conference is sponsored by the Systems, Man, and Cybernetics Society, whose mission is: "... to serve the interests of its members and the community at large by promoting the theory, practice, and interdisciplinary aspects of systems science and engineering, human-machine systems, and cybernetics. It is accomplished through conferences, publications, and other activities that contribute to the professional needs of its members."

My interest in the conference lies in the human-machine systems track, especially under the topics of adjustable autonomy, human centered design, and human-robot interaction. This would be a good place to publish research related to UAV (Unmanned Aerial Vehicle) and search and rescue robotics.

I have never submitted anything to this conference before, and I can't find any information on its acceptance rate. But one thing is for sure: this is not one of those "meet and greet" conferences; all submitted papers go through a serious peer-review process.

The next SMC conference, SMC 2011, will be held in Anchorage, Alaska, USA.
Conference Dates: October 9-12, 2011
Submission Deadline: April 1, 2011

The next SMC conference you can submit a paper to is SMC 2012, which will be held in Seoul, Korea.
Conference Dates: October 7-10, 2012
Submission Deadline: April 1, 2012 (Roughly)

AI Robot Related Conferences and Journals For My Research Part 5






Why is every day so short? Wouldn't it be nice if we didn't have to sleep?



Friday, April 10, 2009

How to find all the modes of a 3D probability distribution surface

A 3D probability distribution surface can represent the likelihood of certain events in a specific region, where a higher point on the surface means the event is more likely to happen there. For example, a 3D probability distribution surface created for a Wilderness Search and Rescue (WiSAR) operation, whether systematically, manually, or with a hybrid approach, can show the searchers the areas where the missing person is more likely to be found. The distribution map can be used to better allocate search resources and to generate flight paths for an Unmanned Aerial Vehicle (UAV).
An example 3D probability distribution surface
Because different path-planning algorithms may be better suited to different probability distributions (I appeal to the No-Free-Lunch theorem), identifying the type of distribution beforehand can help us decide which algorithm to use for the path-planning task. In our decision process, we particularly care about how many modes the probability distribution has. So how can we automatically identify all the modes of a 3D probability distribution surface? Here I'll describe the algorithm we used.

In our case, the 3D probability distribution surface is represented by a matrix/table where each value represents the height of a point. You can think of this distribution as a gray-scale image where the gray value of each pixel represents the height of that point. We use a Local Hill Climbing type algorithm with 8-connected neighbors.

1. Down sample the distribution
If the distribution map is very large, it might be a good idea to down sample it to improve algorithm speed. We assume the surface is noise-free; if it is noisy, we can also smooth it with a Gaussian filter first (think image processing).
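
As a rough sketch of this preprocessing step (assuming numpy/scipy; the scale and sigma values are illustrative, not the ones we used):

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def preprocess(surface, scale=0.5, sigma=1.0):
        smoothed = gaussian_filter(surface, sigma=sigma)  # suppress noise
        small = zoom(smoothed, scale)                     # down sample
        return small / small.sum()                        # keep it a distribution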

2. Check for a uniform distribution (a flat surface)
It is a good idea to check whether the probability distribution is uniform: just check whether all values in the matrix are identical. If a uniform distribution is identified, we know the distribution has 0 modes and we are done.

3. Local Hill Climbing with Memory
Start from a point on the surface and check its neighbors (8-connected). As soon as a neighbor with the same or a better value is found, we "climb" to that point. The process is repeated until we reach a point (hilltop) where all neighbors have smaller values. As we climb and check neighbors, we mark all the points we visit along the way, and we only ever check points we have not visited before; this way we avoid re-finding a mode we had found before. Once we find a "mode", we can start from another unvisited point on the surface and do another Local Hill Climbing. I use quotes around the word mode because we are not yet sure whether the "mode" we found is a real mode.
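
Here is a minimal sketch of this step in Python, written iteratively rather than recursively (for reasons hinted at the bottom of this post). `surface` is assumed to be a 2D numpy array; the names are mine, not our exact code:

    def hill_climb(surface, start, visited):
        # start: (row, col) of an unvisited cell; visited: 2D bool array,
        # updated in place. Returns the (row, col) of the "mode" reached.
        rows, cols = surface.shape
        r, c = start
        visited[r, c] = True
        climbing = True
        while climbing:
            climbing = False
            # look for an unvisited 8-connected neighbor at least as high
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if ((dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols
                            and not visited[nr, nc]
                            and surface[nr, nc] >= surface[r, c]):
                        r, c = nr, nc          # climb to that neighbor
                        visited[r, c] = True
                        climbing = True
                        break
                if climbing:
                    break
        return (r, c)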

4. Make sure the "mode" we found is a real mode
An Even-Height Great Wall
The "mode" we found using Local Hill Climbing might not actually be a real mode. It might be right next to a mode previously found and have a lower value (because we only checked unvisited neighbors in the previous step). It might also be part of another flat-surface mode where the mode consists of multiple points with identical values (think of a hilltop that looks like a plateau or think of a ridge). Things get even more complicated with special distributions such as this one on the right. And the "mode" point we found might be connected to a previously found mode through other points with the same value (e.g, the "mode" point is the end point of the short branch in the middle of the image.

Therefore, we need to keep track of all the identically-valued points leading to the final "mode" point and check all the visited neighbors of these points, making sure this flat surface is not part of a previously found mode. If these points make up a real new mode, we mark them with a unique mode id (e.g., mode 3). If they are only part of a previously found mode, we mark them accordingly (e.g., mode 2). If one of them is right next to a previously found mode but has a lower value, we mark these points as non-mode points. This step is almost like performing a Connected-Component Labeling operation in Computer Vision.
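
A hedged sketch of this check, written as a flood fill over the equal-height plateau, plus a small driver that ties the whole algorithm together (again only illustrative, reusing `hill_climb` from above):

    import numpy as np

    def classify_plateau(surface, peak, labels, next_id):
        # labels: 2D int array, 0 = unlabeled, -1 = non-mode, >0 = mode id
        rows, cols = surface.shape
        h = surface[peak]
        stack, plateau, seen = [peak], [], {peak}
        has_taller = False   # does the plateau border on higher ground?
        old_id = 0           # id of an adjacent equal-height, known mode
        while stack:
            r, c = stack.pop()
            plateau.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                        continue
                    if surface[nr, nc] > h:
                        has_taller = True
                    elif surface[nr, nc] == h:
                        if labels[nr, nc] > 0:
                            old_id = labels[nr, nc]      # touches a known mode
                        elif (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))       # extend the plateau
        if has_taller:
            verdict = -1          # lower shoulder of a hill: not a mode
        elif old_id:
            verdict = old_id      # part of a previously found mode
        else:
            verdict, next_id = next_id, next_id + 1      # a genuinely new mode
        for cell in plateau:
            labels[cell] = verdict
        return next_id

    def count_modes(surface):
        if np.all(surface == surface.flat[0]):
            return 0                                     # uniform: 0 modes
        visited = np.zeros(surface.shape, dtype=bool)
        labels = np.zeros(surface.shape, dtype=int)
        next_id = 1
        for r in range(surface.shape[0]):
            for c in range(surface.shape[1]):
                if not visited[r, c]:
                    peak = hill_climb(surface, (r, c), visited)
                    if labels[peak] == 0:
                        next_id = classify_plateau(surface, peak, labels, next_id)
        return next_id - 1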

At the end of the algorithm run, we have a count of how many modes the probability distribution has and also a map with all the mode points marked. For the Even-Height Great Wall distribution, the map would look just like the image (white pixels marking mode points), with 1 mode. And within milliseconds, the algorithm can identify the 4 modes in the example 3D surface above.

That's it! If you ever need to do this for your projects, you now know how!








Recursive functions work great for local hill climbing until you get a stack overflow.

Thursday, April 09, 2009

AI Robot Related Conferences and Journals For My Research (Part 3)

AI Robot Related Conferences and Journals For My Research Part 2

Top Conferences
==================================================================

IROS -- IEEE/RSJ International Conference on Intelligent Robots and Systems

IROS is a multi-track conference held annually. It is a premier international conference in robotics and intelligent systems, bringing together an international community of researchers, educators, and practitioners to discuss the latest advancements in the field.

Every year, thousands of people from all over the world attend the IROS conference, and a large number of papers are published there. For example, IROS 2011 received 2541 papers and proposals and accepted 790 papers, a 32% acceptance rate; in previous years the acceptance rate was normally much higher, around 55%. Every year IROS has a different theme: the theme for IROS 2011 is Human-Centered Robotics, and the theme for IROS 2012 is Robotics for Quality of Life and Sustainable Development. However, people generally ignore the theme and submit whatever they have. I was fortunate enough to attend the IROS conference in 2009 and publish a paper on UAV path planning there.

The next IROS conference, IROS 2011, will be held in San Francisco, California, USA.
Conference Dates: September 25-30, 2011
Submission Deadline: March 28, 2011

The next IROS conference you can submit a paper to is IROS 2012. It will be held in Vilamoura, Algarve, Portugal.
Conference Dates: October 7-10, 2012
Submission Deadline: March 10, 2012



ICRA -- IEEE International Conference on Robotics and Automation

ICRA is a premier multi-track conference in the robotics community, held annually. It is in the same league as IROS and is also a major international conference with large attendance; ICRA 2011, held in Shanghai, China, welcomed more than 1,500 people from around the world. The acceptance rate for ICRA is about 45%.

ICRA also has yearly themes. The ICRA 2011 theme was "Better Robots, Better Life"; the ICRA 2012 theme will be "Robots and Automation: Innovation for Tomorrow's Needs". Again, if you are thinking about submitting something to ICRA, don't worry about the themes. Just submit what you have on whatever topic, as long as it is related to robots or automation.

I have submitted a paper to ICRA before, but very unfortunately, the paper fell into the hands of several electrical-engineering reviewers because I picked the wrong keywords. They seem to hold grudges against computer science researchers. The same paper was accepted at IROS without any major revision. I'll likely submit to ICRA again in the future, but I will be super careful about which keywords to use this time!!

The next ICRA conference, ICRA 2012, will be held in St. Paul, Minnesota, USA.
Conference Dates: May 14-18, 2012
Submission Deadline: September 16, 2011

AI Robot Related Conferences and Journals For My Research Part 4


Video of the Day:

This is why the back of my jersey has Messi's name on it!

Tuesday, April 07, 2009

AI Robot Related Conferences and Journals For My Research (Part 2)

AI Robot Related Conferences and Journals For My Research Part 1

Top Conferences
==================================================================

HRI -- ACM/IEEE International Conference on Human-Robot Interaction

HRI is a single-track, highly selective annual international conference that seeks to showcase the very best interdisciplinary and multidisciplinary research in human-robot interaction, with roots in social psychology, cognitive science, HCI, human factors, artificial intelligence, robotics, organizational behavior, anthropology, and many more. HRI is a relatively new and small conference because the HRI field itself is relatively new. The 1st HRI conference was actually held here in Salt Lake City, Utah, in March 2006, and my advisor, Dr. Michael A. Goodrich, was the General Chair. It is very unfortunate that I only started grad school two months after the 1st HRI conference and missed this great opportunity. *Sigh* HRI has been growing rapidly and gaining attention from many research groups and institutions; the last (6th) HRI conference had attendance exceeding 300 participants. HRI is also a top-tier conference, with an acceptance rate between 19% and 25%. As the conference becomes more and more popular, researchers from many disciplines (e.g., human factors, cognitive science, psychology, linguistics) have begun participating.

The venue of the HRI conference rotates among North America, Europe, and Asia. I have been lucky enough to attend the conference twice, once in 2010 and once in 2011. In 2010, I attended the HRI Young Pioneer Workshop. The workshop is a great event: you not only get to make friends with a bunch of young researchers in the HRI field before the conference starts, you also get to see what other young researchers are working on. Besides, NSF is generous enough to cover a good portion of the airfare, which is a great help for poor grad students. I liked the workshop so much that I joined the organizing committee for the next year's HRI Young Pioneer Workshop and hosted the panel discussion there; that was also how I was able to attend HRI 2011. At both HRI 2010 and HRI 2011, I also guarded my advisor's late-breaking-report posters because he couldn't make it.

I have never submitted anything to the main HRI conference. Since this is the top conference in my research field, I'd like to publish something before I graduate.

The next HRI conference, HRI 2012 (the 7th), will be held in Boston, Massachusetts, USA.
Conference Dates: March 5-8, 2012
Submission Deadline: September 9, 2011



CHI -- ACM/SIGCHI Human Factors in Computing Systems - the CHI Conference

CHI is considered the most prestigious conference in the field of human-computer interaction. It is a multi-track conference held annually. Because of the heavy interest and involvement from industry leaders, large tech companies such as Microsoft, Google, and Apple are frequent participants and organizers of the conference. CHI is a top-tier conference with an acceptance rate between 20% and 25%.

Human-Computer Interaction is a broad field that includes both software and hardware. The goal of HCI is to develop methods, interfaces, and interaction techniques that make computing devices more usable and receptive to users' needs. These days computing devices include a wide variety of things such as cell phones, tablets, game consoles, and gadgets. Thanks to the advancement of sensor technologies, a whole set of rich interaction techniques has emerged that works with gestures, device orientation, and device motion.

Many HCI design principles, interface designs, and interaction techniques are relevant to Human-Robot Interaction. After all, a robot must have some kind of computer embedded in it (whether tiny or full-size, whether one or multiple), and in many HRI tasks the human user could very well be interacting with the robot through a regular computer or a touch-screen device (think tele-presence, for example). I have never attended the CHI conference before, but I have heard a lot about it from Dr. Dan Olsen at BYU, because he was always some kind of chair on the CHI organizing committee. In fact, he'll be the papers chair for the next CHI conference.

The next CHI conference, CHI 2012, will be held in Austin, Texas, USA.
Conference Dates: May 5-10, 2012
Submission Deadline: September 23, 2011

AI Robot Related Conferences and Journals For My Research Part 3





Every time you clip your fingernails, think about what you have achieved since you last clipped them. If you can remember what you did, then you have not wasted your life.



Sunday, April 05, 2009

Predictive Policing and Wilderness Search and Rescue -- Can AI Help?

Santa Cruz police officers arresting a woman at a location
flagged by a computer program as high risk for car burglaries.
(Photo Credit: ERICA GOODE)
I came across an article recently about how the Santa Cruz police department has been testing a new method: using a computer program to predict when and where crimes are likely to happen and then sending cops to those areas for proactive policing. Although the Tom Cruise movie "Minority Report" immediately came to mind, this is actually not the same thing. The program was developed by a group of researchers consisting of two mathematicians, an anthropologist, and a criminologist. It uses a mathematical model that reads in crime data from the same area for the past 8 years and then predicts the times and locations with high probability for certain types of crimes; apparently it can read in new data daily. This kind of program is attracting interest from law enforcement agencies because they are receiving a lot more calls for service with much smaller staffs due to the poor economy, which forces them to deploy resources more effectively. The article did not disclose much detail about the mathematical model (because it is a news article, not a research paper, duh), but it is probably safe to assume the model tries to identify patterns in past crime data and then assigns probabilities to each grid cell (500 feet by 500 feet).
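
Since the article gives no details, here is only a toy illustration of that grid idea: bin past incident coordinates into 500-foot cells and normalize the counts into probabilities. Everything here (the names, the region extent, the lack of any temporal model) is my own assumption and certainly far simpler than the real system:

    import numpy as np

    def incident_probability_grid(incidents, cell_ft=500, extent_ft=(10000, 10000)):
        # incidents: list of (x, y) coordinates in feet of past crimes,
        # assumed to fall inside the extent
        nx = extent_ft[0] // cell_ft
        ny = extent_ft[1] // cell_ft
        counts = np.zeros((nx, ny))
        for x, y in incidents:
            counts[int(x // cell_ft), int(y // cell_ft)] += 1
        return counts / counts.sum()   # empirical probability per grid cell

Re-run daily as new incident reports come in, the highest-probability cells would be the ones flagged for proactive patrols.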

A multi-modal probability distribution predicting likely places
to find the missing person and a UAV path generated by algorithm.
I found this article especially interesting because in my research I am solving a similar problem with a similar approach. My research focuses on how an Unmanned Aerial Vehicle (UAV) can be used more efficiently and effectively by searchers and rescuers in wilderness search and rescue operations. One part of the problem is predicting the likely places where the lost person might be found. Another part is generating an efficient path for the UAV so it covers those high-probability areas well within a limited flight time. I've developed a mathematical model (a Bayesian approach) that uses terrain features to predict the lost person's movement behavior and also incorporates human behavior patterns from past data in the form of GPS track logs.

In both cases, the problems arise because of limited resources.

It is very important to remember that no one has a real crystal ball, so predictions cannot be 100% correct. Also, the prediction takes the form of a probability distribution: in the long run (over many cases), the predictions should be correct a good percentage of the time, but for each individual case the prediction could very well be wrong. This applies to both predictive policing and wilderness search and rescue.

Another important question to ask is how you know your model is good and useful. This is difficult to answer because, again, we don't know the future. It is possible to pretend that part of our past data is actually from "the future," but there are many metrics: what if the model performs well with respect to one metric but terribly with respect to another? Which metric to use might depend on the individual case. For example, should the number of arrests be used to measure the effectiveness of the system? Sending police officers to certain areas might scare off criminals and actually result in a reduction in the number of arrests.

The predictive policing problem probably holds an advantage over the wilderness search and rescue problem because many more crimes are committed than people get lost in the wilderness, resulting in a much richer dataset. Also, path planning for police officers is a multiagent problem, while we only give the searchers and rescuers one UAV.

One problem with such predictive systems is that users might grow to rely on them completely. This is an issue of trust in automation: under-trust wastes resources, but over-trust can also lead to bad consequences. One thing to remember is that no matter how complicated the mathematical model is, it is still a simplified version of reality. Therefore, calibrating the user's trust becomes important, and the user really needs to know both the strengths and the weaknesses of the AI system. The AI system's output should be complementary to the user, reducing the user's work and pointing out places that might be overlooked. The user should also be able to incorporate his/her domain knowledge and experience into the AI system to manage its autonomy. In my research, I am actually designing tools that allow users to take advantage of their expertise and manage autonomy at three different scales. I'll probably talk more about that in another blog post.

Anyway, it's good to see Artificial Intelligence used in yet another real-life application!







You shall not carry brass knuckles in Texas because they are considered an illegal weapon (but in California you'll be just fine). Don't you love the complications of the US legal system, which, by the way, serves big corporations really well?





Friday, April 03, 2009

AI Robot Related Conferences and Journals For My Research (Part 1)

Since my dissertation will be a paper-based dissertation, I need to publish a bunch of papers, and my advisor has asked me to think about a schedule and a plan for where to submit them. There are many AI robot related conferences and journals out there, but only some of them are quality ones. In this blog post I'll list some of the top ones, discuss what each conference is about, and identify paper submission deadlines. So if you are also thinking about publishing papers in the AI robot field, look no further: I've already done the homework for you.

Top Conferences
==================================================================

AAAI -- Association for the Advancement of Artificial Intelligence

AAAI is a top-tier multi-track conference held yearly. It is a prestigious conference with an acceptance rate of roughly 25% to 30%. A very wide range of AI topics is covered at the conference, including multi-agent systems, machine learning, computer vision, knowledge representation and reasoning, natural language processing, search and planning, integrated intelligence, and robotics. The conference also includes many tutorials, workshops, consortia, exhibitions, and competitions.

The AAAI conference is devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. It also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.

I have been fortunate enough to attend the AAAI conference twice (2007 in Vancouver, BC, Canada, and 2010 in Atlanta, GA, USA) and published a paper at the 2010 conference in the Integrated Intelligence track. I was also invited to present a poster for the same paper. At that same conference, I watched and blogged about the Robot Chess Competition.

The next AAAI conference, AAAI-12 (the 26th), will be held in Toronto, Ontario, Canada.
Conference Dates: July 22-26, 2012
Submission Deadline: February 8, 2012 (roughly)


IJCAI -- International Joint Conference on Artificial Intelligence

IJCAI is also a top-tier multi-track conference, held biennially in odd-numbered years. The acceptance rate is roughly between 20% and 26%, which makes it even more selective than many AI journals. It also covers a wide range of AI topics such as multiagent systems, uncertainty in AI, and robotics and vision. One difference between AAAI and IJCAI is that IJCAI is more of a truly international conference, with paper submissions from all over the world.

One great thing about this conference is that all papers in the proceedings are freely available on its web site. It also provides video recordings of each session at the conference, so you don't have to be present to watch all the presentations as if you were really there. As far as I know, this is very rare for AI robotics conferences, and it is a wonderful service they are providing!!

I have not had a chance to submit anything to IJCAI. If possible, I'd really like to give it a try for the next one, and if you read the next line, you'll know why.

The next IJCAI conference, IJCAI-13 (the 23rd), will be held in Beijing, China.
Conference Dates: August 3-9, 2013
Submission Deadline: January 25, 2013 (roughly)

AI Robot Related Conferences and Journals For My Research Part 2


Picture of the Day:

We found this very friendly black poodle wandering on the street, so we took her home while we looked for her owner. She was such a wonderful little thing, and our entire family loved her. Luckily, the owner saw our posts online and contacted us. My daughter was very sad to see her go, but we are very happy that she could finally go home! So if you have a pet, make sure it has a tag with your address and phone number. It could be really handy at times!

Friday, March 27, 2009

Kung Fu Tetris with Kinect and FAAST -- How To Tutorial

[Ignore the date stamp. I just have a lot of blogging to make up for...]

I love Kung Fu; I am very passionate about Artificial Intelligence; and I like playing the game of Tetris. What happens if I put all three together?

Here, I proudly present to you: Kung Fu Tetris!


If you can't view the video above, try this or this or this or download video here.

So what do you think? If you think this is fun and want to do it in your home, read on. It's probably much simpler than you expect. In this blog post I will explain step-by-step how you can set this up yourself. Everything is pretty much off-the-shelf, except for a small configuration file, which you can download from my blog.

Required Components:

  1. The first thing you need is a Microsoft Kinect. Microsoft developed this depth-sensing device for the Xbox game console. Thanks to the open source community for writing the drivers, you can now connect it directly to your computer, and there's no need for an Xbox. You can buy a Kinect from local electronics stores or order it from Amazon for $139. And if you are a student like me, you can get it shipped to you in two days for free.
     
  2. You also need a computer to connect the Kinect to. It can be your desktop or your laptop, as long as it has a USB port. I used an Acer Aspire One netbook, which I bought for $179. Even with a netbook's slow processor and limited memory, the Kinect runs just fine.
     
  3. It also helps if you have a large TV/monitor, so you can see the game better while standing away from it. Most large-screen LCD TVs let you connect your computer as if it were an external monitor. I used a VGA-to-VGA cable (just like connecting to a regular LCD monitor) and set the LCD to RGB mode. Your mileage may vary.
     
  4. The next thing you need is a bit of space in front of the TV/monitor. Because Kung Fu Tetris requires tracking of your full body, you have to stay a good distance away from the Kinect so it can see your entire body. Besides, I am sure you don't want to accidentally smash your nice TV with your fierce kicks. So a living room is a better environment than your study.
     
  5. You also need the tetris game running on your computer. I just use a free online Flash version of the game.
     
  6. In order to use the Kinect with your computer, you need to install the Kinect driver and the following three free applications: OpenNI, NITE, and FAAST.
     
  7. Lastly, you need to create a small configuration file for the keyboard command and body gesture mapping. You can just download my version.
     

Step-By-Step Instructions:

1. Buy Kinect if you don't already have it. Amazon sells it for $130. No need to buy Xbox.
2. Connect your computer to a big monitor or TV.
3. Download and install the Kinect driver. Extract the msi file from the zip file and then double click the msi file to start the installation.
4. Download and install the latest version of OpenNI (NI stands for Natural Interaction). You can find the latest versions on this page. Unstable versions are just fine. The current latest 32-bit version v1.1.0.41 can be downloaded from this direct link. It's an msi file, so you can double click the file to install. Note that if you are running Windows 7, then you need the 64-bit version.
5. Download and install the latest version of PrimeSense NITE. You can find the latest versions on this page. Unstable versions are just fine. The current latest 32-bit version v1.3.1.5 can be downloaded from this direct link. Again, just double click the msi file to install. During NITE installation, use this free license key: 0KOIk2JeIBYClPWVnMoRKn5cdY4= when prompted.

6. Download and install the latest version of FAAST (Flexible Action and Articulated Skeleton Toolkit). You can find the latest version on this page. The current latest version 0.08 can be downloaded from this direct link. It's a zip file. All you have to do is to extract the zip file to a location on your local hard drive. Later, you just need to run the FAAST.exe file inside the folder. No other installation required.
7. Now plug the Kinect adapter into a power outlet.
8. Connect Kinect to your computer by plugging it into the USB port. You will be prompted to install three devices: Kinect Camera, Kinect Audio, and Kinect Motor. Since you have already installed the drivers, the system should automatically find the driver files for the installation. However if it fails to find the drivers, you can point to folder c:\Program Files\PrimeSense\SensorKinect\Driver\x86 (use \amd64 for 64-bit systems).
9. Sometimes the driver might not install Kinect Motor or Kinect Audio correctly. You can try the CL NUI Platform driver instead. The latest version can be found on this page. The current latest version v1.0.1210 can be downloaded from this direct link. The file is an exe file so you just have to double click to install. Drivers are installed to this folder c:\Program Files\Code Laboratories\CL NUI Platform\Driver.
10. Test if Kinect is working correctly by running the NiViewer program inside All Programs - OpenNI - Samples - NiViewer.
11. Open FAAST by running the FAAST.exe file. Click the Connect button to start the device. You should now see human shapes on screen.
12. Stand in front of the Kinect device and then hold a ‘Psi’ pose for several seconds until a stick figure appears, as shown in the image on the right.
13. Open a browser window and position it so it is side-by-side next to the FAAST application. Load the tetris game by going here (or here). I linked the flash file directly so you don't have to deal with the annoying flash ads on those web sites.
14. Right-click and then select save to download my configuration file from http://www.lannyland.com/download/KungFuTetris.cfg, and save it to a location you remember (such as your desktop).
15. In the FAAST application, click the Load button, browse to where you saved the configuration file, and load it.
16. Click the Start Emulator button, then select your tetris game so your browser is the active window. You might have to recalibrate by doing that ‘Psi’ pose again.
17. Start the tetris game and then start kicking. See if you can move the pieces. Remember the controls: 1) front kicks rotate the pieces, 2) side kicks move the pieces left or right, 3) a jump does a fast drop.

Be aware:

1. Do your warm up routines before playing this game. I AM SERIOUS! Otherwise you risk injuring yourself.
2. Don't stand too close to anybody/thing, because you might kick that body/thing and cause damage to him/her/it.
3. Kick with good speed and good form, otherwise weird things might happen.
4. Jump sometimes doesn't work too well. Just jump more. It's good for your heart.

That's it! Leave some comments if you find this helpful. Hope you get it working and start kicking! Enjoy!!


Disclaimer: I will not be held responsible if you
1) smash your TV/monitor with your fierce kicks,
2) injure yourself because of excessive or improper kicking,
3) become so addicted that you stop doing your share of the housework and irritate your better half, or
4) develop a habitual involuntary kicking syndrome and find yourself always throwing kicks at people near you.

By the way, the Chinese character on the back of my t-shirt is Tao, as in Taoism, meaning the way of life. So here's the Tao of the day:







Workout should be fun and enjoyable instead of torturous.
And playing tetris can be productive too!






Videos of the Day:

I think these two videos are very appropriate for today's Videos of the Day! You really have to finish watching the first video to appreciate the humor in the second one.

The original Wii Fit Ad

The Wii Fit Parody

Wednesday, February 11, 2009

AI and Robots: Hybrid Video Display of Visible and Infrared Might Help Search and Rescue

A recent article in New Scientist discussed a research project at Brigham Young University that evaluated the effect of combining visible light and infrared in real-time video footage for Wilderness Search and Rescue.

The research project was directed by Dr. Bryan Morse (who is also one of my committee members) and implemented by Nathan Rasmussen (a friend of mine, who earned his MS from this project and graduated in 2009). It is one of the many projects in the WiSAR research group at BYU that work on how to use mini-UAVs (Unmanned Aerial Vehicles) to support Wilderness Search and Rescue. The picture on the right shows Nathan throw-launching a UAV at a field trial in Elberta, Utah.

This research focuses on the human-robot interaction aspect and tries to determine which display method works better for human operators: displaying the visible light video side by side with the infrared video, or combining both in a hybrid display.

The UAV used in the experiments can already carry both a visible-spectrum camera and an infrared camera (BTW: very expensive). Visible light footage is useful for spotting objects with unnatural or irregular shapes and colors (viewed from above). Infrared footage, on the other hand, helps in detecting objects whose heat signatures differ from the surrounding environment (especially in early mornings, evenings, and nights, or in cold weather when heat signatures are more distinct).

In order to align footage from both sensors, a calibration grid was created with black wires on a white background. To allow the infrared camera to "see" the grid, an electric current was sent down the wires to heat them up. An algorithm is then used to align the vertices of the two grids to compensate for the slightly different viewing angles.
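
The article doesn't describe the alignment code, but the general idea of warping one camera's frame onto the other from matched grid vertices can be sketched with OpenCV. The homography choice and all names here are my assumptions (reasonable when the two cameras are mounted close together), not necessarily what the BYU implementation did:

    import cv2
    import numpy as np

    def align_infrared(ir_frame, ir_pts, vis_pts, vis_size):
        # ir_pts, vis_pts: N x 2 float32 arrays of matching grid vertices
        # vis_size: (width, height) of the visible-light frame
        H, _ = cv2.findHomography(ir_pts, vis_pts, cv2.RANSAC)
        return cv2.warpPerspective(ir_frame, H, vis_size)

    def hybrid_frame(vis_frame, ir_aligned, alpha=0.5):
        # a naive blend; the frames must share size, type, and channel count
        return cv2.addWeighted(vis_frame, 1 - alpha, ir_aligned, alpha, 0)
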
Once the hybrid view was possible, a user study was performed in which students watched UAV videos in both modes and tried to identify suspicious objects while listening to audio signals (counting beeps as a secondary task, in order to measure mental workload). I happened to be one of the test subjects, and my hard work earned me some delicious chocolates.

Experiment results show that people who viewed the hybrid display performed much better on the secondary task of counting beeps. This suggests that the hybrid video is easier to interpret (requiring less mental work) and would allow the searcher to focus more on identifying objects in the fast-moving video stream.

The research was presented at the IEEE Workshop on Applications of Computer Vision in Snowbird, Utah, in December 2009. If you are interested in more details about this research, you can read Nathan's thesis (warning: 22.4 MB).

Picture of the Day:


Beautiful dusk sunshine mountain view from my house!

Thursday, February 05, 2009

My Research: BYU UAV Demo for Utah County Search and Rescue Team

On November 21, 2009, our research group, WiSAR (Wilderness Search and Rescue), demonstrated our UAV technologies to Utah County Search and Rescue team representatives at Elberta, Utah. Three search and rescue personnel participated in the demo, and one of them flew the UAV in a simulated search and rescue exercise.

In two previous blog posts I described BYU's research on using UAVs to support Wilderness Search and Rescue, and the UAVs' capabilities:

My Research: BYU UAV Demo Dry Run
Robot of the Day: UAVs at BYU

The demo was scheduled for 8:30-11:00 am at Elberta, Utah (in the middle of nowhere), about an hour's drive from the BYU campus. That meant we had to get there by 8 to set up and test the equipment. The previous day's weather forecast predicted snow showers, so I was assigned the task of picking up some hot chocolate from the BYU cafeteria so people wouldn't freeze to death!

Despite the fact that I had to deal with my 10-month-old son's high fever at 1:30 am, didn't really fall asleep until 3:30 am, and unconsciously turned off my alarm clock, I made it to the cafeteria only 5 minutes late. Then I waited another 25 minutes because they hadn't made the hot chocolate yet. By the time I arrived at the demo site at 8:30 am, it turned out the trailer had just gotten there too, so I didn't miss anything! It also turned out the weather forecast was way off: there was no snow at all, and it was going to be a great day!


Left to right, top to bottom: 1. BYU Cafeteria 2. Beautiful Utah mountains at Dawn
3. The lonely freeway 4. Driving down the highway 5. Good morning, Cows!
6. Gravel road with the destination in view (the ridge in the far distance).


I took the pictures above with an Android phone running NASA's GeoCam mobile client, so all the photos are geo-tagged with GPS location and camera orientation. You can actually view them in Google Earth, where you'll see the exact route I took on the map. Just download the zip file, unzip it, and then double-click the kml file.

Viewing pictures from Google Earth

The goal of the demo was to show real search and rescue workers how easy to use and how useful our UAV technologies are in support of search and rescue operations. In a simulated search and rescue mission, a member of the search and rescue team had to fly the UAV using our interface and locate the simulated missing person (a dummy placed in the wilderness). Students and professors from BYU also acted as aerial video analysts and ground searchers to assist the simulated search. The picture below shows a ground searcher scouting around in the distance, searching for the missing person. The ground searchers always wear bright-colored vests so they can be easily spotted by others (e.g., in the aerial videos) and don't get shot at by hunters. (I know, research is a dangerous profession!)


Ground searcher in a distance (click photo to enlarge)


After setting everything up, Ron Zeeman, a member of the Utah County Search and Rescue team, test-flew the UAV and completed a test drill (launch, manual control, fixed-pattern flying, and landing).

Left to right: 1. People busy setting things up 2. UAV at dawn 3. Last minute exercise

After the other Search and Rescue team members arrived, we explained how our UAV works and then started the simulated search and rescue mission. This time I was quite lucky and caught the flying UAV with my camera.

Left to right: 1. Two more professional searchers arrived 2. Two retired UAVs on display 3. The show is on now!

Left to right: 1 and 2. UAV in the air 3. UAV loitering above area of interest (click to enlarge)

Left to right: 1. The kind of junk people dump in the middle of nowhere 2. Debris from a camp fire 3. Real-time video mosaicing (frame stitching)

Eventually, the missing person was located in the aerial video and confirmed by the ground searchers. "Unfortunately," by the time we found "him," he was not breathing.


The "missing person" was found, breathless.

Technologies demonstrated included auto launch, auto land, various UAV control modes (carrot-and-stick, fixed-pattern flying, etc.), an integrated gimballed camera view in augmented virtuality, point-and-click gimballed camera control (separate from the UAV path), real-time video mosaicing, real-time video annotation and video zooming/scrubbing, and point-of-interest communication between the video GUI and the UAV control GUI.

Technologies not demonstrated but still in progress include automatic missing-person probability distribution generation, automatic path planning (based on the distribution), a see-ability metric to measure coverage quality, and automatic anomaly detection.

The demo was a great success! The professional searchers were pleasantly surprised by the ease of operating the UAV and the usefulness of the aerial video support. Their comments included, "That was so cool!" and "This could be very helpful!"




Video of the Day:

If the UAV sitting in our lab had a mind of its own,
it would have been singing this all night long...