Wednesday, February 11, 2009

AI and Robots: Hybrid Video Display of Visible and Infrared Might Help Search and Rescue

A recent article in New Scientist discussed a research project at Brigham Young University that evaluated how combining visible-light and infrared imagery in real-time video footage affects Wilderness Search and Rescue.

The research project was directed by Dr. Bryan Morse (also one of my committee members) and implemented by Nathan Rasmussen (a friend of mine, who earned his MS with this project and graduated in 2009). It is one of many projects in the WiSAR research group at BYU, which studies how mini-UAVs (Unmanned Aerial Vehicles) can support Wilderness Search and Rescue. The picture on the right shows Nathan throw-launching a UAV during a field trial near Elberta, Utah.

This research focuses on the human-robot interaction aspect and tries to determine which display method works better for human operators: showing the visible-light video side by side with the infrared video, or combining both into a single hybrid display.

The UAV used in the experiments can already carry both a visual-spectrum camera and an infrared camera (BTW: very expensive). Visible-light footage is useful for spotting objects with unnatural or irregular shapes and colors (as seen from above). Infrared footage, on the other hand, helps detect objects whose heat signatures stand out from the surrounding environment (especially in early mornings, evenings, and at night, or in cold weather, when heat signatures are more distinct).
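The paper's actual fusion technique isn't described in this post, but to give a feel for what a hybrid display does, here is a toy sketch of one simple fusion approach: tinting the "hot" infrared pixels on top of the visible frame. Everything here (the function name, the threshold, the orange tint) is my own illustrative choice, assuming the two frames are already aligned and stored as floats in [0, 1].

```python
import numpy as np

def fuse_frames(visible, infrared, threshold=0.7, tint=(1.0, 0.3, 0.0)):
    """Overlay hot IR regions onto the visible frame as an orange tint.

    visible:  HxWx3 float array in [0, 1]
    infrared: HxW   float array (any range; normalized internally)
    """
    rng = infrared.max() - infrared.min()
    ir = (infrared - infrared.min()) / max(rng, 1e-9)  # normalize to [0, 1]
    mask = ir > threshold                              # "hot" pixels only
    out = visible.copy()
    for c in range(3):                                 # blend tint into each channel
        out[..., c][mask] = 0.5 * out[..., c][mask] + 0.5 * tint[c]
    return out
```

A real system would warp one frame into the other's coordinate system first (see the calibration step below) and likely use a more principled fusion, but even this crude overlay shows how a single image can carry both kinds of information at once.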

To align footage from the two sensors, a calibration grid was created from black wires on a white background. To let the infrared camera "see" the grid, an electric current was sent down the wires to heat them up. An algorithm then aligns the vertices of the two grids to compensate for the cameras' slightly different viewing angles.
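A standard way to align two camera views of a planar target like this grid (I'm not claiming it is exactly what Nathan's algorithm did) is to estimate a homography from corresponding grid vertices using the direct linear transform (DLT), then warp one view into the other. A minimal numpy sketch, assuming the vertex correspondences have already been extracted:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points.

    src, dst: lists of (x, y) correspondences (at least 4 pairs).
    Uses the DLT: stack two linear constraints per pair, then take the
    SVD null-space vector as the 9 entries of H.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix scale so H[2,2] == 1

def warp_point(H, x, y):
    """Apply homography H to point (x, y) in homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With the heated-wire grid, the vertex positions are easy to detect in both the visible and infrared images, which is exactly what makes it a convenient calibration target: once H is known, every infrared pixel can be mapped into the visible frame for the hybrid display.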

Once the hybrid view was working, a user study was performed in which student test subjects watched UAV videos under both display methods and tried to identify suspicious objects while listening to audio signals (counting beeps as a secondary task, to measure mental workload). I happened to be one of the test subjects, and my hard work earned me some delicious chocolates.

Experiment results showed that people using the hybrid display performed much better on the secondary task of counting beeps. This suggests that the hybrid video is easier to interpret (requiring less mental effort), which would let a searcher focus more on identifying objects in the fast-moving video stream.

The research was presented at the Applications of Computer Vision conference in Snowbird, Utah, in December 2009. If you are interested in more details about this research, you can read Nathan's thesis (warning: 22.4MB).

Picture of the Day:

Beautiful view of sunlit mountains at dusk from my house!