
Sunday, April 05, 2009

Predictive Policing and Wilderness Search and Rescue -- Can AI Help?

Santa Cruz police officers arresting a woman at a location
flagged by a computer program as high risk for car burglaries.
(Photo Credit: ERICA GOODE)
I came across an article recently about how the Santa Cruz police department has been testing a new method: using a computer program to predict when and where crimes are likely to happen, and then sending officers to those areas for proactive policing. Although the movie "Minority Report" starring Tom Cruise immediately came to my mind, this is actually not the same thing. The program was developed by a group of researchers consisting of two mathematicians, an anthropologist, and a criminologist. It uses a mathematical model that reads in crime data from the same area over the past 8 years and then predicts the times and locations with high probability for certain types of crime. The program can apparently ingest new data daily. This kind of program is attracting interest from law enforcement agencies because they are getting many more calls for service while staffing is much reduced due to the poor economy, which forces them to deploy resources more effectively. The article did not disclose much detail about the mathematical model (because it is a news article, not a research paper, duh), but it is probably safe to assume the model tries to identify patterns in past crime data and then assigns a probability to each grid cell (500 feet by 500 feet).
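Since the article gives no detail about the actual model, here is only a minimal sketch of what a grid-based approach *could* look like: bin past incidents into 500 ft cells, weight recent crimes more heavily, and normalize into a probability distribution. The function name, the exponential decay, and all parameters are my own assumptions, not anything from the Santa Cruz system.

```python
from collections import defaultdict

def predict_hotspots(crimes, cell_size=500.0, decay=0.99, top_n=5):
    """Score 500 ft x 500 ft grid cells from past crime records.

    crimes: list of (x_feet, y_feet, days_ago) tuples.
    Recent crimes count more via exponential decay in days_ago;
    scores are normalized into a probability distribution over cells.
    """
    scores = defaultdict(float)
    for x, y, days_ago in crimes:
        cell = (int(x // cell_size), int(y // cell_size))
        scores[cell] += decay ** days_ago
    total = sum(scores.values())
    probs = {cell: s / total for cell, s in scores.items()}
    # Return the top_n cells, most probable first
    return sorted(probs.items(), key=lambda kv: -kv[1])[:top_n]

# Toy example: two recent burglaries fall in the same cell,
# one old burglary falls in a distant cell.
hotspots = predict_hotspots([(120, 80, 1), (300, 90, 2), (2600, 2600, 400)])
print(hotspots[0][0])  # -> (0, 0), the cell with the two recent crimes
```

A real system would presumably condition on time of day, crime type, and nearby cells as well; this only illustrates the "probability per grid cell" idea.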

A multi-modal probability distribution predicting likely places
to find the missing person, and a UAV path generated by the algorithm.
I found this article especially interesting because in my research I am solving a similar problem with a similar approach. My research focuses on how an Unmanned Aerial Vehicle (UAV) can be used more efficiently and effectively by searchers and rescuers in wilderness search and rescue operations. One part of the problem is predicting where the lost person is likely to be found. Another part is generating an efficient path for the UAV so that it covers those high-probability areas well within its limited flight time. I've also developed a mathematical model (a Bayesian approach) that uses terrain features to predict the lost person's movement behavior and also incorporates human behavior patterns learned from past data in the form of GPS track logs.
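To give a flavor of the path-planning half of the problem, here is a toy greedy planner over a probability grid. This is a sketch of the general idea only, not the algorithm from my research: the UAV repeatedly moves to the neighboring cell with the highest remaining probability, and a visited cell's probability is consumed.

```python
def greedy_uav_path(prob_grid, start, flight_steps):
    """Greedily route a UAV over a probability grid.

    prob_grid: dict mapping (row, col) -> probability that the
    missing person is in that cell. Each step moves to the
    4-neighbor with the highest remaining probability; visiting
    a cell sets its probability to zero (it has been searched).
    """
    grid = dict(prob_grid)
    path = [start]
    pos = start
    grid[pos] = 0.0
    for _ in range(flight_steps):
        r, c = pos
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (r + dr, c + dc) in grid]
        if not neighbors:
            break
        pos = max(neighbors, key=lambda cell: grid[cell])
        path.append(pos)
        grid[pos] = 0.0
    return path

# Toy 3x3 grid with probability mass concentrated in two cells
grid = {(r, c): 0.0 for r in range(3) for c in range(3)}
grid[(0, 1)] = 0.5
grid[(0, 2)] = 0.3
path = greedy_uav_path(grid, (0, 0), 2)
print(path)  # -> [(0, 0), (0, 1), (0, 2)]
```

Greedy planning like this is myopic and can get stuck in local pockets of probability, which is exactly why planning a good path under a limited flight budget is a hard and interesting problem.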

In both cases, the problems arise because of limited resources.

It is very important to remember that no one has a real crystal ball, so predictions cannot be 100% correct. Also, the prediction takes the form of a probability distribution, meaning that in the long run (over many cases) the predictions should be correct a good percentage of the time, but for each individual case the prediction could very well be wrong. This applies to both predictive policing and wilderness search and rescue.

Another important question to ask is: how do you know your model is good and useful? This is difficult to answer because, again, we don't know the future. It is possible to pretend that part of our past data is actually from "the future," but there are many possible metrics. What if the model performs well with respect to one metric but terribly with respect to another? Which metric to use might depend on the individual case. For example, should the number of arrests be used to measure the effectiveness of the system? Maybe sending police officers to certain areas would scare off criminals and actually result in a reduction in the number of arrests.
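The point that different metrics can tell different stories is easy to see with a toy holdout evaluation. The metric names and numbers below are my own illustrative choices, not anything from the Santa Cruz study:

```python
def hit_rate(predicted_cells, actual_cells):
    """Fraction of actual incident cells that the model flagged."""
    return len(set(predicted_cells) & set(actual_cells)) / len(set(actual_cells))

def precision(predicted_cells, actual_cells):
    """Fraction of flagged cells that actually saw an incident."""
    return len(set(predicted_cells) & set(actual_cells)) / len(set(predicted_cells))

# Pretend held-out "future": the model flagged three cells,
# incidents actually occurred in four cells, two of them flagged.
predicted = [(0, 0), (0, 1), (4, 4)]
actual = [(0, 0), (0, 1), (2, 2), (3, 3)]
print(hit_rate(predicted, actual))   # 0.5  -- only half the incidents were covered
print(precision(predicted, actual))  # ~0.667 -- but two of three flags were right
```

The same predictions look mediocre by one metric and decent by the other, so the choice of metric effectively decides whether the system "works."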

The predictive policing problem probably holds an advantage over the wilderness search and rescue problem because far more crimes are committed than people get lost in the wilderness, resulting in a much richer dataset. Also, path planning for police officers is a multi-agent problem, while we only give the searchers and rescuers one UAV.

One problem with such predictive systems is that users might grow to rely on the system completely. This is an issue of trust in automation. Under-trust might waste resources, but over-trust might also lead to bad consequences. One thing to remember is that no matter how complicated the mathematical model is, it is still a simplified version of reality. Therefore, calibrating the user's trust becomes an important issue, and the user really needs to know both the strengths and the weaknesses of the AI system. The output of the AI system should complement the user, reducing the user's workload and pointing out places that might otherwise be overlooked. The user should also be able to incorporate his/her domain knowledge and experience into the AI system to manage the autonomy. In my research, I am actually designing tools that allow users to take advantage of their expertise and manage autonomy at three different scales. I'll probably talk more about that in another blog post.

Anyway, it's good to see Artificial Intelligence used in yet another real-life application!
