Santa Cruz police officers arresting a woman at a location flagged by a computer program as high risk for car burglaries. (Photo Credit: ERICA GOODE)
A multi-modal probability distribution predicting likely places to find the missing person, and a UAV path generated by an algorithm.
In both cases, the problems arise because of limited resources.
It is important to remember that no one has a crystal ball, so predictions cannot be 100% correct. Moreover, the prediction comes in the form of a probability distribution: in the long run (over many cases), the predictions will be right a good percentage of the time, but for any individual case the prediction could very well be wrong. This applies to both predictive policing and wilderness search and rescue.
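To make this concrete, here is a minimal sketch in Python (with made-up numbers, not from any real system) of why a perfectly calibrated probabilistic prediction can still be wrong on any single case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: each day it flags one grid cell and reports
# a 30% probability that an incident occurs in the flagged cell.
p_incident = 0.3
n_days = 10_000

# Simulate outcomes: an incident happens in the flagged cell with
# exactly the predicted probability (perfect calibration).
incident_in_flagged_cell = rng.random(n_days) < p_incident

print(f"Predicted probability: {p_incident:.0%}")
print(f"Observed frequency:    {incident_in_flagged_cell.mean():.0%}")
# Over many days the frequencies match, yet on roughly 70% of
# individual days the flagged cell sees no incident at all.
```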
Another important question to ask is: how do you know your model is good and useful? This is difficult to answer because, again, we don't know the future. We can pretend that part of our past data is actually from "the future" and evaluate against it, but there are many possible metrics; what if the model performs well on one metric but terribly on another? Which metric to use may depend on the individual application. For example, should the number of arrests be used to measure the effectiveness of the system? Sending police officers to certain areas might scare off criminals and actually reduce the number of arrests, even when the system is working.
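As a toy illustration (again with synthetic numbers, not real crime data), here is how two reasonable metrics can disagree when scoring hotspot predictions against held-out "future" incidents:

```python
import numpy as np

# Synthetic evaluation: 100 grid cells, the model flags 10 as hotspots.
n_cells = 100
flagged = np.zeros(n_cells, dtype=bool)
flagged[:10] = True

# Pretend these are "future" incident counts held out from training.
incidents = np.zeros(n_cells, dtype=int)
incidents[:3] = [5, 4, 3]   # a few flagged cells catch many incidents
incidents[50:58] = 1        # many unflagged cells each see one incident

# Metric 1: hit rate -- fraction of all incidents inside flagged cells.
hit_rate = incidents[flagged].sum() / incidents.sum()

# Metric 2: precision -- fraction of flagged cells with any incident.
precision = (incidents[flagged] > 0).mean()

print(f"Hit rate:  {hit_rate:.0%}")   # 60% -- looks good
print(f"Precision: {precision:.0%}")  # 30% -- looks bad
```

The same model looks effective or wasteful depending purely on which number you report.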
The predictive policing problem probably holds a data advantage over the wilderness search and rescue problem: far more crimes are committed than people get lost in the wilderness, resulting in a much richer dataset. On the other hand, path planning for police officers is a multi-agent problem, while in search and rescue we only give the searchers a single UAV.
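For the single-UAV case, a very simple baseline planner (a sketch of one possible greedy approach, not the algorithm behind the figure above) just moves to whichever neighboring cell holds the most remaining probability mass:

```python
import numpy as np

def greedy_uav_path(prob_grid, start, n_steps):
    """Greedily move to the 4-neighbor cell with the highest remaining
    probability, zeroing out cells as they are searched."""
    grid = prob_grid.copy()
    rows, cols = grid.shape
    r, c = start
    path = [(r, c)]
    grid[r, c] = 0.0  # starting cell counts as searched
    for _ in range(n_steps):
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        r, c = max(neighbors, key=lambda rc: grid[rc])
        path.append((r, c))
        grid[r, c] = 0.0
    return path

# Toy bimodal distribution over a 5x5 search area.
grid = np.zeros((5, 5))
grid[1, 1], grid[3, 4] = 0.4, 0.5
grid[1, 2], grid[3, 3] = 0.05, 0.05
print(greedy_uav_path(grid, start=(0, 0), n_steps=6))
```

A real planner would look further ahead than one step (greedy search stalls on flat regions between modes), but even this sketch shows why a multi-modal distribution makes the routing problem interesting.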
One problem with such predictive systems is that users might grow to rely on them completely. This is an issue of trust in automation: under-trust wastes resources, but over-trust can also lead to bad consequences. One thing to remember is that no matter how complicated the mathematical model is, it is still a simplified version of reality. Calibrating the user's trust therefore becomes an important issue, and the user really needs to know both the strengths and the weaknesses of the AI system. The output of the AI system should complement the user, reducing the user's workload and pointing out places that might otherwise be overlooked. The user should also be able to incorporate his/her domain knowledge and experience into the AI system to manage the autonomy. In my research, I am designing tools that allow users to take advantage of their expertise and manage autonomy at three different scales. I'll probably talk more about that in another blog post.
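One lightweight way to let a user inject domain knowledge (a sketch of the general idea only, not the actual tools from my research) is to let the user reweight the model's distribution and renormalize:

```python
import numpy as np

def apply_user_weights(model_dist, user_weights):
    """Combine the model's probability map with user-supplied multipliers
    (e.g., 0 = 'impossible terrain', 2 = 'search here first'), then
    renormalize so the result is still a probability distribution."""
    combined = model_dist * user_weights
    return combined / combined.sum()

model_dist = np.array([0.1, 0.4, 0.3, 0.2])
user_weights = np.array([0.0, 1.0, 2.0, 1.0])  # expert rules out cell 0
print(apply_user_weights(model_dist, user_weights))
# -> [0.    0.333 0.5   0.167] (approximately)
```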
Anyway, it's good to see Artificial Intelligence used in yet another real-life application!