Well, the admins at ShopperMagic not only felt so, they actually did so.
They decided to give a web server an early retirement in a manner that allowed them to "feel good," because it had kept support staff up for many nights trying to sort it out. The server got a reprogramming it would never forget: stuffed with fireworks, blue touch paper lit, everyone retired to a safe distance...
And if you'd rather avoid the fire and smoke (and the potential fire hazard), you could also just let nature, i.e., gravity, take its course.
When machines don't deliver the performance they are expected to provide, frustration can really build up for the user. And when that frustration exceeds a threshold, it sometimes turns into violent behavior. Many of you probably remember this famous video below from several years back:
And of course, who could ever forget this classic scene from the movie "Office Space":
Suppose the machine you are frustrated with is not a computer or a printer, but a robot. Given the current state of robotic technology and the complexity of the tasks involved, end users are probably even more likely to get frustrated with "intelligent" machines/robots such as this one:
Here's another example of someone getting really frustrated with a robot:
By the way, the robot can get just as frustrated.
So in cases like these, as the frustration builds up, would you still beat up the robot?
Or kick it?
Or blow it up?
With the first few videos, most people would probably find them hilarious despite the violence involved. With the last two videos, however, don't you feel something is not quite right? Something... well... maybe something immoral that makes you uncomfortable? If so, why is that?
Maybe it's the human form that bothers you? Maybe it's the animal-like behavior? Or the level of intelligence on display? This reminds me of something I read a long, long time ago. I don't remember who said it, or when, or where. It was a conversation about what kinds of animals one would eat, and the answer was, "If it talks back to me, then I won't eat it." Here a simple metric, language and communication capability, is used to decide whether an animal is intelligent enough. And if it is intelligent enough to talk back, then it would feel immoral to treat it as food. Of course, there are many metrics we can use to judge intelligence. So once we classify a robot as intelligent, would it feel immoral to hit it, or to treat it like a mere lifeless machine?
This sounds like very dangerous territory in robotics research, but it is a problem we will eventually have to face (perhaps not very far from now). So is there something we can do as designers (not lawyers or legislators) to address such issues? Should we make robots appear and sound very machine-like, or act dumb, to alleviate our moral guilt? Or should we make them more human-like to amplify it instead? I don't have the answer. Do you?
If you have a hard time falling asleep, try reading a Bayesian statistics textbook.