The book contains 20 short, true stories of how design errors in various technologies led to terrible disasters, often at the cost of many lives. Among them:
- the shut-off handle on a space capsule that caused the deaths of three Russian cosmonauts because it took too long to turn;
- the autopilot/manual control lever on a supertanker that the captain, in a panic, slipped into an unintended third control mode, sending the ship onto a rock and spilling millions of gallons of oil into the ocean;
- the Airbus A320 that crashed during an air show demo, killing passengers on board, because the pilot was over-confident in the plane's autopilot;
- the ferry that capsized because the captain didn't know the bow cargo doors were still open when the ship set off.

The key message the author tries to get across is that designers of technology MUST take human factors into consideration, especially the errors people are likely to make and the limits of their abilities in tense, high-pressure situations. Learning from mistakes can be far too costly, because Bad Designs Kill.
The title of the book comes from the first story in the collection. On March 23, 1986, Ray Cox, a patient in his 30s being treated for a tumor on his back, was receiving his ninth regular treatment with the Therac-25 machine. The Therac-25 was a highly sophisticated machine capable of delivering high-energy radiation to cancer cells at any point on or in a person's body with pinpoint accuracy. The machine could operate in two modes: the high-power "x-ray" mode and the lower-power "electron beam" mode. Ray was to receive the lower-power "electron beam" mode; he would not feel a thing.

When Mary Beth, the radiotherapy technician, started the procedure from the control room (a separate room), she mistakenly typed "x", the command for the "x-ray" mode. Noticing her mistake, she quickly moved the cursor back and used the "edit" function to change the command to "e", the command for the "electron beam" mode. She had no idea that her quick sequence of keystrokes, completed within eight seconds, was something the machine had never been run through before. The machine retracted the thick metal plate used in "x-ray" mode but left the power setting on maximum.

When Mary entered the command to initiate the treatment, Ray saw a blue flash and felt as if he had been hit by a lightning bolt. Back in the control room, an error message popped up on the monitor: "Malfunction 54, treatment not initiated." Puzzled, Mary re-entered the command to initiate the treatment. Ray was rolling and screaming in pain when he was struck the second time, and he began to call out loud for help. Soon the third shock struck, and Ray jumped from the table and ran to the door. Nobody at the hospital knew what was going on, and only after the same thing happened to another patient did they realize something was seriously wrong with the machine. Instead of receiving 200 rads of radiation, Ray had been hit with 25,000 rads.
In the next few months, tissues hit by the beams died, leaving massive lesions in Ray's upper body. "Captain Kirk forgot to put the machine on stun," said Ray Cox, trying to keep his humor. Four months later, Ray Cox died.
At least three things went terribly wrong in this tragic incident:
- The unexpected key sequence within the short time window should never have left the power setting on maximum. The kind of operating mistake Mary made is a typical human error and should have been anticipated and tested against.
- The error message should have been clearer, at the very least telling the operator what had gone wrong and warning that the beam had already been fired. That alone would have stopped Mary from firing the beam again and again.
- A strict procedure should have been in place to make sure the patient undergoing treatment was monitored in real time. This, too, would have spared Ray the two additional shots (whether or not that would have made the difference between life and death in his case).
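The first failure above can be illustrated in code. The sketch below is hypothetical (it is not the Therac-25's actual software, and all names are invented): it shows two defensive habits that would have caught the bug described in the story, namely updating all mode-dependent state together so a quick edit cannot leave it inconsistent, and an independent interlock that refuses to fire when the state is inconsistent anyway.

```python
# Hypothetical sketch of a beam controller with a software interlock.
# Not the Therac-25's real design; names and states are illustrative.

XRAY, ELECTRON = "x-ray", "electron"

class BeamController:
    def __init__(self):
        self.mode = ELECTRON
        self.power = "low"
        self.plate_in_place = False  # metal plate shields the patient in x-ray mode

    def set_mode(self, mode):
        # Update every piece of mode-dependent state in one place, so that
        # an operator's quick edit cannot leave power and shielding stale.
        self.mode = mode
        self.power = "high" if mode == XRAY else "low"
        self.plate_in_place = (mode == XRAY)

    def fire(self):
        # Independent interlock: cross-check the state before firing and
        # fail safely (refuse to fire) instead of trusting earlier steps.
        if self.mode == XRAY and not self.plate_in_place:
            raise RuntimeError("Interlock: x-ray mode without shielding plate")
        if self.mode == ELECTRON and self.power != "low":
            raise RuntimeError("Interlock: electron mode with power on maximum")
        return "fired %s beam at %s power" % (self.mode, self.power)
```

In Ray's case the machine was effectively in the second inconsistent state: electron mode selected, plate retracted, but power left on maximum. With a final cross-check like `fire()` above, the machine would have refused to deliver the beam rather than delivering a massive overdose.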
As a researcher in AI and robotics, I will likely be designing advanced and complex systems for use in real applications. While enjoying the thrill and fun of designing cool toys, it is very important to always keep in mind the responsibilities we hold, especially toward the people who work with our automation. We should always take into consideration the kinds of errors humans might make and design our systems to handle those situations gracefully. As automation and robots enter many aspects of people's lives (I am talking about direct interaction here, not secluded factory settings), we have to be utterly careful to make sure people don't get injured or killed.
[Photo: UAV used in our field trial]
[Photo: Squaw Peak, Provo, Utah]
Video of the Day:
For only $4 a month, you can achieve peace of mind in a world full of crime and robots, with Old Glory Insurance!
Hilarious video! And a good point about a very serious subject.
Having worked as a software engineer on medical systems, I know how difficult it can be to anticipate such problems in advance. Your point about it being better to allow the system to fail than to injure people is spot on.