In recent years, healthcare has been learning a lot of lessons from aviation — including checklists and “Crew Resource Management,” as written about in the excellent books The Checklist Manifesto and Why Hospitals Should Fly. Aviation has gotten much safer over the past few decades, due mainly to improvements in teamwork and human factors.
Two recent episodes, one from commercial aviation and one from a major Boston hospital, illustrate that neither industry has everything solved in terms of error proofing and designing systems that ensure quality.
First, the aviation story. The WSJ reported last week that the FAA ordered airlines to install a software update in 777 airplanes. I haven't heard anywhere near the outcry that we hear over the Toyota software problems.
…Federal Aviation Administration ordered the fix to prevent problems when the autopilot system is inadvertently on while a Boeing 777 aircraft is rolling down the runway just before takeoff.
This seems like a pretty serious problem if the autopilot can be turned on accidentally by the pilot. That isn't engineered out? This isn't error proofed? If the autopilot is accidentally engaged or isn't turned off after pre-flight testing, the plane might not get enough lift and the pilot would have to abort takeoff, leading to the possibility of the plane skidding off the runway. Boeing reassures us:
A Boeing spokeswoman said that since 1995, when the Boeing twin-engine 777 was introduced, the planes have made a total of 4.8 million flights without any injuries or accidents attributed to such autopilot issues.
Whew. But a lean thinker realizes that good results aren't enough — you also need a good process to ensure that the safety continues. Rather than just telling the pilots to be careful, changes are recommended:
Around the time of the January incidents, Boeing issued a service bulletin alerting airlines to install new autopilot software making it impossible for pilots to engage autopilots before takeoffs. Compliance with such bulletins is voluntary. Boeing called for the software changes to be completed within a year.
“Making it impossible” – now there's mistake proofing. But the changes are “voluntary”?? If you're on a 777 flight, you'd better ask the flight crew to be careful (well, only if you're in first class near the cockpit). I assume their solution doesn't involve hanging a bunch of warning signs in the cockpit, as I've seen in many hospitals (which love their “be careful” signs). “Be careful” warnings, regardless of the number, are NOT the path to quality in any setting.
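To make that contrast concrete, here's a minimal sketch (in Python, purely illustrative, and certainly not Boeing's actual flight software) of the difference between a “be careful” warning and a true interlock: the unsafe request is simply refused. All of the names and thresholds below are hypothetical.

```python
# A minimal, hypothetical sketch of an interlock (poka-yoke), NOT Boeing's
# actual software. The point: the unsafe action is rejected outright,
# rather than relying on a warning sign or a careful pilot.

class AutopilotInterlock:
    def __init__(self):
        self.engaged = False

    def request_engage(self, weight_on_wheels: bool) -> bool:
        """Refuse to engage the autopilot while the aircraft is on the ground."""
        if weight_on_wheels:
            return False  # request ignored; engaging is impossible on the runway
        self.engaged = True
        return True

interlock = AutopilotInterlock()
print(interlock.request_engage(weight_on_wheels=True))   # False: still on the runway
print(interlock.request_engage(weight_on_wheels=False))  # True: airborne
```

That's the essence of mistake proofing: the pilot doesn't have to remember anything, because the system won't allow the error in the first place.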
Now onto the healthcare example. The Boston Globe headline and sub-headline read:
“MGH death spurs review of patient monitors:
Heart alarm was off; device issues spotlight a growing national problem”
The lead of the article reads:
A Massachusetts General Hospital patient died last month after the alarm on a heart monitor was inadvertently left off, delaying the response of nurses and doctors to the patient's medical crisis.
Just as you might ask, “How can pilots leave the autopilot on for takeoff?”, we can ask how a heart alarm gets turned off, meaning it wasn't able to alert the medical staff when the patient was in trouble.
The immediate response is notable in that the hospital was looking at process and not looking to blame individuals:
Meyer said hospital administrators are not interested in assigning blame to individual staff members because that would be unfair and counterproductive in trying to encourage open reporting and discussion of problems. Rather, he said, hospital officials want to fix the underlying systemic issues with monitoring patients, which is why they disabled the alarms' off switches. In an e-mail to Mass. General employees Friday, president Peter Slavin praised staff for reporting the incident to hospital safety officials.
Now there's the right response (albeit a reactive one). It's too bad the problem couldn't have been identified before a patient died; this is where the FMEA methodology might be helpful, as sketched below. We can only speculate about whether the problem had gone undetected before this incident: were there near misses from similar alarm problems? We don't know.
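For readers who haven't used FMEA, the mechanics are simple enough to sketch: score each potential failure mode for severity, likelihood of occurrence, and likelihood of detection (typically 1 to 10 each), multiply them into a Risk Priority Number, and work on the highest-risk items before harm occurs. The failure modes and scores below are invented purely for illustration, not taken from any actual MGH analysis.

```python
# A minimal FMEA sketch: rate each failure mode for severity, occurrence,
# and detection (1-10 each), then rank by Risk Priority Number (RPN).
# The failure modes and scores below are made up purely for illustration.

failure_modes = [
    # (description,                        severity, occurrence, detection)
    ("Alarm switched off and left off",          10,          4,         8),
    ("Alarm inaudible over unit noise",           9,          6,         6),
    ("Staff confuse 'pause' with 'off'",          9,          5,         7),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for desc, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:4d}  {desc}")
```

An exercise like this, done proactively, is exactly where a risk like “the alarm can be switched off and left off” should surface before a patient is harmed.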
The investigation led to two different responses:
- Inspect and disable the off switch on all 1100 monitors at MGH (good, but why wasn't this identified as a risk earlier?)
- Assign a nurse to each unit to specifically listen for alarms because “sometimes even functioning alarms can't be heard over the din of a busy ward.” (a workaround?)
I hope they are able to get to the root cause of the noise. Assigning a nurse to listen for alarms adds cost; is that the only way to solve this problem? I once spent seven weeks in a nursing unit for a lean project – a telemetry unit with heart monitors at the nursing station. There was a constant din of beeping and pinging, and a constant effort, for a number of reasons, to keep noise down in the unit – a battle that was hard to win. The beeping quickly becomes background noise that's easy to tune out… or it gets more annoying, depending on the person.
It sometimes reminded me of this classic scene from Airplane II with William Shatner:
http://www.youtube.com/watch?v=xmSSIBjgjqE
OK, comedy break is over — this is serious stuff. More from the article:
George Mills, a senior engineer at the Joint Commission, said early in the past decade, inspectors sometimes found that hospital staffers were so overwhelmed by alarms, they were muzzling them with gauze and tape and otherwise blunting the noise. After an educational effort by the organization in 2005, employees stopped overriding alarms and manufacturers improved new machines, in part by making them harder to turn off, he said.
The Joint Commission has said, even with those changes, that the problem has been getting worse in recent years, leading to more awareness campaigns and education efforts.
I know this is 20/20 hindsight, but it's a shame MGH couldn't have been proactive in recognizing this risk. This is described as a “national problem” that led to over 200 known deaths between 2002 and 2004, and we can assume there were additional cases that weren't reported as errors or adverse events.
Numerous deaths have been reported because alarms malfunctioned or were turned off, ignored, or unheard.
So MGH is learning from the incident and they're reacting. What about other hospitals? Does each hospital need a patient death as a wakeup call to action? We should all hope not. MGH mentioned that it's a GE-made alarm. Does GE get involved like Boeing, to notify other customers of the systemic risk? Why didn't MGH react based on the incidents at other hospitals? GE wouldn't really comment in the article.
There are some other issues raised involving training or standardized work. From the article:
One possibility, Erickson said, is that someone turned off the switch during a previous patient crisis because they believed it would pause the alarm, not turn it off for good.
The article also highlights that equipment from different manufacturers works differently, so staff sometimes get confused. This is a problem that anesthesia has already solved: the two main manufacturers (the Coke and Pepsi of the industry) standardized the direction knobs turn to increase or decrease the amount of anesthesia, after mistakes were caused by the lack of standardization (clockwise on the “Coke” machine did the opposite of clockwise on the “Pepsi”).
MGH believes the noise level in the unit wasn't the reason the alarm was turned off, as has been the case in other hospitals. They have, though, reacted by increasing the volume of alarms (which might further increase the noise, leading to more instances of other alarms not being heard, or to staff being more frustrated by the noise?) and by adding speakers. Again, this seems like more of a workaround than a root-cause fix. There's also going to be more training.
Dr. Lucian Leape (who ripped medical schools in his recent report) asked the lean thinking question:
Dr. Lucian Leape, a specialist on medical safety at the Harvard School of Public Health, said one key question for manufacturers is why they would ever make a machine that allows hospital staff to turn off a critical alarm.
“Every piece of equipment we have has a failure rate, things go wrong,” he said. But “how come there are devices where this is possible? Why do you have a monitor you can turn off?”
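One possible way to answer Leape's question in design terms: don't offer an “off” at all, only a short, self-expiring pause. The sketch below is just illustrative (hypothetical class names and time limits, not GE's actual design), but it shows how leaving a critical alarm silenced indefinitely can become impossible by construction.

```python
# A hypothetical sketch of a monitor alarm with no "off" switch at all:
# the only action available is a short silence that re-arms itself.
# Class names and time limits are invented, not GE's actual design.
import time

class CriticalAlarm:
    MAX_SILENCE_SECONDS = 120  # illustrative cap, not a vendor specification

    def __init__(self):
        self._silenced_until = 0.0

    def silence(self, seconds: float = 60.0) -> None:
        """Pause the audible alarm briefly; there is deliberately no 'off'."""
        self._silenced_until = time.time() + min(seconds, self.MAX_SILENCE_SECONDS)

    def should_sound(self, patient_in_crisis: bool) -> bool:
        """Once the pause expires, the alarm always comes back."""
        return patient_in_crisis and time.time() >= self._silenced_until
```

The specific numbers aren't the point; the point is that the unsafe state can't quietly persist, because the system re-arms itself without anyone having to remember.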
MGH's Meyer, as with the no-blame response, has this much right:
Meyer said. “Our priority is to find out what happened, why did it happen, and what can we do to make sure it never happens again.”
Are other hospitals responding to make sure “it never happens” minus the “again”? To the hospital folks reading, what are you seeing here? How much can we ask organizations and people for in terms of being proactive? What do you think the “Lean solution” (or countermeasures) would entail?
What do you think? Please scroll down (or click) to post a comment.
I just saw something on Fail Blog that reminded me of Dr. Lucian Leape’s question: why would anyone ever make a machine that allows hospital staff to turn off a critical alarm?
http://failblog.org/2010/03/22/smoke-alarm-fail/
Totally ridiculous. I don’t understand why there is a process regarding turning off critical alarms in a hospital.
Surely you can’t save that much money doing it and they’ve already said the noise isn’t to blame.
I may have missed something but I don’t get why they were turned off in the first place…
I don’t know. From the article, it sounds like the alarm in question was accidentally not turned on… bad process instead of it being turned off due to annoyance?