August 28, 2006

The System Failed, 49 Killed – Error in Comair crash fairly common

As a frequent flyer, thinking about the systemic problems, the combinations of errors, and the multiple human errors that led to this horrible tragedy almost makes me physically sick. Calling the errors "common" points to causes that are systemic rather than individual. Yes, a person (or people) made mistakes. But we need to account for that in our system design. We really need lean thinking, real problem solving, and less blame. The system needs to be error-proofed.

The USA Today headline above is the least “blame-y” of many I saw today. Was it strictly “pilot error?” Some headlines will blame him. Why were the broken runway lights not fixed?

“The planning discussions with air traffic controllers and the flight crew were about a takeoff from runway 22,” a 7,000-foot runway suited for jets at Lexington’s Blue Grass Airport, National Transportation Safety Board member Debbie Hersman said.

Instead, the Comair jet, bound for Atlanta on Sunday morning, took runway 26. That runway is half as long as runway 22 and was unlit because its runway lights were out of service, Hersman said in a media briefing.

The aviation system failed. Many things went wrong. Many mistakes were made. Why can't we be more proactive and error proof this system? Will we see a headline that says "System Kills 49 Passengers?" Unlikely. Will we just blame a dead man and move on? Why don't aircraft systems warn the pilot that he is literally pointed in the wrong direction? Aircraft already carry gyroscopic compasses that report their heading.
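To make that point concrete: runway numbers encode the runway's approximate magnetic heading in tens of degrees (runway 22 ≈ 220°, runway 26 ≈ 260°), so the cross-check is simple arithmetic. Here is a minimal sketch, in Python with illustrative function names (real avionics logic is far more involved and certified very differently), of the kind of poka-yoke an onboard system could perform before the takeoff roll:

```python
def runway_heading_deg(runway_number: int) -> int:
    """Runway numbers encode magnetic heading in tens of degrees
    (runway 22 is roughly 220 degrees, runway 26 roughly 260)."""
    return (runway_number * 10) % 360

def heading_mismatch(actual_heading_deg: float, cleared_runway: int,
                     tolerance_deg: float = 10.0) -> bool:
    """Return True if the aircraft's compass heading differs from the
    cleared runway's nominal heading by more than the tolerance."""
    expected = runway_heading_deg(cleared_runway)
    # Shortest angular difference on a 360-degree circle
    diff = abs((actual_heading_deg - expected + 180) % 360 - 180)
    return diff > tolerance_deg

# Comair 5191 scenario: cleared for runway 22 (~220 degrees) but
# lined up on runway 26 (~260 degrees) -> a 40-degree mismatch.
assert heading_mismatch(260, 22)      # wrong runway: alarm
assert not heading_mismatch(221, 22)  # correct runway: no alarm
```

A check like this is cheap precisely because it uses data the aircraft already has; the hard part is integration, certification, and alerting design, not the arithmetic.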

Why don't we have ground radar systems (or sensors in the runways) that would warn the crew: "HEY, WRONG RUNWAY"? I read articles that said "it was the pilot's responsibility" to be on the correct runway. I'm all for personal responsibility, but when human lives are at stake, we need well-designed SYSTEMS to protect us from human error. We need a high-priority national effort to use FMEA and other proactive problem-solving methods NOW. We should rely on process and "creativity" before spending millions on high-tech solutions (not that high-tech can't play a role), because high-tech takes time (see the slow rollout of ground radar). Process improvement can be immediate if we focus on the right things.

We need to do better.

Thanks for reading! I'd love to hear your thoughts. Please scroll down to post a comment. Learn more about Mark Graban's speaking, writing, and consulting.


Mark Graban's passion is creating a better, safer, more cost-effective healthcare system for patients and better workplaces for all. Mark is a consultant, author, and speaker in the "Lean healthcare" methodology. He is author of the Shingo Award-winning books Lean Hospitals and Healthcare Kaizen, as well as The Executive Guide to Healthcare Kaizen. His most recent project is a book titled Practicing Lean that benefits the Louise H. Batz Patient Safety Foundation, where Mark is a board member. Mark is also the VP of Improvement & Innovation Services for the technology company KaiNexus.

Posted in: Blog

4 Comments on "The System Failed, 49 Killed"


Inbound Links

  1. Many Causes, Which is "Root?" — Lean Blog | July 27, 2011
  1. Sam says:

    Aviation systems typically get put into place after enough people have died to justify the expense. That sounds pretty backward but if people aren’t dying, then the powers that be assume the existing systems are safe enough and any expenditure is a waste. Of course once 49 people are dead, suddenly it doesn’t seem like it would’ve been such a waste, eh?

    The ironic thing here is that any airport-based system that could've prevented this human error would be expensive enough that you'd find it only at a major airport; however, many of the likely factors in this accident (poor lighting & markings, confusing layout, short runway, undermanned ATC tower) are unique to small airports that couldn't afford such a system. On the other hand, more affordable solutions – more visible paint, markings, lighting – could be cheaply deployed at small airports and would've had a very good chance of preventing the pilots from making their fatal mistake.

    The airlines have become extremely safe in this country over the course of the last 20 years. This is the result of a two-pronged approach: deploying technology when the advantages outweigh the significant costs (GPWS and TCAS are examples) and a focus on human factors training to help eliminate human error in other cases. Each effort has yielded great dividends at a cost that has allowed air travel to remain affordable.

  2. Lean på Dansk says:

    We all know the curve goes down before it goes up… most stockholders won't tolerate the dip.

    The man is blamed because he is no longer around to commit the same error.

    The system still exists. If the system were blamed, this would cause panic… or at the least a lack of confidence in the system.

    Fewer sales vs. a continued risk of failure.

  3. Joe Wilson says:

    The thing that hit me watching this on the news on Sunday as I prepared to leave an airport similar to the one in Lexington was the inevitability of it all. Most of these systems are based on humans not making errors and human beings make errors. This means that not only are these failures likely, they are a certainty. No wonder they always blame it on an individual…it’s a lot easier than realizing that it was going to happen eventually and it’s going to happen again.

Post a Comment