Interview with Paul O'Neill: The Right Goals in Healthcare
Many of you may know of Paul O'Neill for the dramatic employee safety improvements at aluminum maker Alcoa and you may know of him from his work in promoting healthcare improvement (he is the “bureaucrat” in “The Nun & the Bureaucrat” book about lean and systems thinking in healthcare). He also worked with Dr. Richard Shannon in the PRHI healthcare quality efforts (read my post from Monday about a separate interview with Dr. Shannon).
I didn't realize that, during his time as Treasury Secretary, the time required to close the nation's financial books was reduced from 5 MONTHS to 3 DAYS. You'll learn that and more in this article: “In a Perfect World.”
Part of O'Neill's method is setting goals at the “theoretical limit,” for example, the goal that nobody should ever get hurt working at Alcoa. I've worked in many organizations where employee or patient safety goals were set at something greater than zero. Do they have a goal to hurt people?? Well, not exactly — they've set a goal that's “realistic” based on past performance (normally a small percentage improvement).
These goals force you to radically rethink how things are done. You might wonder how I reconcile saying you need patient safety goals of zero with my writing elsewhere about how arbitrary targets are dysfunctional. Some would say a goal of zero, one that you can't hit this year, would be demoralizing. But I don't think it has to be demoralizing if you have the right culture.
A reader recently asked me, via email, what the difference is between traditional “Management by Objectives” (MBO) and lean goal setting. I think the main difference is how management reacts when you don't hit the goal. The traditional, dysfunctional MBO approach says “hit the goal or else” (or else you get punished or don't get your bonus). A lean leader sees you haven't hit the goal and works with you to identify causes for the gap – it's mentoring and real leadership, not just punishment.
I agree with O'Neill that the only morally acceptable “goals” are zero. Now, we might have a statistically stable and “in control” system that predicts that, based on past performance, we can expect to have 2 employee injuries per month. That doesn't mean we're satisfied with a stable process that's harming people, we work to improve it. And that goal doesn't mean we fire leaders when the process is improved to where there's only 1 injury per month. We keep working towards zero — maybe it's better to call that a “vision” than a target.
In healthcare settings, O'Neill thinks the same goal setting can make a huge difference – leading to cost reductions of 50%. What's statistically expected doesn't have to be accepted, as O'Neill says:
At the microlevel, at Allegheny General Hospital and other places we've worked with, we've demonstrated that it's possible to do this and, in effect, to break the conceit that it's a God-given fact that 2% of the people who go through intensive care units are going to get an infection.
O'Neill is also right when he talks about the weaknesses of benchmarking within an industry where everyone has what I would describe as “roughly the same amount of waste and dysfunction” (my words, not his). O'Neill says:
“The convention, for example, in health and medical care is to have measures across the country and measures for individual institutions to find out how they compare to the national averages. It's very routine to find institutions that say, “We're better than the national average, and it's not possible to be better than we are.” So the establishment of the idea of national norms is the enemy of continuous improvement.”
Back to my earlier point about goals that seem unattainable, O'Neill says:
It's particularly a phenomenon in the United States, I think, that people have a mind-set of, “We don't want to set goals that we're not sure we can attain.” So if you set goals that are referenced against some national level of performance, in effect you've set yourself a barrier that's not too difficult to get over, and then can declare yourself superior.
Maybe hospitals can agree – let's rely less on benchmarking and focus more on reaching the theoretical limit and reaching perfectly “ideal care.” We might not get there tomorrow, but keep working at it – you'll get better.
It's a long, interesting interview. O'Neill also touches on core issues of quality data transparency, why healthcare organizations are often afraid to share data and lessons learned — and he has a really radical idea (something I haven't thought through) of getting rid of medical malpractice.
That's very radical – interesting to think about. O'Neill says (and I believe) that people in healthcare very, very rarely harm a patient intentionally (it's problems with the system, not bad intent).
When someone is injured, we're going to create an expectation that the injury or the length of hospital stay will be recorded in cyberspace within 24 hours so we can do a root cause analysis, and everyone in the world can learn from it in a short cycle of time. And in exchange for that, we're going to have an economic arbitration process so that people who are inadvertently injured will be compensated to the extent of their economic loss, and we'll pay for it out of general revenues of the federal government, because that's the broadest base for tax support.
In exchange for that, we expect the people in the delivery system to report without fail, at a huge penalty if they fail to report, with an expectation that the professional societies will take a much more aggressive role than they typically have in disciplining and withdrawing privileges from people who have repeatedly failed to deliver the expected level of performance.
Would that system lead to better quality than our current system? I doubt we'll ever get to do more than a thought exercise – there's no real opportunity for PDCA to see whether O'Neill's proposal would work or not.