Consulting Case Studies Need Statistical Validity
One of the books that's had the biggest impact on my work is Donald Wheeler's Understanding Variation: The Key to Managing Chaos. It's the best guide for applying simple and powerful “statistical process control” (or SPC) practices to management data and decision making.
I recently read a consulting case study, “Harris Methodist saves $648,695 through SIPOC process changes.”
That's a suspiciously precise number, but that's not my real beef.
The case study is a good one, highlighting how a Fort Worth, TX hospital improved its E.D. processes, leading to a shorter length of stay (improved patient flow and improved capacity).
The results claimed:
- The total Triage cycle time, from patient arrival to bed placement, was reduced by 23 minutes.
- The total ED-IP cycle time, from patient admit order to IP arrival, was reduced by 33 minutes.
- The average LOS decreased from 97 to 61 minutes.
- The average patient satisfaction increased from 87.9% to 89%.
One recent observation of mine is that consulting (or hospital) case studies should do more than report before/after. That's just two data points, and two data points don't make a trend, as they say.
A simple before/after comparison doesn't have a time scale. It also leaves open the question of sustainability: kaizen events are notorious for a quick burst of excitement and improvement, but then what happens? “How do we sustain improvements?” is one of the most common questions in the Lean world. Virginia Mason once reported backsliding in 60% of their week-long Rapid Process Improvement Workshops. That's not a good sustainment rate.
Updated: The reference for the Virginia Mason number comes from this article: Seeking Perfection in Health Care: Applying the Toyota Production System to Medicine (to be fair, it cites 2004 numbers; they have undoubtedly gotten better since). They said:
During an assessment in late 2004 that reviewed and remeasured all improvement efforts to date, we were only holding the gains on about 40 percent of those changes, partially because it is easy to slip back into old ways of doing things if there is a lack of accountability and follow-through.
Back to the main story:
One way that case study writers can show sustainment is to show a time series chart or, better yet, a control chart (as Dr. Wheeler demonstrates in his book).
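As a rough illustration of what Wheeler's XmR (individuals) chart involves, here is a minimal sketch of how the center line and natural process limits are computed from a series of monthly values. The satisfaction numbers below are made up for illustration; they are not the hospital's actual data.

```python
# Sketch: XmR (individuals) chart limits, per Wheeler's method.
# The data here is hypothetical monthly patient-satisfaction percentages.
satisfaction = [86.5, 87.2, 85.9, 88.1, 86.0, 87.5, 85.8, 86.9,
                88.4, 87.0, 86.2, 87.8]

# Center line: the mean of the individual values
mean = sum(satisfaction) / len(satisfaction)

# Average moving range: mean of absolute differences between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(satisfaction, satisfaction[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: center line +/- 2.66 times the average moving range
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

print(f"Center line: {mean:.1f}%")
print(f"Natural process limits: {lower:.1f}% to {upper:.1f}%")
```

The point of the limits is that month-to-month values bouncing around inside them are routine “common cause” variation; only points outside the limits (or specific run patterns, discussed below) signal a real change.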
In the case I've linked to, the final statement is:
The average patient satisfaction increased from 87.9% to 89%.
Is that at all sustained or statistically significant? We don't know with just two data points.
Thankfully, the linked case study does give us a time series chart, shown below, so cheers to them for providing more than just the before and after.
Those of you who are familiar with control charts know that the change pictured above isn't really a statistically significant improvement. Ironically, they might have shown us the chart to try to bolster their case.
If Harris Methodist had a “stable system” before the change (which seems to have taken place in November 2008), then the next 12 data points show what appears to still be a stable system around the mean of 86.8%.
One of the SPC rules (the “Western Electric Rules”) for detecting a statistical shift is having EIGHT consecutive data points above or below the mean. In the chart above, we only have FOUR. It's not statistically significant; it's not a process shift.
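That run test is simple enough to sketch in code. The `has_run_of_eight` helper and both data series below are hypothetical, illustrating the rule rather than the hospital's actual chart:

```python
# Sketch of the run-of-eight rule: flag a shift only when eight
# consecutive points fall on the same side of the center line.
def has_run_of_eight(points, center, run_length=8):
    run = 0
    last_side = 0
    for p in points:
        # +1 above the center line, -1 below, 0 exactly on it
        side = 1 if p > center else (-1 if p < center else 0)
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            return True
    return False

center = 86.8

# Only the last four points are above the mean: no signal.
four_above = [86.5, 85.9, 86.2, 86.0, 86.6, 87.9, 88.6, 88.2, 89.0]

# Eight consecutive points above the mean: a genuine shift signal.
shifted = [86.5, 85.9, 86.2, 87.9, 88.6, 88.2, 89.0, 87.5, 88.8, 87.3, 88.1]

print(has_run_of_eight(four_above, center))  # False
print(has_run_of_eight(shifted, center))     # True
```

Until a run like the second series appears, the honest reading is that the process hasn't demonstrably changed.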
It would actually be more accurate, statistically, to say there was no improvement. The last four data points could be “common cause” variation (a.k.a. noise, or statistical chance). They may have declared victory prematurely, as next month's data would be just as likely to be 85% as 88%, meaning the process hasn't necessarily gotten better.
It would be nice if we could have some agreement and standards about how to represent data like this in case studies, but that's not likely to happen.
I'm not saying the hospital or the consultant didn't make things better. I'm just saying that the above chart doesn't, on its own, prove that case.
Some of this might seem sort of esoteric if you haven't read Wheeler's book. Go get the book, or you can read articles on his website. I tried to cover this topic a bit in my book, Lean Hospitals: Improving Quality, Patient Safety, and Employee Engagement, as well.
Have you been able to apply SPC principles to your own management work? Have you been able to use SPC to help gauge if you really had a statistically significant process shift? If this doesn't make sense, ask questions and I'll do my best to respond in comments.
In the near future, I'll share some data from a former client of mine showing three years of sustainment (actually a few positive process shifts, improvements that are statistically significant).