Of course patient satisfaction is important for hospitals to focus on. This is often a vague and uncertain thing to measure, but as Dr. Deming said:
“The most important figures that one needs for management are unknown or unknowable (Lloyd S. Nelson, director of statistical methods for the Nashua Corporation), but successful management must nevertheless take account of them.”
When hospitals do attempt to put a number to patient satisfaction, there are some things that can be done to make those numbers more meaningful to patients and staff members alike.
I was recently in a hospital that posted a number of performance measures, a balanced scorecard of sorts. One measure was the percentage of surveyed patients who said they would recommend the hospital to others.
That section of the board said, basically:
Actual = 84.1%
Target = 86.5%
The measure was color-coded yellow, indicating that the actual percentage was lower than the target.
So what does this mean? For one, if the target had been set at 83.9%, things would have been green. So what does that mean to patients or staff?
Regardless of the target and however it was set, the actual is what it is.
As a patient, how do we know if 84.1% is good or not? How does it compare to other hospitals? Do the staff members even know if this is “good” or not?
Many hospitals set a goal of being in the 90th percentile of certain measures, including patient satisfaction. Why not aim for perfection? How do leaders decide that it's OK for 13.5% of patients to NOT recommend the hospital? That goal seems like a completely arbitrary and meaningless number.
Thinking back to my days in manufacturing, companies often confused “specification limits” with the “voice of the process.” Spec limits might be set based on real customer requirements, or they might be completely arbitrary. A key lesson from Dr. Deming and Dr. Donald Wheeler is that leaders need to react to the voice of the process and, as Dr. Wheeler points out so eloquently in his book Understanding Variation: The Key to Managing Chaos, look at time series data or a statistical process control (SPC) chart.
One of the flaws in posting a single number (84.1%) and comparing that to the goal (86.5%) is that we have no context of trends over time.
Some organizations might compare this most recent period to last quarter or last year. Again, Wheeler points out that a comparison of two data points isn't statistically significant, as two data points don't make a trend. If the number is higher (or lower) than last year, we don't necessarily know what action to take. Organizations often over-react to every up and down in a performance measure, even if that up or down is noise in the system.
Run Charts Are Better Than Single Data Points
If a hospital presented the “would you recommend?” data as a time series chart, patients and staff could tell much more. For example, monthly survey results might be plotted as a run chart:
As Wheeler points out, we can get far more information (and make better management decisions) based on a run chart and time series data – far better than that simple comparison to a target or a comparison to last year.
Let's look first at a simple run chart (with made-up data). Is patient satisfaction getting worse? At first glance, we might say “yes, there is a clear downward trend.” It used to be in the 90%+ range; now it's only 84%. It's fallen three months in a row. Time to take action! Send out a memo, meet with the leadership team – kick some butt and make things better.
Some analysts might add a linear trend line to the chart – it's statistical and it's easy to do with a few clicks in Excel. That linear trend is clearly downward – things must be getting worse! Or are they? Call a committee or task force together, publish an exhortation to improve in the employee newsletter. Give an inspirational speech at an all hands meeting. Take action!
The punchline behind the data I've presented above is that it's randomly generated from a normal distribution around an average of about 89%.
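To show how convincing pure noise can look, here's a minimal Python sketch. The seed, the 89% mean, and the standard deviation of 3 points are my own illustrative assumptions (not the actual survey data); it generates random monthly scores and fits the same kind of least-squares trend line Excel's “Add Trendline” would:

```python
import random

random.seed(42)

# Hypothetical monthly "would recommend" percentages: pure noise,
# normally distributed around a stable mean of 89% (sigma assumed ~3 points).
months = 24
scores = [round(random.gauss(89, 3), 1) for _ in range(months)]

# Fit a least-squares trend line, as Excel's "Add Trendline" would.
n = len(scores)
x_mean = (n - 1) / 2
y_mean = sum(scores) / n
slope = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(scores)) / \
        sum((i - x_mean) ** 2 for i in range(n))

print(f"Apparent trend: {slope:+.3f} points/month on purely random data")
```

Run it with a few different seeds and you'll often see an apparently rising or falling trend line – even though nothing in the underlying “process” changed at all.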
As Deming and Wheeler point out, we have to, as managers, learn how to separate “signal” from “noise.” The mean and the data points are the “voice of the process.” This data shows what our current process and system is capable of. If management sets a “target” of 95%, that's meaningless – the system is not capable of hitting that target on an ongoing basis. SPC teachers would tell us to not even put the target on the chart – don't confuse a “specification limit” with the voice of the process.
The target is just a target. If we could just boost performance by setting a target and giving incentives, everything in healthcare would be fixed already (and in business, for that matter).
If we have a stable patient care and service process, a stable hospital where the team and processes and environment are roughly the same each month, there's going to be common cause variation around this mean or average percentage of patients who would recommend.
Some months, we might have 94% patient satisfaction and some months we might have 84% patient satisfaction – and it doesn't mean anything is necessarily BETTER or WORSE in the system. Any of these fluctuations might just be noise – meaning there's nothing to react to. Dr. Deming warned against “tampering” with a stable system, meaning that if we react to every little up and down in the patient satisfaction scores, we are bound to make variation in that number WORSE than if we just left things alone.
I'm not saying “don't try to improve” – what I'm saying (as I learned from reading Deming and Wheeler) is that we can't (shouldn't) overreact and make false judgments based on the data.
We need to use SPC (statistical process control) thinking.
If we look at the data with a simple mean line drawn, it looks like this:
It becomes more visually clear that basically half the data points are above average and half are below average. This is completely predictable from a statistical standpoint – again, it's just noise.
At a conference last week, I saw a presenter show some data and he said, a bit exasperated, “The numbers keep going up and down.” The common cause variation kept the data moving between about 85% and 95% – just like my made up data set here.
Using the standard SPC rules, we CAN make certain determinations that things have gotten worse or better, statistically. If we have eight data points in a row ABOVE the old average, we can say that's not random chance. Something has changed in the system that has boosted patient satisfaction in a meaningful way. We call this a “special cause.” As leaders, we need to then go understand WHAT changed – how do we capture that learning as an organization? If we have eight data points in a row BELOW the mean, we can say things have gotten worse – again, why and what can we do about it?
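The eight-in-a-row rule is simple enough to check in code. Here's a small Python sketch – the `runs_beyond` helper and the sample numbers are hypothetical, just to illustrate the rule:

```python
def runs_beyond(scores, mean, run_length=8):
    """Return (start_index, 'above'/'below') for each run of `run_length`
    consecutive points on one side of the mean -- a likely special cause."""
    signals = []
    run, side = 0, None
    for i, s in enumerate(scores):
        cur = 'above' if s > mean else 'below' if s < mean else None
        if cur is not None and cur == side:
            run += 1
        else:
            run, side = (1 if cur else 0), cur
        if run == run_length:
            signals.append((i - run_length + 1, side))
    return signals

# Eight straight months below a mean of 89 -- a signal worth investigating:
data = [90, 88, 90] + [85] * 8 + [91]
print(runs_beyond(data, 89))  # [(3, 'below')]
```

A point sitting exactly on the mean breaks the run, which matches the common convention of not counting such points toward the rule.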
Using SPC rules like this (see a complete list here) can help management avoid reacting to noise and avoid tampering. It allows us to focus our efforts to work on important issues, rather than just chasing normal variation.
Now let's look at an SPC chart that includes “+/- 3 sigma” upper and lower control limits:
The calculations of the control limits tell us that any single month between about 80% and 98% is likely to be just noise, or common cause variation. A single month of 96% is bound to happen, by chance, statistically. We could throw a huge party, give bonuses to employees and, guess what, the next month things will tend to regress to the mean. Satisfaction might then be just 90%. That's still above our “target” (which, to me, is pretty meaningless) and still better than the mean. Things aren't getting worse – they are just fluctuating.
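For an individuals (XmR) chart like this, Wheeler calculates the limits as the mean plus or minus 2.66 times the average moving range. A minimal Python sketch – the `xmr_limits` name and the sample scores are my own, for illustration:

```python
def xmr_limits(scores):
    """Individuals (XmR) chart limits, per Wheeler: mean +/- 2.66 * average
    moving range (2.66 = 3 / d2, with d2 = 1.128 for subgroups of size 2)."""
    mean = sum(scores) / len(scores)
    moving_ranges = [abs(a - b) for a, b in zip(scores, scores[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Illustrative monthly satisfaction percentages:
lcl, mean, ucl = xmr_limits([89, 91, 89, 91])
print(f"LCL={lcl:.2f}  mean={mean:.2f}  UCL={ucl:.2f}")
```

Note that the limits come from the moving ranges – the month-to-month variation the process itself exhibits – not from a target anyone set.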
Now, there are currently four consecutive points below the mean – if there are four more consecutive data points below the mean, then there would be a statistically significant shift downward – a special cause.
If we have any one single point outside of these control limits, that's statistically significant – there's a special cause to go investigate, whether it's favorable or unfavorable. Again, there are other more detailed SPC rules that you can reference, but understanding just a few basic rules and putting data in a simple run chart can go a long way toward avoiding some very common (and costly) management errors.