“Measures of Success” — The Need for a Measured Response to Our Performance Measures
We often celebrate the action-oriented leader, one who defines targets and drives results by pressuring people to perform better. These leaders might achieve only short-term results, giving the appearance of improvement when the change is not statistically significant.
Effective leaders establish metrics, ideally through a “catch ball” strategy deployment process. They collaboratively set goals for their organization, sponsor projects and initiatives intended to move the needle on results, and evaluate progress over time. My book shows leaders how best to answer the question, “Have we improved in a significant and sustainable way?”
Metrics are important in any modern organization. If your Lean Six Sigma Black Belt is reviewing their recently completed project, they wouldn't get away with saying, “I think quality has improved.” An internal startup team shouldn't say, “It seems like our customers prefer the new design over the last iteration.” A chief nursing officer couldn't just say, “I feel like the number of patient falls is lower now.” They'd be expected to show data, and rightfully so.
Measures matter. The proper analysis of data and performance metrics allows us to separate good changes from bad, progress from stagnation. The methods in my book, Measures of Success (now available), help us determine whether our performance is getting better, getting worse, or remaining essentially unchanged. Having the right set of balanced scorecard metrics is important. But the role of leaders is important, too. How do leaders interpret measures? How do they respond to changes in metrics? How do they know if a change is worth reacting to?
Managers spend a great deal of time assessing and discussing performance – in the boardroom, when visiting sites and teams, and when reviewing projects and improvement initiatives. When presented with data, leaders at all levels feel compelled to react, but not all reactions are helpful for the organization.
Many charts or dashboards present an overly simplistic comparison of numbers: current performance is compared to a target or goal. If current performance for a particular metric is better than its goal, it's labeled “green.” The leader might offer a compliment, or they might ignore that metric to focus on those labeled “red,” whose current performance is worse than the goal.
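That simplistic evaluation boils down to a single comparison. Here's a minimal sketch of the logic (the function name, metric value, and target are invented for illustration):

```python
# The simplistic "red bad, green good" dashboard evaluation:
# one data point compared against one target, with no sense of
# whether the difference is signal or routine variation.
def red_green(current, target, higher_is_better=True):
    good = current >= target if higher_is_better else current <= target
    return "green" if good else "red"

# A single below-target reading gets labeled "red" . . .
print(red_green(current=94.0, target=95.0))  # → red
# . . . even if 94.0 is well within the metric's normal fluctuation.
```

The problem isn't that the comparison is wrong; it's that a lone red or green label says nothing about whether the underlying process has actually changed.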
See one past blog post about this:
I think organizations would benefit from what we might call a more measured approach to managing performance metrics. “Measured” doesn't just mean putting numbers to performance. There's another meaning to explore here. Some synonyms for the word “measured” include “considered,” “consistent,” and “cool.”
We could call these our “three Cs of measured measures.” Or not.
Firstly, a measured response means a proportional, or considered, response. Not every “red” data point merits the same kind of reaction or the same level of attention. As we will learn in my book, it's more effective to react (or not react) based on a more measured view of our performance: instead of asking reactive questions like, “What went wrong last week?” whenever we have a red data point, we might instead ask, “How do we improve our typical performance over time?”
Secondly, a measured view of metrics also means a consistent evaluation of performance. The norm used to be for organizations to look at monthly or quarterly performance, using measures that often lagged reality, which made it more difficult to improve. It was easy to make those simple “red bad, green good” evaluations with data presented at such a slow cadence, but doing so didn't necessarily lead to real improvement.
In an age of “big data,” we might be drowning in numbers and information. As leaders, we might be under pressure to look at performance on a daily (or even hourly) basis. Does it really help to have more frequent “red bad, green good” evaluations? Or does this just cause more wasted motion and more overreaction? Does this help us improve?
Using the methods in my book, we can make sure that we turn a flood of data into a controlled flow of knowledge and insight that allows us to better evaluate performance, focusing our efforts on improvement instead of knee-jerk reactions.
Thirdly, a measured approach to managing metrics means a cool approach that's careful, deliberate, and restrained instead of hot, angry, and overly reactive. Leaders waste a lot of time when they freak out about every downturn or any single data point that's worse than average. Taking a more measured approach doesn't mean that we sit back and passively accept any results. It means we take a more systematic approach to managing and improving the systems and processes that lead to our results.
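One widely used tool for this kind of measured, systematic evaluation is the XmR chart (also called a process behavior chart), which draws limits around a metric's routine variation so we only investigate points that fall outside them. Here's a minimal sketch of the idea; the function names and the patient-falls numbers are invented for illustration, not taken from the book:

```python
# Minimal sketch of XmR ("process behavior") chart limits:
# separate routine variation (noise) from real change (signal).
def xmr_limits(values):
    """Return (mean, lower limit, upper limit) for an XmR chart."""
    mean = sum(values) / len(values)
    # Average moving range between consecutive data points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR scaling constant (3 / 1.128)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def signals(values):
    """Points outside the limits are signals worth investigating."""
    mean, lo, hi = xmr_limits(values)
    return [(i, v) for i, v in enumerate(values) if v < lo or v > hi]

falls = [12, 13, 11, 14, 13, 12, 14, 13, 28, 12]  # made-up monthly counts
print(signals(falls))  # → [(8, 28)]: only the spike is a signal
```

With this lens, a point like 13 that happens to be worse than a target of, say, 12 doesn't trigger a hunt for “what went wrong”; only the spike to 28, which falls outside the calculated limits, warrants a cool, deliberate investigation.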
One example is the story I told here:
I hope you're interested in the book and I hope that it proves to be helpful. You can learn more and sign up for email updates here.