Visualizing Metrics & Managing Improvement Properly: Webinar Registration Numbers

We've hosted a few really great webinars in the past few months at KaiNexus, including presentations by leaders from Rona Consulting Group, Catalysis, and Big Change Agency on themes of improvement, leadership and culture.

Since I manage these webinars, one of the metrics I monitor is the number of people who register for each webinar. That number might be a bit of a “vanity metric” since KaiNexus doesn't directly make money on these webinars (they are free). We're trying to provide content that's helpful for people and we also use the webinars as a way of introducing people to KaiNexus or strengthening our relationship with them. More meaningful metrics to the company are the number of leads and the number of new customers we sign up.

That said, I look at the “number of registrations” as a bit of a “process metric” that (we think) leads to some results at the end. Beyond sharing great content and being a part of the continuous improvement community, there are also intangible, indirect, and less measurable long-term benefits to doing these webinars.

Each time we do a webinar, I enter the number into our KaiNexus software (which we use to run and improve our own business, of course).

A table of numbers never provides much insight, whether it's in KaiNexus, Excel, or Minitab. It's really difficult to make sense of trends. It's very hard to distinguish between “signal” and “noise” in the data.

As I've blogged about before, simple run charts or, better yet, “process behavior charts” are valuable methods we can use to see more in our data, which then allows us to manage more effectively.

Before I show what that kind of chart looks like, I want to compare it to the type of chart I often see in hospitals (and that I presume is used in other organizations, too).

This chart method isn't as good as process behavior charts for a number of reasons.

For one, it utilizes what Excel calls a “column chart” (commonly called a “bar chart”). I've blogged about this before. Column charts should be used for comparisons at a single point in time. Time series data are better represented by a “run chart” (or “line chart”) because a run chart is easier to interpret visually (and less misleading):

Line Charts vs. Column Charts for Metrics & Lean Daily Management
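To see the difference side by side, here's a minimal Python/matplotlib sketch (with made-up registration numbers, since the real data aren't reproduced here) that plots the same series both ways:

```python
import matplotlib.pyplot as plt

# Hypothetical monthly registration counts (illustrative only)
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
registrations = [230, 280, 255, 310, 240, 265]

fig, (ax_bar, ax_line) = plt.subplots(1, 2, figsize=(10, 4))

# Column chart: each month reads as a separate, standalone comparison
ax_bar.bar(months, registrations)
ax_bar.set_title("Column chart")

# Run chart: connecting the points makes the time-series pattern visible
ax_line.plot(months, registrations, marker="o")
ax_line.set_title("Run chart")

plt.tight_layout()
plt.show()
```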

I'm not sure what this chart is called, but another problem is that it shows a CUMULATIVE measure throughout the year. In our case, we want webinar registrations to increase over time. Charting a cumulative metric (registrations so far this year) can be misleading in that it always creates a chart that looks good, going “up and to the right” as Eric Ries and the Lean Startup people say.

I saw a hospital that had charted medication errors on this chart. If we're trying to drive that number DOWN, I think a cumulative chart is much harder to interpret than just charting the number each month. Why make people do mental gymnastics when looking at a chart? Just plot the number.
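To make that concrete, here's a small sketch (with hypothetical medication error counts) showing how to convert a cumulative year-to-date series back into the per-month numbers that are actually worth plotting:

```python
# Hypothetical cumulative year-to-date medication error counts (illustrative only)
cumulative = [4, 9, 12, 18, 21, 27, 30, 36, 39, 44, 47, 52]

# Recover the per-month values by subtracting each point from the one before it.
# The cumulative series can only go up; the monthly series shows the real pattern.
monthly = [cumulative[0]] + [
    cumulative[i] - cumulative[i - 1] for i in range(1, len(cumulative))
]

print(monthly)  # [4, 5, 3, 6, 3, 6, 3, 6, 3, 5, 3, 5]
```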

This chart style also tries to compare to LAST YEAR'S performance by overlaying it as a run chart. I guess this makes it easy to compare this July's performance to last July's, but that's a two-data-point comparison. We see more and learn more when we put all 24 points into a single run chart. I think that's more helpful.

Charts like this often have a “target” line drawn on them.

So, this chart format (the one I do NOT like) would look like this:

We sometimes do more than one webinar per month, so instead of a monthly metric, I've basically charted the last 12 webinars and compared them to the previous 12.

A chart like this shows we were ahead of the previous pace for a while, then fell behind, and then caught up. The most recent 12 webinars had a slightly higher registration total than the previous 12, but not by much. And we're behind the “goal” number. That wasn't really a goal I had, but this charting methodology almost always has one, so we're also comparing performance to the goal.

Comparing our performance to the previous year is a two-data-point comparison, just as comparing to a target is. Making our status “red or green” doesn't do much to answer the question of “are we improving?”

There are two questions we should ask:

  1. Are we meeting our goal?  — Keeping in mind we should try to avoid the dysfunctions of arbitrary goals
  2. Are we improving?  — Or are we getting worse? Or is performance just fluctuating?

A process behavior chart (with a goal or target line added) can answer those questions much more clearly.

KaiNexus customers can create run charts or control charts in our software. The green line is the average and the red line is the “upper process behavior limit.” 
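For readers who want to see the arithmetic behind those lines, here's a minimal Python sketch of the standard XmR (individuals) process behavior chart calculation, in the style Donald Wheeler describes; the registration numbers here are made up for illustration:

```python
import statistics

# Hypothetical registration counts for the last 12 webinars (illustrative only)
registrations = [230, 280, 255, 310, 240, 265, 295, 250, 275, 260, 300, 245]

# Centerline: the average of the individual values
average = statistics.mean(registrations)

# Average moving range: mean absolute difference between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(registrations, registrations[1:])]
avg_moving_range = statistics.mean(moving_ranges)

# Natural process limits: average +/- 2.66 times the average moving range
upper_limit = average + 2.66 * avg_moving_range
lower_limit = average - 2.66 * avg_moving_range

print(f"average={average:.1f}, upper={upper_limit:.1f}, lower={lower_limit:.1f}")
# average=267.1, upper=372.3, lower=161.9
```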

The chart more clearly shows that the webinars were less popular in early 2015 (an average of about 175 registrations) before the numbers jumped up and have remained pretty stable since then. A process behavior chart can also show more than 24 data points, which is the most you'd ever see in that annual / cumulative chart format. We can update process behavior charts to, for example, show only the 30 most recent data points. Too many organizations will start January 2018 with a lot of blank charts, losing the context of past data. That's a blog post I should write in January.

If our goal was 250 registrations per webinar, the chart shows more clearly that we're probably going to hit that about half of the time, given the “common cause variation” that creates noise in the data. It's hard to know exactly why some webinars attract a bigger audience than others.

Causes of common cause variation could include:

  • The title and whether it's appealing
  • The topic and whether it's appealing
  • The breadth of the webinar (does it seem geared toward just a single industry?)
  • The day of the week (we generally do Tuesday, Wednesday, or Thursday)
  • The time of day (we generally do 1 pm ET, but it sometimes varies)
  • The profile and fame of the presenter
  • Whether the presenter is an outside guest or somebody from the KaiNexus team

Edit: As I discuss in the comments… “common cause variation” is always there. The different causes of variation interact with each other. We don't know the exact effect of each common cause, but it's there. A signal in the data suggests there is a “special cause” that is new or unusual that is affecting performance. 

What the process behavior chart tells me is:

  • There's some natural level of variation that occurs – some webinars are just more popular than others.
  • The webinars aren't getting more popular… but they're not getting less popular, either
  • We're sometimes hitting the goal and sometimes not

In a situation like this, reactive questions of “what happened last month?” aren't necessarily helpful. If we had a “signal” (a webinar with registrations greater than the upper limit or eight in a row above the average), then we should ask “what was different?” and “what could we repeat in the future?”
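Those two rules are easy to automate. A sketch, reusing the hypothetical numbers and limits from the XmR example above:

```python
def find_signals(values, average, upper_limit, run_length=8):
    """Flag points above the upper limit and runs of 8 consecutive points above the average."""
    signals = []
    run = 0
    for i, value in enumerate(values):
        if value > upper_limit:
            signals.append((i, "point above the upper limit"))
        run = run + 1 if value > average else 0
        if run == run_length:
            signals.append((i, f"{run_length} consecutive points above the average"))
    return signals

# Hypothetical data and limits from the XmR sketch above
registrations = [230, 280, 255, 310, 240, 265, 295, 250, 275, 260, 300, 245]
print(find_signals(registrations, average=267.1, upper_limit=372.3))
# [] -- no signals: nothing but common cause variation here
```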

If we had, for example, Malcolm Gladwell do a webinar, we would probably get an enormous audience. But the “special cause” of that “signal” in the data would be his fame. That wouldn't be a meaningful shift in our normal process or system.

Within the KaiNexus team, we ask more systematic questions about how to improve the system. How can we improve the AVERAGE performance of the system? How can we shift the mean upward?

Of course, this is all discussion about a vanity metric. We also have discussions about how to generate more leads and more customers.

What do you think of that combined column/line cumulative chart format? Do you see that being used in your organization? Does it help people improve?



4 COMMENTS

  1. LinkedIn Comments:

    Alan Wikler:

I have seen cumulative charts (both bar and line charts) used when the person creating them wants to give a false impression of improvement, and I invariably end up subtracting each cumulative data point from the last to get the actual trending data.

I’m a bit confused, Mark, about why you bothered to list seven possible causes of the common cause variation in your “vanity” data. It’s all noise, no signal anyway, right? So why bother? The only things that matter are interpreting special cause variation if it happens, and raising your average.

    My reply:

    There are many underlying causes of common cause variation. They all interact and are always there. There’s no single “special cause.”

  2. LinkedIn comment by Keith Dorricott:

Another great post, Mark Graban. I have certainly seen the cumulative chart used often – but the control chart less so. An advantage of the cumulative chart is that it shows you whether you are likely to meet some annual target (despite common cause variation). I think the control chart is sometimes difficult to sell because it looks to managers as if there is nothing they can do – the limits are so wide. I wonder if part of the problem is a focus on a chart going “out of control” – when it is “in control,” you can work on trying to improve the mean or reduce the variability.

    My reply:

An arbitrary annual target is the sum of arbitrary monthly targets. I guess that’s somewhat useful information in most organizations.

    But, having a target line each month and comparing that to actual performance can be done on a run chart or a process behavior chart.

    My biggest issue with the cumulative chart is that it makes it hard to answer the question of “are we improving?” I’d argue that’s a more important question than “are we hitting the target?”

There’s a misunderstanding about control charts or process behavior charts — the idea that you ONLY react to special cause variation. Yes, you should investigate special causes, but when you have a system that’s nothing but common cause variation, the challenge is understanding and improving the system so that you improve the average performance and/or reduce variation (which narrows the limits that predict future performance).

    The managers are wrong in thinking there’s nothing you can do to improve a stable system with wide limits. There’s a lot you can do. But that involves systemic problem solving and improvement that’s not likely to be found when reacting to a single data point.
