How to Cut Through Workplace Chaos: Nelson Repenning on Lean, Flow & Dynamic Work Design




My guest for Episode #538 of the Lean Blog Interviews Podcast is Nelson Repenning, Faculty Director of the MIT Leadership Center and co-creator of Dynamic Work Design. Nelson describes himself as an “organizational engineer,” helping leaders redesign the routines and decisions that determine how work really gets done. He joins host Mark Graban to discuss his new book, There's Got to Be a Better Way: How to Deliver Results and Get Rid of the Stuff That Gets in the Way of Real Work, co-authored with Donald Kieffer.

In this conversation, Nelson shares insights drawn from his decades of experience studying system dynamics, Lean thinking, and organizational learning. He explains how leaders often fall into the “capability trap” — spending their days firefighting immediate issues instead of improving the underlying system. From the arms race of hospital alarms to the collapse of fast-growing companies, he connects examples from healthcare, manufacturing, and technology to show why even good intentions can create destructive feedback loops if we don't understand the system.

Mark and Nelson also explore how Dynamic Work Design translates Lean principles like flow, visualization, and problem-solving into knowledge work. They discuss the five core principles — including “Structure for Discovery” and “Connect the Human Chain” — that help organizations make work visible, surface problems early, and evolve systems continuously. Listeners will learn how to move from firefighting to focus, and from chaos to sustainable improvement.

Questions, Notes, and Highlights:

  • How did you first get involved in the field of system dynamics at MIT?
  • For those unfamiliar, what exactly is system dynamics — and how does it apply to management and organizations?
  • Why hasn't system dynamics had the impact on practice that it deserves?
  • What lessons can we learn from the classic examples you've taught, like the Mississippi River levee arms race or the “People Express” airline simulation?
  • How do those feedback loops and unintended consequences show up in today's industries, like healthcare or tech?
  • What led you and Donald Kieffer to write There's Got to Be a Better Way? What core problems were you trying to address?
  • Can you explain the “capability trap” and how firefighting keeps organizations from improving?
  • Why is it so hard for people to commit to prevention and long-term improvement when firefighting feels more rewarding?
  • How does Dynamic Work Design help leaders “structure for discovery” and surface problems earlier?
  • What role does psychological safety play in making it safe to raise problems?
  • How do you define “Dynamic Work Design,” and what makes it different from traditional management systems?
  • Why is it important for leaders to “go see the work” firsthand?
  • Can you walk us through the five principles of Dynamic Work Design — and how they connect to Lean?
  • What does “Connect the Human Chain” mean, and why do so many organizations get communication wrong?
  • Can you share an example where these principles led to measurable improvement — such as the hospital case you mentioned?
  • What can leaders learn from Toyota and other high-reliability organizations about making improvement continuous rather than episodic?
  • How do leaders shift from reactive, one-off change programs to daily, ongoing learning?
  • What message do you hope managers take away from There's Got to Be a Better Way?

This podcast is part of the #LeanCommunicators network



Full Video of the Episode:


Thanks for listening or watching!



Automated Transcript (Not Guaranteed to Be Defect-Free)

Mark Graban: Hi, welcome to the podcast. I'm Mark Graban. Our guest today is Nelson Repenning. He is Faculty Director of the MIT Leadership Center and co-creator of the Dynamic Work Design approach that we're going to be talking about today. Nelson describes himself as an organizational engineer, and he helps leaders redesign the daily routines and decisions that shape how work really gets done so that innovation and transformation really stick. He has decades of experience working across industries, including biotech and oil and gas. Teaching at the MIT Sloan School of Management, he's shown so many people how to connect strategy to action and to make knowledge work as intentional as manufacturing. His new book, and I have a copy here that I'll hold up for video, is called There's Got to Be a Better Way. Co-authored with Donald Kieffer, it lays out five principles to cut through the chaos of modern work and restore focus, flow, and sustainable productivity. So, a lot to talk about today, Lean and Toyota and otherwise, but Nelson, thanks for joining us here today. How are you?

Nelson Repenning: My pleasure, Mark. Great. Thanks for having me.

Mark Graban: And I hope I got your co-author's name correct, Kieffer, actually. I made doubly sure on your name, even though I thought I knew it after taking your class on system dynamics over 25 years ago. But I'm happy to reconnect with you, and maybe we'll get Donald Kieffer here on the podcast sometime. Before we talk about the book, I think there's a great opportunity here. I normally ask people their Lean origin story, but I'd like for you to give us an overview of system dynamics for people who aren't familiar with that field and that discipline.

Nelson Repenning: Yeah, happy to. So, I'm an MIT lifer. I've been here since my 23rd birthday, which means I think I'm closing in on 35 years if you count my time as a PhD student. To give my origin story (origin myth might be more accurate at this point): when I was finishing undergrad, I stayed at my undergrad institution for a year as a glorified teaching assistant. And it was that year that really sealed the deal for me. I just enjoyed working with faculty, and the lifestyle seemed appealing. For most of that year, I thought I was going to be an economist. But I had one faculty member, a really important mentor, Mark Page, who was a graduate of the system dynamics group at MIT. As I started to get to know him, the work he was doing was just so interesting. I didn't know what a feedback loop was or anything, but I have a little bit of a contrarian or pioneer spirit, I guess, and it just seemed new and much more frontier than what was happening in economics. Honestly, in retrospect, I'm not exactly sure I knew what I was getting into, but I came to MIT and eventually joined the PhD program.

And really, system dynamics has its origins in the feedback and control tools that engineers use in all the systems we work with every day, whether it's our cruise controls or robots in factories or Nest thermostats or really anything like that. The founder of the field, who's an MIT legend, Jay Forrester, had done pioneering work in those physical control systems, and in the late 1950s he had this insight that you could take the same ideas, tools, and metaphors and apply them to the social world. That really was what I walked into, and it was just mind-blowing to me in terms of its power and the new way of thinking it offered. And so I've been in that field ever since. I've been a faculty member in the system dynamics group for almost 30 years and have had many great students, yourself included, in my classes.

And to maybe segue to the book, I think one of the real challenges in the field that we'll probably talk about a little bit is, I think the one thing we can say with a lot of conviction is the person who builds the model, who does the feedback analysis, learns a lot. Transferring that into the heads of people who might be the ones making the decisions turns out to be a really hard problem. And so the field has been perennially frustrated with, we have all these insights and we can see the world differently, and yet it doesn't often change practice. And so that has motivated a lot of the work I've done, particularly in the last 10 or 15 years, about how to repackage it so that we can actually have some impact.

Mark Graban: And has system dynamics as a department or a discipline spread to many other universities? It's so associated with MIT, especially the quantitative modeling that we did in class. But I think also even just the more qualitative business lessons learned from other businesses where the system dynamics weren't well understood. How much has that spread? Or is that a challenge?

Nelson Repenning: I think it has spread and I think it's still a challenge. I think there are probably a dozen schools worldwide that have departments. Often our PhD students will join another more traditional disciplinary department and have system dynamics in their toolkit. So we have several of our graduates in operations management departments or strategy departments. And I think other fields have, I wouldn't say co-opted, but have incorporated some of those styles of models and thinking. But that said, I don't think it's necessarily had the impact that I would hope for, and it definitely hasn't had the impact on practice that I would hope for. So there's definitely still work to be done in terms of getting it as a regular part of the managerial toolkit.

Mark Graban: And I'm reflecting a little bit and thinking about even some of the qualitative lessons learned, whether we were drawing system dynamics diagrams with the stocks and flows and reinforcing loops and balancing loops. I think back to some of the qualitative lessons around not relying on overly simple countermeasures to complex problems and how those can backfire. And I think I remember this from the reading or from your lecture or both: flooding along the Mississippi River, where one community builds a levee and that just forces the problem onto someone else. And as it moves downstream, the flooding actually gets worse. And you have this battle of who has the slightly taller levee on one side of the river. From your reaction, it sounds like I'm remembering that at least vaguely.

Nelson Repenning: Your memory is excellent. That was an example I opened class with for many years. And it's just as a side note for your listeners, it's from this fabulous book called The Control of Nature by John McPhee. And there are three or four examples in the book about how humans over the years have tried to control nature. Nature tends to fight back even stronger. But what happened in the early days of trying to control flooding in the Mississippi River is people would build levees to protect their farms. But of course, the problem was in order to be safe, your levee just had to be a little bit higher than the one across the river. And so it created this kind of arms race, which we would call a positive feedback loop, where I need to build mine a little bit higher, but you build yours a little bit higher. I think the levee system is one of the few human-made objects you can see from space. It's become so vast. And still, to this day, the Mississippi River fights back in sometimes pretty tragic ways.

And we see these dynamics all the time. I have a PhD student right now that's studying how doctors use electronic medical record systems in hospitals, and she's documenting this fascinating arms race between the software developers who are trying to adjust the behavior of the physicians and the physicians who are trying to work around the software so they can get their work done. And what you get is this increasingly intricate software assistant with dropdown menus, and you can't go forward until you click this box, and then the doctor is pushing the buttons without looking at them. And it's really kind of scary actually. And I think you're right. One of the benefits of system dynamics is once you kind of see that sort of arms race archetype, you can spot it in lots of different places in a way that maybe you wouldn't have seen it before if you hadn't been introduced to it.

Mark Graban: Yeah. So, again, thinking along the lines of simplistic countermeasures: there's a risk of medication errors or prescription errors, and the simplistic countermeasure might be, well, we need to add more alarms. And then the pushback from clinicians is, we just mindlessly click through the alarms, because we get a thousand of these alarms that are BS, right up until we click through the one we should have noticed. That sounds like what you're describing.

Nelson Repenning: Exactly right. And I've seen this in the oil and gas industry many times. Big, complicated production units are heavily instrumented, and they alarm for lots of things, but a human operator can only absorb so much. And if the alarm goes off too often, you're exactly right: people get completely habituated to it. "Oh, that's no big deal," until something really bad happens.

Mark Graban: And one of the other lessons that comes back, and this is from playing one of the management simulators around an airline. And I think one of the key lessons there, and anyone who's been involved in entrepreneurship over time recognizes the growth problem of, hooray, we're growing, but how growing too quickly can cause the collapse of a business. Could you talk about that simulator or the simulator examples you've seen?

Nelson Repenning: Yeah, so the simulator was about an airline called People Express, which only people who are probably our age or a little older will even remember. In the early eighties it was, I think, at the time, the fastest-growing publicly traded company in history, and they revolutionized the airline industry. It was just a total sensation. But the lesson, which grew out of a famous model that Jay Forrester had done many years earlier called the Market Growth Model, was that something is going to limit your growth, right? Nothing can grow infinitely. And once you accept that, then you as a manager have some latitude in choosing what things are going to limit your growth. Part of People Express's value proposition was these super low prices, and they didn't want to depart from that. But what happened is they had more demand than they could satisfy, right? They could only bring on so many airplanes and pilots and flight crews. And so basically what limited their growth over time was their crappy service. You can only run the counterfactual as a simulation, but it certainly seems, from all the analyses people have done, that had they just raised their prices a little bit, they would have been able to control that growth a little better and had more resources to plow into more planes and training and so on and so forth. And this cycle repeats itself over and over again. The latest episode of this might be (it's a slightly different dynamic) Peloton, which I think everyone is familiar with. You know, these indoor bikes, and they were so popular, and then it was a mixed blessing. COVID came along and, at least among wealthier suburbanites, it was the Christmas gift of choice for 2020. So their demand just goes through the roof. Hard to ship, delays, and you attract competitors because other people want to come in. And it also turns out that once you have one, you don't really need another one.
And so their stock price soars and then plummets. And I think they've been struggling ever since to get back on the right side of that. And I know they're selling services and so on and so forth. But it's a mixed blessing when your product is really, really hot, particularly if it's a consumer durable.

Mark Graban: It seems to show how much more difficult it is to scale physical products instead of just a monthly subscription service model. But my understanding is that they extrapolated the growth rate from 2019 to 2020 and assumed this wasn't just a one-time, beer-game-style jump in demand (to cite another classic system dynamics exercise people might have been exposed to). And they built a giant factory in Ohio to build more Pelotons, and they ended up not needing that capacity at all. It's not that they stopped selling Pelotons, but the growth just didn't continue. It was bad forecasting, I guess, or some faulty assumptions about continued growth.

Nelson Repenning: I think that's exactly right. And if you don't understand the dynamics of market diffusion and saturation, my guess is they did approximately that: they took a ruler out and said, well, it's going up this much and it's going to keep going. No one really sat down to do the basic mass balance of: how big is this market really going to be, what fraction of it have we already captured, and can we really count on repeat business? And then there was also the math of the revenue you get from selling the hardware versus the subscription fees, and so on and so forth. So yeah, I think you're exactly right. Once you see it, it's a pretty basic dynamic, essentially product diffusion, but I think it's very easy in the moment to get sucked into: wow, it's growing 20% a year, that's going to keep going, and won't that be great for our stock price?

Mark Graban: So what is the current state of the management simulators? Does People Express still get used in teaching? Are there newer ones that are fancier, more updated simulators? Or is it both?

Nelson Repenning: Both. People Express still gets used. It's now a 40-plus-year-old Harvard case, I'm guessing. We buy a couple hundred a year; we're probably the last consumers. But since then, there have been several new simulators. My colleague John Sterman has worked on a lot of them. The one that is probably most pertinent is a really fascinating and, I think, kind of scary simulator he and his team have developed on climate dynamics and the choices we may need to make as a planet in terms of mitigating those. Having played it a few times myself, it's a little bit depressing how hard it is to get the temperature increase back to one or two degrees centigrade. And there are a bunch of others that capture some of these arms-race dynamics that you and I were talking about earlier. But I think the interesting question that remains is this: there's no question that people learn from these things, and I think they enjoy the experience, but how effective are they in getting people to go back and change their behavior? That remains an open research question that a lot of people in the field are thinking very hard about.

Mark Graban: Because it's challenging enough, it seems, to take the relatively simple dynamics of the beer game and apply that to supply chain planning of reducing delays, improving communication and visibility as opposed to the complex dynamics of global climate or some of the older modeling of population growth and food and poverty. And those are really complex systems. It seems like if we struggle to learn–you play the beer game and people might say, yeah, but our business is different. That was fun. But what are we really going to do differently? Is that part of the challenge?

Nelson Repenning: I think so. Just to take a real example: right behind my office at MIT is essentially the Silicon Valley of biotech. Many of the major companies are there, and for the last five or 10 years, real estate prices in Kendall Square had been going through the roof: very valuable, tons of construction, and so on and so forth. And it's exactly like Peloton, right? People are just forecasting these continued growth rates, and then suddenly we get a little excess capacity, the economy slows down, a couple of macroeconomic shocks hit, and suddenly rent rates are plummeting and buildings are being canceled and so on and so forth. And if you've lived there long enough, you've seen this cycle go around several times. So it's one of those where you would've thought we'd have figured this out by now, but it doesn't seem like we have.

Mark Graban: So let's dive into the book. And again, it's There's Got to Be a Better Way: How to Deliver Results and Get Rid of the Stuff That Gets in the Way of Real Work. Maybe first thinking about the problem statements that drove you and Donald to write the book, the problem statements that might resonate with somebody. Who should read this book? What are some of the dysfunctions and problems that you're addressing?

Nelson Repenning: So I think it's probably easiest for me to go historically. The problem that I've worked on, research-wise, for most of my career really boils down to: why don't people use the tools and processes that we would all agree are good things to do? And I imagine it's probably near and dear to both our hearts. When I grew up as a young PhD student, it was the era when the quality tools and Lean-related tools were first getting popular. The Machine That Changed the World had just come out, and it was a pretty hot topic. It was also the time when the Japanese manufacturers were really dominating the space. And so lots of people in operations management and the various quant fields were either inventing new tools to try to improve on what was happening or trying to figure out why these things were so successful compared to what were then the more traditional inventory management and scheduling methods. But one of the things that I was really interested in at the time was that the US manufacturers were really struggling to use these tools. I mean, here we had pretty compelling documentation that there was a better way (not that I had the title back then) to run a factory and to develop new products. And at that time the gap in quality between Asian and US manufacturers was just stunning. So, what was up with that?

And I think the punchline of what came out of that early research is that most of the organizations, at least the ones we were studying, were stuck in a dynamic that metaphorically was called firefighting, and which in the scholarly papers I called the capability trap. It basically boiled down to this: there were so many problems to fix and so much going on that they spent all day with duct tape and safety pins and band-aids trying to fix the problems they already had. And they were really having a hard time making a commitment to the preventive activities that would have kept those problems from occurring in the first place, whether that was investing in basic quality management tools or learning some of the Lean tools or some of the associated basics. I documented that in the manufacturing world, but it was also quite acute in the product development world, which again was kind of a hot topic then. And so the story that emerged from the modeling and the related pieces of work boiled down to a mismatch between how these systems operated and how managers tended to learn. That was probably best captured in a paper that John Sterman and I wrote, the title of which was "Nobody Ever Gets Credit for Fixing Problems That Never Happened."

The basic problem was that if you did all the stuff right that you read in the textbooks, you ran a really smooth factory that didn't have a lot of defects, and there wasn't a lot of chaos and firefighting. That is, of course, what you want, but it didn't give us the kind of visceral sense of progress that I think the human brain tends to really crave. And then, just as an interesting coda to that: the early work I had done was mostly in manufacturing organizations, and that's where I met Don, my co-author, at Harley-Davidson. But a few years later, I started doing work in the oil and gas industry and, in particular, did a couple of really high-profile accident investigations. What was so shocking about that is that the problem I had seen in factories and R&D was multiplied by at least 10 in the high-hazard world, in the sense that what is required to run an oil refinery safely is well known. There are books about it. There are lots of experts. You can get degrees in it. I mean, this is just not rocket science. And yet in every accident investigation or near miss that I worked on, or that anyone else worked on, what you would see is that the root was typically just the failure to do the basics on a regular basis. And you see the same thing in healthcare.

And so then it was really a big question of, okay, nobody wants to blow up a refining unit, so what's going on here? And I think you could really see this problem of: if you do everything right, nothing happens. When I teach this, I often ask people, if you run a perfect safety program, what happens? And the answer is nothing. That's what you want. But the human brain is not often good with systems where you take an action and the outcome is nothing, because, again, it just doesn't give us the feedback from the environment that we crave. And so it really takes some careful work design and, I think, a complementary culture to develop a commitment to doing things right, because it's just not something you're going to learn through trial-and-error, rote learning.

Mark Graban: When you talk about capabilities, one thing I've run across in healthcare is not having the same shared definition of problem-solving. I'll go in and try to teach and coach people root cause analysis and testing countermeasures and things that are taught by Toyota and other companies. And I've had people kind of balk and say, “Well, no, no, we're great. We're already great at problem-solving. We're problem-solving all the time.” But they're solving the same problems over and over. They're maybe good at reacting. How much have you run across that? I hear a chuckle. That sounds familiar.

Nelson Repenning: No, it's my life in some sense. It sounds very familiar. And I think what's going on there is, well, one way I often teach this, and I got this little gambit from Steve Spear, who you may know, is I sometimes show a little clip from that very famous I Love Lucy episode, the chocolate factory scene, where the chocolates are coming too quickly, and they're stuffing them in their shirts and their hats and eating the chocolates and so on and so forth. And I always ask the students, "Are Lucy and Ethel, the two characters, being creative?" And of course the answer is yes. They're inventing new methods on the fly. But two things are really important here. Number one, that creativity is largely subterranean, in the sense that they're basically making it look like they're following the process when they're not. And number two, that creativity is probably making the problem worse rather than better. So I totally agree with you. And people are right: they are solving problems all the time, but they tend to be private solutions, meaning, again, I'm not doing it in full view, and I'm making it look like I'm following the process when I'm not. And because it's not driven by the basic disciplines of structured problem-solving, very often it's a quick fix at best, and in many cases it makes the problem worse in the long run rather than better. It's like in software: you're literally putting patches on broken stuff as you go, and it all comes back to bite you in the backside later. So I think what we all, I hope, learn from Lean, and what we've tried to capture and broaden in the book, is that if you can take that problem-solving, which is hidden, make it public, and then structure it a little bit, now you're starting to get rid of some of those root causes and build some of those capabilities.
But getting people over that hump, as I'm sure you've experienced as many times as I have, is not a trivial task to say the least.

Mark Graban: And this is touching on, jumping ahead a little bit to the recommendations in the book and Dynamic Work Design as a framework: solving the right problem, not just focusing on symptoms but on root causes. I think it's music to the ears of listeners here. And what I think we're also touching on is structure for discovery: how do we help people surface problems instead of hiding them? That makes me think of the work of Amy Edmondson, down the road at Harvard Business School, on psychological safety. What are your recommendations, or Don's lessons, about helping people feel safe? Because from the beginning of the Lucy video, they're kind of bullied by the supervisor, who is threatening them and creating an environment where they didn't feel safe to say, "Hey, that line's going too fast."

Nelson Repenning: I think it's an interesting question. I think this is a place where what's happened in our field is people have tackled it from one of two angles, and you probably really need to think about both. So I think people that came from more of the operations side have really worked on the work design part, which is very important. So we want to create systems that are self-diagnostic that will show us the problem. And it could be getting rid of inventory or clear signals or whatever. And then I think people on Amy's side have worked much more on the kind of cultural and psychological dynamics, which is to make sure that if I surface a problem, I don't get yelled at, I don't get fired. My own view of this, and this is very much with my system dynamics heritage, is that you need to do both. And I think those two things co-evolve.

If we go off and do an offsite and I tell you that we're going to have a psychologically safe environment, Mark, I promise I'm never going to yell at you. But I stick you back in a poorly designed work system where you can't see the defects and you're always behind. This is a little bit of an overwrought metaphor, but it's akin to going to rehab for two weeks and then I send you back to the same friends and contacts that you had before. You're probably going to get sucked back into it. So I think you need to do both.

And then, on the design side, there are two parts to that. Number one, under the structure-for-discovery principle, it helps a lot when you can create or design the work in ways that let people see problems while they're small. It is much easier for people to raise their hand over a minor deviation, or a problem that's been going on for an hour or two, than over something that's been going on for a couple of weeks. That's just basic psychology.

And then the other thing, which I think we make a huge deal about in the book, and which is a lesson that comes directly from our friends at Toyota, is that senior leaders need to go see the work. They need to really understand what the folks doing the work actually have to deal with every day. Because the moment they go down there and see, "Oh, this is actually what's required to make a car, treat a patient, or sequence a genome," whatever the case might be, they will be much more sympathetic and much less likely to blame people for those workarounds, because now they have a good sense of how challenging it is to actually get the work done. And I think the mindset that a good manager can manage anything, that I don't really need to go to the factory as long as I get the numbers in my spreadsheet correct, is just the root of so many problems we see in organizations. In part, it allows us to blame the folks on the shop floor because we have no idea what they're dealing with. So I think those pieces need to really come together, and we have had the best luck when leaders proceed incrementally: going to see the work and starting to fix the problems themselves, and then, once they get the hang of it, spreading it, rather than launching a big, one-size-fits-all change initiative.

Mark Graban: So we're starting to dig into the five principles, but maybe take a step back and how would you define dynamic work design in a nutshell? And what makes it dynamic? Is it the idea that we know we won't design a perfect system and therefore it needs to evolve, or what else?

Nelson Repenning: I think there's two things going on there, and I would preface this by saying that I don't know if other people experience this, but in my career, in the work I've done, I usually have most of the puzzle pieces before I can see how the puzzle fits together. And so Don and I did many, many projects and we're kind of trying to figure out how to solve this firefighting capability trap problem. And we had tools and processes and principles, but the puzzle really finally all snapped together for me when I finally realized that I think the root of a lot of this is that so many of the traditional management tools and processes that we have are based on–and sorry for the jargon–a sort of static model of the world, which basically means that we're implicitly presuming that we can predict the future with some degree of accuracy, and we can predict our impact on that future with some degree of accuracy.

And you can see this in so many different things once you start looking for it. So one of the little teaching gambits I often use when I work with executives is to ask them how long budget season is. So from the day the budget goes into effect, how many months earlier do you start negotiating and planning the budget? And the typical answer for a midsize or large organization is six months. And then the second question I ask them is, okay, once the budget goes into effect, how long is it that the assumptions in it are accurate enough that it's a useful guide for action? And the answer there is typically three months. And so they immediately see the problem, which is we're spending twice as much creating this thing as using it. And then we spend the rest of the year basically making it look like we're following the budget strategy, when in fact the work has changed. And I think that mismatch is the entry point to the downward spiral into firefighting. Because now suddenly I can't really tell the truth about what's going on because I'm supposed to be following this budget, or I'm supposed to be meeting these targets even though the world has changed. And so smart, capable people get out their toolbox of duct tape and safety pins and they start solving problems privately. And I think that really is what gets the downward spiral going.

So, the first part of dynamic work design is if you admit the fact that we cannot forecast the future perfectly, nor our impact on it, and that we are going to get it wrong, I think you will design the work differently. And then I think a really important corollary is exactly what you said at the outset of the question, which is not only that, but we're not going to design this technology perfectly either. And so if you accept that fact, whether it's a production line or R&D or whatever, you will also design it differently because you know you're going to have to make adjustments as you go, and we want those adjustments to be public rather than private.

I think what this all boils down to is that large organizations have really started to think about organizational change in kind of a wrongheaded way, in the sense that change initiatives are something that big companies do every 18 to 24 months. And they usually tend to be this extremely disruptive thing, often aided and abetted by large consulting companies. We're going to go from a matrix to business units or whatever, and then you do this major reorganization and I have no idea who I report to and my org chart now looks like a spaghetti diagram. And then just about the time I figure out who I report to, we're going to do it again. And I think instead, what really good companies do, what we learn from Toyota, and what hopefully dynamic work design helps people do more broadly, is treat change as something you do every day. How quickly do we sense mismatches? How quickly do we solve problems? How quickly do we readjust? And our experience has been, when you take that mindset with some simple design rules, the gains you can get in performance can be really dramatic.

Mark Graban: So some of the things you are mentioning there sound like maybe examples where an organization was not solving the right problem; the reorg may be just kind of rearranging deck chairs on the Titanic to use that old cliche. And the last manufacturing company I worked for, a little over 20 years ago, I had one business unit where I only worked there two years, but I had heard that they kept chewing up a different president of that business unit every year, kept firing and moving and replacing. And it had happened again. And I overheard my VP's admin talking probably to another admin on the phone in their network and she said something like, “I hope someday they can find somebody that can actually fix that business.” And maybe that business was just broken and dying and it didn't matter who was leading. What's the right problem?

Nelson Repenning: I think that's exactly right. And you referenced the beer game earlier, which is this simulation we've been running at MIT. It's a supply chain game. But one of the things I think is so interesting about it is if you look at the statistical analysis that my colleague John Sterman has done, it really does not matter who sits in those chairs. You could take eight professional supply chain managers, you could take eight high school kids, and they're going to do about the same. And the reason they're going to do about the same is the system turns out to be so poorly designed that it swamps any individual differences. And so I would not be surprised if that's exactly what was happening with that business you referenced, which is either they got a bad business model or the system was poorly designed. And this idea that we're going to wait for this one magical mutant that can run this poorly designed thing and make it successful is just a fool's errand. But I think it's a very common strategy and it's easy for all of us to get sucked into that mindset. And you see it all the time in different contexts. We have a sports team that doesn't play well, well we'll fire the coach or whatever. It happens a lot, but I think people are missing the much bigger picture typically.

Mark Graban: And you mentioned the decades-long-ago quality movement that still exists in some ways, or remnants of it. Going back to W. Edwards Deming, who, somehow in my mind, I picture that he and Jay Forrester were friends. They were of about the same era and may have overlapped, and even though Dr. Deming didn't teach at MIT, I'm sure he was there at times. But the idea of how much performance is driven by the system as opposed to the people working in said system. Lucy and Ethel could have been two characters from some other sitcom struggling in that same system, or those leaders at that company. Or again, in healthcare, unfortunately, another example where the system is poorly designed, or the work system hasn't even been designed except in the loosest of ways; it just happened, it just evolved into what people are doing. And individuals get repeatedly blamed, punished, fired in some cases, prosecuted and convicted and jailed for medication errors that seem bound to happen in that system. People say, “Well, there's gotta be a better way,” but then assume there isn't, because these things keep happening, and therefore it's just a fact of life. And so there's that bad assumption, and the simple countermeasure of punishing the people in the system, not preventing the next medical error. Sorry for climbing on that soapbox. It's frustrating.

Nelson Repenning: Well, and I think it's a very important soapbox because you're right, it happens all the time and there are psychological roots to this. There's a phenomenon in psychology that's known as the fundamental attribution error. Without dragging through all the details, it basically boils down to we are very prone to blaming the person who's closest to the problem, somewhat independent of their culpability. And this is the root of many unsavory features in society that extend far beyond the shop floor. But it happens in organizations all the time. And one of the reasons that I took this role that I have now as the head of our leadership center is I was raised in a scholarly tradition that I think tended to downplay the role of leaders in organizations, right? It was much more about the structure and the context and so on and so forth. But after a while, the contact I had with the real world made it crystal clear that actually leaders have a huge impact on their organizations. No big surprise.

And I think one way they do is just the role modeling they do when something goes wrong. Do you go on a witch hunt to find the person that screwed up, or do you take responsibility and take seriously the context of the impossible position that we put people in? One of the incidents that we talk about in the book, which is one I know reasonably well, is this explosion that BP had in their Texas City refinery. And if you dig just a little bit into the data, what you'll see is that the margin environment for oil refining in the 10 years prior to that incident had just been terrible. Oil refining turns out to be a very cyclical industry. You either make tons of money or you lose tons of money and you just got to ride the wave. The famine had just lasted longer than was typical for that cyclicality. And so of course, every new plant manager was given a prime directive to cut costs. Keep the doors open. We know things will turn around, we just don't know when. So hold the line until it happens.

You're basically asking those people at some point to make a bet-your-job decision to keep it safe. When do I have to go to the boss and say, “I'm not cutting costs anymore,” because now we have gotten through all the fat and we are at the muscle and the bone? I'm sure that person had a mortgage and kids and a spouse and so on and so forth. That is just not a system that is going to yield the right outcomes. Will we get the occasional person that's willing to say, “No, I'm not doing it”? Of course. But I feel like that's the minority rather than the majority. And if you run that experiment enough, you're going to get someone who, even though they're well-intended, is going to come down on the wrong side. And that's when something bad is going to happen, which is exactly what happened. And it's not unique to them. This happens all the time.

Mark Graban: So the five principles of dynamic work design, there are a lot of parallels, I think, to Lean or Toyota Production System practices. Your co-author Don talks about direct experiences with the famed Mr. Ohba from Toyota and TSSC. His son, Hide, has been a guest on this podcast, by the way, reflecting on what he learned from his dad. So very well known in these circles. But “Solve the right problem,” “Structure for discovery,” “Connect the human chain,” “Regulate for flow,” and “Visualize the work,” as those five principles. I think to the listeners here, flow and work visualization are probably really straightforward. So maybe if we can dig into one of the other principles. Tell us a little bit about what you mean by connecting the human chain and what that leads to.

Nelson Repenning: So let me just give a preamble to this by saying that the way Don and I have looked at this is that Lean is a really good example of a dynamic work design. So obviously Lean was invented long before we came up with the phrase. In some sense, what we think we are doing in this book is trying to take a bunch of different work design innovations, lean being a very important one, and generalize why they work. And so you can think of the book, if you want, or those principles as kind of a Rosetta Stone that would help us translate, what do we do in lean versus what do we do in Agile versus what do we do in Six Sigma or whatever? Because I think there are both similarities and differences across all of them. And so sometimes people will hear us say, “Well, that's just lean,” and I think that's in fact exactly true. But in order to take what you would learn on a shop floor and move it into a hospital, we need to know why these things work so that we can make the appropriate modifications.

So with that, the “Connect the human chain” principle is probably the one that has the most richness to it, and there are a couple of things going on. The first, and this is I think very much a lean idea, is that particularly off the shop floor, the organization is often not well-wired together to get problems to the right place. So, a classic example: I see a problem, I pull the Andon cord, I hit the button, the supervisor comes running over. There's a whole, very clearly structured system to make sure that problem gets to the right place.

We were talking about healthcare earlier. A nurse or a physician's assistant comes across a problem with a patient. If it's life-threatening, they're going to push the button, we're going to call a code. That human chain is pretty well connected. But if they're just a little uncomfortable with, “Geez, I don't know, Mrs. Jones doesn't look good,” or “they're a little pale,” or whatever, it's often enormously unclear where I should put this data. Where is it supposed to go? Who am I supposed to call? And anyone who's ever had a loved one in a hospital knows it can be incredibly frustrating to get the right expert or the right help. So I think the first idea in “Connect the human chain” is let's just make sure the signals go to the right places at the right time.

And a nice analogy that we use is, if you think about the fire alarm in your house or your office, if I pull the fire alarm, it doesn't go to a random fire department in the greater Boston area, and they don't call me back and give me a lecture on fire prevention. It goes to the local fire department of Cambridge. They come out, they put out the fire. After that's done, okay, then maybe we'll have a discussion about whether I need extinguishers or more smoke detectors or whatever. So I think that's part one.

And then I think the second part of the “Connect the human chain” principle, and this is something we've gotten a lot of gains out of, is making sure that you put the face-to-face human connection in the right places. And there's a very provocative version of this, which is I've come to believe that most organizations are using electronic communication like email and Slack and so on, and meetings almost exactly backwards. Because if you read the literature on communication, what is a face-to-face meeting good for? It's really good for processing ambiguity and uncertainty. And if you want a kind of more practical version of that, it's good for basically solving problems and making decisions. When you try to do that kind of uncertain, high-bandwidth communication via email or Slack or whatever, what you get, which we've all experienced, is endless iteration. We go back and forth and the email chains get longer and longer and longer, and you have 400 Slack threads with sub-threads and all these different conversations. Most of those things would be far more efficiently done if you and I could just get in the room for 15 minutes or on a Zoom room or whatever and talk it out.

So the other part of “Connect the human chain” is, as you design a process of getting work done, physical or knowledge work, can you be really clear about where there is uncertainty to be processed and make sure that you put the face-to-face communication or put the meetings there? And the reason I say that we often use them almost exactly backwards is this: think about the kind of cartoon of bad organizational practice. I go to an hour-long meeting, 55 minutes of it is watching someone click through slides, during which my eyes kind of glaze over. Then you and I are passing notes or texting each other like, “Well, if this is the strategy, we're going to have to change everything.” And then no one asks a question because there's no time for it because you got to get through all the slides. And then in the hallway on the way back, you and I are trying to figure out the 5,000 changes we're going to have to make to adapt to this new strategy. But now it's happening via Slack and email and you get these long messages. The world would probably be much better if we talked about the new strategy for 10 minutes and then, in the face-to-face meeting, “Okay, now let's work out all the things that we need to change.” So it becomes a two-way conversation rather than a one-way one. And to give you a very practical version of that, we have seen many situations where a well-designed huddle for 15 minutes, in the right place in the process, can take hundreds of emails a day out of people's inboxes just by putting that uncertainty management in the right places.

Mark Graban: One other thing I wanted to ask is, you could look at companies that are successful and look at what they do and how they do it, how leaders behave and what the systems are. And then there's the challenge of helping others learn from that, whether that means adopting or adapting or inventing your own thing. Do you have a favorite example of an organization that you or Don worked with where either these five foundational principles or a variation of them led to some great success?

Nelson Repenning: So let me just preface every question with a preamble. In the book we call these five things principles, and we chose that name very deliberately because I think often what happens in the management world is that people spend a lot of time documenting best practices. So this is what Toyota does. They have cords and they have lines on the floor and they do calisthenics and all the usual things. And the presumption is that if I understood the practice, I could just plop that down in my organization and use it wholesale and I would be successful. And I think there's just ample evidence that that doesn't work. And I think the reason it doesn't work is that every organization's a little bit different. They have a little bit different culture and norms. In some cases, think about taking lean from factories to healthcare, right? That's an enormous leap, a very different environment. And so the reason that we use the word principles is we want people to understand a little bit about why these practices work. And then the idea being if you know why it works, then hopefully you can make smart adaptations to your world so that it will be consistent with your norms and culture and you'll actually get some benefit from it.

I have lots of examples where this happens, but I'll give you my favorite one because I think it's very clear and fun. We were talking about the “Regulate the flow” principle in class, and all the lean people in the world will recognize this as essentially a pull-style system where we only let work into the system when it's ready for it and we focus on flow and so on and so forth. Maybe 10 years ago, I had a student in my project class who was a heart surgeon. And he's watching us talk about factories and moving samples through gene sequencing and so on and so forth. And for his project, he said, “I think in my hospital we run a push system for scheduling patients.” And I was like, “Well, Abi, what do you mean by that?” And he's like, “Basically the flow is very simple. Patients come into the operating room, they have whatever procedure we might do. And then the next stop is the intensive care unit. And then once they've gone through the ICU, they move onto the regular floors. And every day we do count the number of beds that we think will be available in the ICU, but we actually start cases in the morning, usually at 7:30, before we know how many beds are available.” And so the problem that he was actually solving in his project was, sometimes I finish a case and my patient has to sit in the hallway–this is sort of the medical version of duct tape and safety pins–until a bed opens up in the ICU.

And so he created a really simple pull, regulate-the-flow-style system: they moved what they called the bed flow meeting, which was essentially counting the number of beds in the ICU, to 5:30 AM instead of 9:30 or whenever they had been doing it. And then we're only going to start as many cases as we have beds. Super simple, right? Way simpler than on the factory floor. But basically, to use familiar language, the ICU is the bottleneck and we are going to schedule off the bottleneck, or the constraint. And I think what makes this case so great is that once you did that, by starting fewer cases, you actually got more done. Because it turns out that if there's a delay in going from OR to ICU, a patient is more likely to experience complications, which means they stay in the hospital longer. So their capacity actually went up, they cut about seven-tenths of a day off of the average stay in the intensive care unit, and they got about a 20% cost reduction, which in the world of healthcare, where almost every metric is going the other way, is remarkable. You know, for me it was a really high-water mark in terms of teaching. And it was really smart of Abi Manji, the student who did this, to map the principle that we were talking about in these very different contexts and understand how it applied in his hospital. Now, he didn't paint lines on the floor or create inventory boxes for the patients or whatever, which, you've been in this world as long as I have, I mean, crazier things have happened. He understood the basic idea and then was able to come up with a solution for his hospital that worked really well.
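The scheduling rule in that project, count the bottleneck's capacity first and release only that much work into the system, can be sketched in a few lines of Python. This is a hypothetical illustration with invented case names and bed counts, not the hospital's actual system:

```python
# Sketch of the pull-style release rule: count available beds at the
# bottleneck (the ICU) first, then release only that many cases.
# All names and numbers below are invented for illustration.

def cases_to_start(requested_cases, icu_beds_available):
    """Release work only up to the capacity of the bottleneck."""
    return requested_cases[:icu_beds_available]

# Push system: start all five cases at 7:30 and let patients queue
# in the hallway when the ICU is full.
requested = ["case-A", "case-B", "case-C", "case-D", "case-E"]

# Pull system: the 5:30 AM bed-flow meeting finds 3 open ICU beds,
# so only 3 cases are released; the rest wait for tomorrow's count.
print(cases_to_start(requested, 3))  # ['case-A', 'case-B', 'case-C']
```

The point of the sketch is how little mechanism is needed: the entire "regulate for flow" change is moving the capacity count ahead of the release decision, rather than after it.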

Mark Graban: Well, and it sounds like they were focusing on solving the right problem or a very important problem. And one of the things in my experience with system dynamics is the unintuitive, non-obvious cause and effect relationships that we could try to model or at least try to understand as it's happening. Because I think if people fall into the trap of overly simplistic countermeasures, there's that kind of overly simplistic prediction of what's going to happen. Because some might look at it the other way and say pushing the patients into the hallway creates pressure for those people to open up the beds. And there's that blaming of like, they're somehow being lazy and don't want to accept a new patient because it's toward the end of the shift or all these things I've heard thrown at the people working, instead of, as you're describing, looking at the work system. And what's worse in a certain condition, delaying the surgery by a day or being in a hallway or having that next step delayed? Which causes the most harm? That's probably something you could model if you had good data and people that were informed about the medical effects, not just the process.

Nelson Repenning: Right. In fact, there were data from other studies showing that the longer the delay between OR and ICU, the more complications you were going to have. So they knew that, but they couldn't put it in context. And I do think this leads to a really counterintuitive insight, which I see executives struggle with a lot, which is, if you understand the principles of flow, you kind of get the idea that in many cases, by starting less, you get more done. And I think you're exactly right. People just have a really hard time with that because they feel like if I don't keep putting a lot of pressure in, people will slack off and so on and so forth. But in fact, what was happening, right? They had their own version of expediting, which is powerful doctors wouldn't want their particular patient to wait in the hallway. And so they would be lobbying the ICU nurses and docs, “You've just got to get Mr. Jones in there,” because it was a particularly invasive case or something like that. Which of course was just a deadweight loss to their productivity and added chaos. So yeah, you're exactly right.

Mark Graban: You're making me think of something I was involved in right after graduating from MIT in 1999. Dell Computer, which had very good flow. There were certain things they did really well. Not a quote-unquote lean culture, but the one thing they learned–it's making me think–was that their old process had been to start computers based on the most urgent orders, looking at the due date, the delivery date. And then they would start stuff into the system and only then realize there were missing parts. And so boxes full of a chassis and a mishmash of parts, with parts missing, would get literally pulled off the line and stuck somewhere. And they called them–the terminology had evolved–flappers, because the box lids were not shut because the work wasn't finished. And one of the things that a number of us MIT graduates were involved in–not that it was rocket science, but we were there working with other Dell people–was the idea of don't start building the computer until you're convinced you can actually build the whole computer. And that meant some technology looking at inventory visibility and doing the math of, if you only have 20 of this particular hard drive, only start building 20 computers that call for that hard drive. By starting less, they were actually improving flow through the factory. Now, some customers were unhappy, but they were going to be unhappy one way or another.
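The release rule in the Dell story, only start a build whose full parts list is on hand, reduces to an inventory check before release. A minimal sketch, with invented part names and quantities rather than anything from Dell's actual system:

```python
# Sketch of the "don't start what you can't finish" release rule:
# only release orders whose full bill of materials is in stock,
# so nothing leaves the line as an unfinished "flapper."
# Part names and quantities are invented for illustration.

def releasable_orders(orders, inventory):
    """Return IDs of orders that can be fully built, reserving parts as we go."""
    started = []
    for order in orders:
        needed = order["parts"]
        if all(inventory.get(part, 0) >= qty for part, qty in needed.items()):
            for part, qty in needed.items():
                inventory[part] -= qty  # reserve the parts for this build
            started.append(order["id"])
    return started

inventory = {"chassis": 5, "hd-20gb": 2, "ram-128mb": 5}
orders = [
    {"id": 1, "parts": {"chassis": 1, "hd-20gb": 1, "ram-128mb": 1}},
    {"id": 2, "parts": {"chassis": 1, "hd-20gb": 1, "ram-128mb": 1}},
    {"id": 3, "parts": {"chassis": 1, "hd-20gb": 1, "ram-128mb": 1}},
]
# Only two of that hard drive exist, so order 3 waits in the order book
# instead of becoming a half-built box pulled off the line.
print(releasable_orders(orders, inventory))  # [1, 2]
```

As in the ICU example, the mechanism is trivial; the hard part Mark describes is the inventory visibility needed to run the check before release rather than discovering the shortage mid-build.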

Nelson Repenning: And then hopefully with the new system you could see the problem, and then someone could go, “Okay, now we need more hard drives,” or whatever. It's funny, Jay Forrester used to tell me this story, I think it was from Cummins Engine, from, I would imagine, the sixties. Their version of this, they called them “basket cases,” which was engines would go down the line, but it would just be a big basket with all the parts except the engine block. And then it would go into this hospital or rework bin where, once the engine block showed up, some tech would bolt the parts on or whatever. So, job security for you and me, I guess. But it'd be nice if we could make some progress on these problems.

Mark Graban: Well, and I think this book is going to help people with those problems. Again, it's There's Got to Be a Better Way. And I think listeners here know there are better ways, but there are the challenges you've talked about and addressed of how do we help people follow lessons and principles and practices that are proven. But I think the book makes a really good case. Again, it's co-authored by our guest today, Nelson Repenning, and Don Kieffer. There are a lot of great endorsements on the book, including two from former guests on this podcast: Amy Edmondson, who I mentioned, and Robert Sutton, author of books including The No Asshole Rule. Those two, among other endorsements from Adam Grant and Suzy Welch in particular. So, I hope the book's getting a good reception so far. You've done a lot of academic writing. Is this your first effort…

Nelson Repenning: I've done a couple of book-like, practitioner-ish papers, but this is certainly the first book that is really aimed at the practitioner audience. And when people ask me who it was for, I said it was for managers that go through airports. So if you see it in an airport, hopefully you can grab a copy and tell me if you like it.

Mark Graban: And then, real quickly, the imagery of the butterfly busting through the glass.

Nelson Repenning: So it's interesting. The cover that I wanted, which shows why I'm not in marketing or any kind of graphics, was a still from the Charlie Chaplin movie Modern Times. I don't know if you've ever seen it, where he's stuck in the gears. But our editor, I think wisely, decided to go with the idea that, with the right choices, relatively small changes or applications of force can make a big difference. And so I think that's where the butterfly metaphor came from. And it turns out it's visually much catchier than Charlie Chaplin getting chopped up by the gears.

Mark Graban: It is eye-catching. And thank you for the story behind the imagery there. So there will be links to the book and more information in the show notes, and I hope people will check that out. So Nelson, it's really great reconnecting with you after all this time. I will correct the record: I was a student, and I think you used the phrase “great student.” I don't know about that; you're being too kind. I remember, and this is one of those moments that sticks in my mind, you have this pressure in class. Here's a different system, and I'm sure you've talked to students about this, where you're graded partly on raising your hand and speaking up, even us introverts and quant people at MIT. And there was a cue of, there's something I want to say about this case, and I raised my hand. And then some amount of time passed, I started thinking about something else, and you called on me and I just completely froze. I'm sure you don't remember this.

Nelson Repenning: I do not.

Mark Graban: To me it was a little mortifying and I'm like, I don't remember what I was going to say. I basically just said, “Pass.” And I'm sure that wasn't a positive check next to my name by the teaching assistant, but I still graduated, so.

Nelson Repenning: You seem to have survived it pretty well, so I'm not too worried about it, and I do not remember it.

Mark Graban: I'm probably not the only one who froze when called upon in class. But anyway, on that note, thanks for making me reminisce about these things and thank you for coming on the podcast.

Nelson Repenning: My pleasure. Thank you for having me. It's been great and really fun to talk about it, so thank you.



Let’s build a culture of continuous improvement and psychological safety—together. If you're a leader aiming for lasting change (not just more projects), I help organizations:

  • Engage people at all levels in sustainable improvement
  • Shift from fear of mistakes to learning from them
  • Apply Lean thinking in practical, people-centered ways

Interested in coaching or a keynote talk? Let’s talk.

Mark Graban
Mark Graban is an internationally-recognized consultant, author, and professional speaker, and podcaster with experience in healthcare, manufacturing, and startups. Mark's latest book is The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation, a recipient of the Shingo Publication Award. He is also the author of Measures of Success: React Less, Lead Better, Improve More, Lean Hospitals and Healthcare Kaizen, and the anthology Practicing Lean, previous Shingo recipients. Mark is also a Senior Advisor to the technology company KaiNexus.
