95% of Enterprise AI Pilots “Fail”–Just Like Lean? Not So Fast

Every few years–or let's be honest, quite often on social media–we see a statistic making the rounds: “70% of Lean initiatives fail.” It's usually presented as an indictment of the methodology. And quite often, the person sharing it seems to imply: “My initiatives don't fail, but most of yours do.”

Now, it's AI's turn to take this sort of beating.

A recent MIT report, as cited by this Fortune article, claims that 95% of enterprise generative AI pilots are failing to deliver measurable business results. That's an attention-grabbing number. But is it a technology problem? Or, like we've seen in Lean, is it a leadership problem?

As the legendary Lee Corso would have said on College GameDay (his last show was Saturday): "Not so fast, my friend!"
Because in both cases–Lean and AI–it's worth asking:

  • Are we declaring failure too soon?
  • Are we learning from early setbacks, or walking away from them?
  • And are our goals even realistic in the first place?

Mistaking a Lack of Instant Results for Failure

The MIT researchers don't claim the AI models are broken. In fact, they emphasize that the real gap is in how companies adopt and integrate the technology. The findings show that:

  • Many companies rush to deploy tools without aligning them to actual workflows.
  • Most AI budgets are spent on high-visibility applications like sales and marketing–even though the best results come from more operational areas like back-office automation.
  • Line managers aren't empowered to lead adoption.
  • And internal builds (versus vendor partnerships) tend to fail more often.

Does that sound familiar? These are the same patterns behind so many so-called "Lean failures."

This isn't a technology failure–it's an organizational failure to experiment, iterate, and learn.

And that reminds me a lot of how some organizations have mishandled Lean.

I've walked into hospitals that proudly launched Lean “transformations” after touring another organization. Inspired by what they saw, they came home and installed dozens of Kaizen boards–laminated, formatted, and perfectly mounted.

But the boards were empty.

When I asked how long they'd been up, I hoped to hear, “We just put them up yesterday.” Instead: “A few months.” Nobody was using them. Nobody had been trained. Nobody had been asked what problems they faced. The assumption was that copying the tools would lead to engagement and results.

When that didn't happen, the boards came down. The internal Lean team got restructured–or laid off entirely. The organization moved on to the next initiative.

That wasn't a Lean failure. That was a failure to lead.

What We Call “Failure” Is Often Just Incomplete Learning

In The Mistakes That Make Us, I wrote about the difference between making a mistake–and learning from it–versus sweeping it under the rug and moving on.

“Mistakes are inevitable. But failure is not.”

Too often, especially in corporate environments, we treat any deviation from the plan as a failure. If a pilot project doesn't deliver 10X returns in 90 days, we declare it a bust. We forget that real change–whether in culture, operations, or technology–rarely comes from a single “big win.” It comes from deliberate cycles of trial, feedback, and adjustment.

In other words, it comes from learning.

That's why Kaizen, when practiced properly, is so powerful. It's not just about improvement ideas–it's about creating a system where people can try small experiments, reflect on what happened, and build better habits over time.

In Lean, we don't assume every idea will work. We assume that learning what doesn't work is just as valuable as learning what does. The same should be true for AI–and any meaningful transformation.

Are the Goals Realistic?

Another question we rarely ask: Were the goals reasonable to begin with?

In the AI world, we hear promises like “AI will replace 30% of the workforce” or “drive $100 million in new revenue.” When those outcomes don't materialize, we label the initiative a failure.

But maybe the mistake wasn't in the pilot–it was in the expectation.

In Measures of Success, I talk about the importance of understanding variation and context. We can't judge the success of a new system by looking at two data points. We need time, patterns, and a sense of what “normal” looks like.

If your improvement initiative–or your AI pilot–doesn't move the needle after six weeks, it doesn't mean it's not working. It may mean you need to adjust the approach, look deeper at the process, or allow more time for adoption.

Declaring something a failure because it didn't meet an arbitrary target is a mistake in itself.

The Missing Ingredient: Psychological Safety

Both Lean and AI require change. And change–especially when it's uncertain or disruptive–requires psychological safety.

If employees feel unsafe, judged, or penalized for struggling with new tools or processes, they'll disengage. That's true whether you're asking people to contribute to a Kaizen board or pilot a new AI system.

In one of the AI case studies, a CEO mandated that every Monday be “AI Monday.” No meetings, no customer calls–just AI work. When staff didn't get on board fast enough, he replaced 80% of the team.

That's not transformation. That's intimidation.

Real innovation happens when people feel safe to try, to ask questions, and yes–to make mistakes. We saw this at Allina Health, where frontline staff were invited to solve problems they encountered daily. Leaders didn't dictate solutions; they coached, encouraged, and removed barriers. The result wasn't just better processes–it was a culture of ownership.

That's what Lean looks like when it works. And it's what AI could look like, too–if we lead it well.

People need to feel safe saying things like:

  • "That goal seems unrealistic."
  • "I don't agree with our approach."
  • "This isn't working out."
  • "We need to adjust."

Iterate Instead of Abandon

So what should organizations do when a Lean effort or an AI pilot “fails” to deliver immediate results?

Instead of abandoning it:

  • Ask better questions. What did we learn? What got in the way? What surprised us?
  • Adjust the plan. Maybe the scope was too big, or the tool didn't match the workflow.
  • Support the people. Are employees trained? Do they feel heard? Are managers coaching or commanding?
  • Keep experimenting. Small, safe-to-fail experiments build knowledge and confidence.

In other words: embrace the Lean mindset–even if the initiative isn't labeled “Lean.”

Final Thought

When we hear that 70% of Lean efforts or 95% of AI pilots are failing, we should pause–not to dismiss the number, but to interrogate what's behind it.

Is the methodology flawed? Or is it the implementation?

More often than not, it's the latter.

The real failure isn't when something doesn't work the first time.

The real failure is when we stop learning.

And that's something we can choose to change–every single day.



Let’s build a culture of continuous improvement and psychological safety—together. If you're a leader aiming for lasting change (not just more projects), I help organizations:

  • Engage people at all levels in sustainable improvement
  • Shift from fear of mistakes to learning from them
  • Apply Lean thinking in practical, people-centered ways

Interested in coaching or a keynote talk? Let’s talk.



Mark Graban
Mark Graban is an internationally recognized consultant, author, professional speaker, and podcaster with experience in healthcare, manufacturing, and startups. Mark's latest book is The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation, a recipient of the Shingo Publication Award. He is also the author of Measures of Success: React Less, Lead Better, Improve More; Lean Hospitals; Healthcare Kaizen; and the anthology Practicing Lean, previous Shingo recipients. Mark is also a Senior Advisor to the technology company KaiNexus.
