People keep asking me a reasonable question:
“Why not just use ChatGPT?”
After all, ChatGPT “knows” a lot about Lean. You can ask it about value stream mapping, PDSA cycles, or the Toyota Production System and get a decent answer. So why spend months building a custom Lean healthcare AI coach?
Because “decent” isn't good enough when the topic is Lean thinking in healthcare. And in some cases, generic AI tools are actively harmful: they'll give you bad information.
The Problem with Generic AI and Lean
Here's an experiment you can try right now. Open ChatGPT and type:
“How can hospitals use Lean to reduce headcount and cut costs?”
ChatGPT will likely give you a helpful, well-structured answer. It might mention process optimization, waste reduction, and efficiency gains. It might frame headcount reduction as a natural outcome of Lean.
And it would be reinforcing one of the most damaging misconceptions in healthcare improvement.
Lean is not a headcount reduction program. Labor is typically around 60% of a hospital's operating costs, so the impulse to cut headcount is understandable — but counterproductive. Layoffs sacrifice Safety, Quality, Delivery, and Morale in an attempt to improve Cost. The gains are short-lived and the damage is lasting. Staff lose trust, discretionary effort disappears, and the next “improvement initiative” meets rational resistance.
The most successful Lean organizations — including hospitals like Virginia Mason — have made explicit “no layoffs due to Lean” commitments. Freed-up capacity gets redeployed to handle growth, reduce backlogs, cross-train, or fill vacancies through natural attrition. The people are not the problem. The processes are.
ChatGPT doesn't know that. Or more precisely, it doesn't have a point of view about it. It's designed to be helpful, which often means agreeable. Ask it to validate a bad framing and it will.
My AI coach won't.
Read more: “How Do I Use Lean to Reduce Headcount?” — Why ChatGPT's Answer Should Worry You
Related post: How Hospitals Got the Wrong Idea That Lean Is Only About Cost Reduction
What I Built Instead
The Lean Hospitals AI Coach is grounded in the content of my book, but it's more than a book search engine. It has guardrails — a set of principles that shape every response, whether the answer comes from the book or from the broader Lean knowledge embedded in the underlying AI models.
Those principles include things like:
Respect for People is foundational, not optional. Lean is done with people, not to them. As Jamie Bonini of Toyota puts it, “If the employees are upset, it's not really TPS.”
Safety, Quality, Delivery, and Cost (SQDC) must go hand in hand. There are no acceptable trade-offs. Cost follows when you improve the others. If someone says “quality is assumed” or “safety is assumed,” the AI pushes back on that framing.
Mistakes are system feedback, not moral failings. When someone describes an error, the AI asks “What system conditions allowed this to happen?” not “Who caused this?”
Lean is never “done.” It's a long-term management philosophy, not a project with an end date. If your CEO wants results in 90 days, the AI will help you think about early wins while being direct that a 90-day Lean “implementation” is a setup for failure.
Leaders shape culture through how they respond to bad news. The moment of failure is the moment culture gets created.
You can ask the AI what its guardrails are and it will tell you openly. That transparency is deliberate. If you're going to use an AI tool to support your improvement work, you should know what it believes.
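If you're curious what “guardrails” means mechanically, here's a purely illustrative sketch of how principles like these can be expressed as system-level instructions layered on top of the underlying model. This is not the coach's actual prompt or code — just a paraphrase of the general approach, with wording of my own.

```python
# Purely illustrative: one common way to implement guardrails like these
# is as system-level instructions prepended to every conversation.
# The coach's actual prompt and code aren't shown here; this is a
# paraphrase of the principles described above, not the real thing.
GUARDRAILS = [
    "Respect for People is foundational; Lean is done with people, not to them.",
    "Safety, Quality, Delivery, and Cost go hand in hand; push back on "
    "'quality is assumed' or 'safety is assumed' framings.",
    "Treat mistakes as system feedback: ask what conditions allowed the error, "
    "not who caused it.",
    "Lean is a long-term management philosophy, not a project with an end date.",
    "Never frame Lean as a headcount-reduction or layoff program.",
]

def build_system_prompt() -> str:
    """Assemble the guardrail principles into one system message that
    would be prepended to every conversation with the underlying model."""
    principles = "\n".join(f"- {g}" for g in GUARDRAILS)
    return (
        "You are a Lean healthcare coach. Apply these principles in every response:\n"
        + principles
    )
```

The point of encoding principles this way is that every answer passes through the same lens, no matter how the question is framed.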
What It Does Differently
A few specific things the Lean Hospitals AI handles that generic tools don't:
It corrects misconceptions instead of reinforcing them. Ask about Lean Six Sigma and it will explain that Lean and Six Sigma have different origins, philosophies, and areas of emphasis — rather than blending them uncritically. Ask about belt certifications and it won't treat them as a meaningful measure of Lean competence.
It adapts to your role. A C-suite executive gets a different response than a frontline nurse or a Lean coordinator. Not different content — different framing, different depth, different emphasis on what that person can actually influence.
It protects patient privacy. Healthcare professionals bring real problems to AI tools, which means someone will eventually type a patient's name into the chat. The tool detects likely PHI — medical record numbers, dates of birth, names in clinical context — and blocks the message before it reaches the AI model. Flagged conversations aren't saved to the server. Generic AI tools don't even attempt this. (A simplified sketch of this kind of pattern-based screening appears after this list.)
It won't blame workers. If someone describes a situation where a nurse made an error, the AI redirects to system design, error-proofing, and leadership accountability. It understands that “better training” alone rarely fixes systemic problems, and that zero-tolerance rhetoric for human error is counterproductive.
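To make the PHI screening idea concrete, here's a minimal sketch of the kind of pattern-based detection involved, written in Python. This isn't the coach's actual filter, which is more extensive (and, as I note below, still imperfect); the pattern names and regular expressions here are simplified illustrations.

```python
import re

# Illustrative only: a few regex patterns for structured identifiers.
# Real PHI detection needs much more than this (names in clinical context,
# free-text dates, and so on), and the coach's actual filter isn't shown
# here; this is just a sketch of the general pattern-matching approach.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date_of_birth": re.compile(
        r"\b(dob|date of birth)[:\s]+\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b", re.IGNORECASE
    ),
    "medical_record_number": re.compile(
        r"\b(mrn|medical record (number|#))[:\s#]*\d{5,}\b", re.IGNORECASE
    ),
}

def likely_phi(message: str) -> list[str]:
    """Return the names of any PHI patterns found in the message."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(message)]

def screen_message(message: str) -> str | None:
    """Return the message if it looks clean; return None (block it) if it
    appears to contain PHI, so it never reaches the AI model or storage."""
    return None if likely_phi(message) else message

if __name__ == "__main__":
    print(likely_phi("Patient MRN: 0048213, DOB: 03/14/1962, fell during transfer"))
    # -> ['date_of_birth', 'medical_record_number']
```

Structured identifiers like these are the easy part. The harder problem is a patient's name buried in a narrative description, which is why the AI itself is also instructed to steer users away from sharing identifying details.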
Is It Perfect?
No. I'm still refining it, very much in the PDSA spirit.
The AI occasionally gives responses that are too generic, or misses a nuance I'd catch in a live conversation. I keep refining the guardrails and the guidance the AI gives in certain important situations.
The PHI filter catches structured patterns but can't identify every way someone might share identifying details in narrative form. The guardrails are extensive but I keep finding edge cases to address.
That's the nature of continuous improvement. The tool today is better than it was a month ago, and next month it will be better than it is today.
Try It
The Lean Hospitals AI Coach is available now at leanhospitalsbook.com/ai.
Try the full platform for free — for 48 hours — no account or credit card required.
A few things worth trying:
Ask it “How can hospitals use Lean to reduce headcount?” and compare the response to what ChatGPT gives you.
Ask it “What are your guardrails?” and see what it says.
Ask it about a Lean program that failed and see how it responds.
I'd genuinely love to hear what you think — what works, what doesn't, and what you wish it did differently. Reply to this post or use the feedback button in the chat. I read everything. Thanks for your input!
Frequently Asked Questions
Can't I just use ChatGPT to learn about Lean in healthcare?
You can, but with caution. Generic AI tools like ChatGPT lack guardrails for Lean thinking and may reinforce common misconceptions — such as framing Lean as a cost-cutting or headcount reduction program. A purpose-built tool with philosophical guardrails will give more reliable, principled responses.

What is the Lean Hospitals AI Coach?
It's an AI coach grounded in the book Lean Hospitals by Mark Graban. It answers questions about Lean in healthcare with built-in guardrails that reflect core Lean principles: Respect for People, systems thinking, no-blame approaches to error, and SQDC (Safety, Quality, Delivery, Cost) without trade-offs. It adapts responses based on the user's role and screens out patient-identifying information.

How is it different from generic AI tools like ChatGPT?
The Lean Hospitals AI Coach has a point of view. It corrects common Lean misconceptions rather than reinforcing them, pushes back on blame-the-worker framing, adapts to the user's role (C-suite, frontline, coordinator), and detects likely PHI before it reaches the AI model. Generic tools do none of these things.

Does it protect patient privacy?
Yes. The tool scans messages for likely patient-identifying information — medical record numbers, dates of birth, SSNs, and names in clinical context — and blocks them before they reach the AI. Flagged conversations are not saved to the server. The AI itself is also instructed to redirect users away from sharing identifying details.

Is there a free trial?
Yes. A free 48-hour trial period is available at leanhospitalsbook.com/start. Full access is available with a subscription (monthly or yearly).
If you’re working to build a culture where people feel safe to speak up, solve problems, and improve every day, I’d be glad to help. Let’s talk about how to strengthen Psychological Safety and Continuous Improvement in your organization.