A new report from the HHS Office of Inspector General (OIG), released in July 2025, found that hospitals failed to capture 49 percent of patient harm events among hospitalized Medicare patients. The report is titled “Hospitals Did Not Capture Half of Patient Harm Events, Limiting Information Needed to Make Care Safer.”
That number — roughly half — is actually an improvement. In 2012, the OIG found that hospitals missed 86 percent of events. So there's been progress. But the reasons hospitals gave for not capturing harm events tell a deeper story about how organizations think about problems, and whether they're set up to learn from them.
Why Events Were Missed
The most common reason hospital staff gave for not capturing harm events was that they didn't consider them to be harm at all. That accounted for 46 percent of missed events. Staff described these events as known complications, expected side effects, or part of the normal course of care.
Another 16 percent weren't captured because it wasn't “standard practice” to report them — meaning the hospital's own policies didn't require it. Some hospitals only tracked events that resulted in serious injury or death, or events on a specific list maintained by CMS, accreditation organizations, or States.
Another 20 percent weren't captured because the harm was difficult to distinguish from the patient's underlying disease.
What stands out to me is the pattern: hospitals applied narrow definitions of harm, and those definitions created blind spots. A harm event reportable at one hospital might not be considered reportable at another. Without a shared definition of what counts, it's hard to see the full picture — and you can't improve what you can't see.
You can't solve problems that aren't reported.
What Happened When Events Did Get Captured
Even when hospitals did capture events, the response often stopped short of system-level learning. Of the captured events for which the OIG had information, only 17 out of 48 were investigated.
And of those 17, only 11 led to any kind of improvement or process change.
The improvement actions hospitals reported taking were things like training staff and enhancing monitoring. Those aren't bad responses, but they're person-centered responses. They assume the gap was in the individual, not in the system that allowed the event to happen. As I've written about many times, retraining the individual after an error is often the go-to reaction, even when the original training wasn't what failed.
In Lean Hospitals, I share the story of a histology lab where three specimens were lost on the same day. The hospital's initial “root cause analysis” concluded that the technologist needed to be retrained. The lab director pushed back, went to the gemba, and found the real causes:
- short staffing,
- an artificial deadline, and
- no standardized process for communicating when the team was working shorthanded.
It wasn't a people problem. It was a system problem.
The OIG report describes a pattern that looks very similar — across an entire national sample.
Fear, Futility, and the “Why Bother?” Problem
The OIG report doesn't use the phrase “psychological safety,” but it describes conditions that are closely related. What struck me most was how many of the barriers to reporting sounded less like fear and more like futility.
Research by Ethan Burris, PhD, at the University of Texas at Austin, has identified two primary reasons employees stay silent: fear and futility. Fear is the classic psychological safety concern — people worry they'll be punished or blamed.
Futility is something different. It's when people have learned that speaking up, while not dangerous, just isn't worth the effort because nothing changes.
Related post: Fear and Futility: Why People Don't Speak Up – and How Lean Leaders Can Remove Both
The OIG findings lean heavily toward the futility side. It didn't sound like staff were afraid to report — they just didn't think the events qualified, or they'd learned that their hospital didn't treat those events as reportable.
And when events were reported, few were investigated, and fewer still led to changes.
That's a feedback loop that reinforces silence — not through fear, but through the slow erosion of “why bother?”
I've heard people in healthcare say it quite literally:
“I'm not afraid to speak up. It's just not worth the effort.”
That's not a courage problem. That's a system problem.
What the OIG Recommends
The OIG made three recommendations.
First, that AHRQ and CMS work together to align definitions of patient harm across the industry and create a shared taxonomy.
Second, that CMS ensure surveyors prioritize the QAPI (Quality Assurance and Performance Improvement) requirement when assessing hospitals.
Third, that CMS instruct Quality Improvement Organizations to help hospitals identify weaknesses in their surveillance systems.
I think we can do better than the language of "surveillance," though, and find something more collaborative.
The first recommendation addresses a real gap. Without a common definition of what constitutes harm, hospitals end up with wildly different reporting thresholds. That's not a recipe for learning at scale.
The second and third recommendations are about accountability and support — making sure the infrastructure for tracking and responding to harm actually functions. AHRQ concurred with the first recommendation. CMS concurred with the first and third, and neither concurred nor nonconcurred with the second.
Where Does This Leave Us?
More than twenty years into national attention on patient safety, we're still in a place where half of harm events go uncaptured and most captured events don't lead to system-level improvements.
The numbers have gotten better. But the underlying dynamics — narrow definitions, person-centered responses, and the slow drift toward “it's not worth reporting” — are familiar patterns for anyone who's worked in healthcare improvement.
The OIG report is a systems-level finding about a systems-level problem. Hospitals aren't failing to capture harm because staff don't care. They're operating within structures — definitions, policies, incentive systems — that make it rational to miss things.
What would it look like if hospitals treated every harm event as a signal from the system, rather than a question of whether it meets a reporting threshold?