January through March, we reviewed 47,000 encounters and found $8 million in missed HCCs. The CEO called it our best retrospective quarter ever. I called it proof that our retrospective risk adjustment process was broken beyond repair.
Here’s why: those encounters were spread across twelve months, but we found the same types of misses in January 2023 that we found in December 2023. Twelve months of identical failures. We weren’t getting better. We were just repeating the same month of mistakes twelve times.
The Pattern Blindness Problem
We review thousands of charts every quarter but never stop to ask: why did we miss these exact same HCCs last quarter? And the quarter before that?
I mapped our retrospective finds for two years. Shocking patterns emerged. We missed depression HCCs in 73% of psychiatry notes. Every quarter. We missed CKD progression in 81% of nephrology visits. Every quarter. We missed CHF specificity in 67% of cardiology encounters. Every. Single. Quarter.
The retrospective review would find these misses, celebrate the recovery, then miss them again next quarter. Like goldfish discovering the castle in their bowl, we were perpetually surprised by completely predictable failures.
March’s “wins” were January’s failures repeated. We’d find $2 million in missed depression codes, submit them, feel successful, then miss the next quarter’s depression codes exactly the same way. The retrospective review wasn’t fixing problems; it was just documenting them repeatedly.
The Whack-a-Mole Workflow
Our retrospective process had become medical whack-a-mole. Problems pop up, we hit them, they disappear briefly, then pop up somewhere else. Except it’s the same problems in the same places. We’re just too busy whacking to notice.
Take diabetes complications. Every retrospective review found hundreds of missed diabetic neuropathy codes. We’d capture them, submit them, move on. Next quarter? Same misses, same providers, same documentation patterns. The mole wasn’t moving. We were just bad at the game.
The workflow reinforced blindness. Coders reviewed charts in isolation, found missed codes, submitted them, and moved to the next chart. Nobody asked, “Why do Dr. Smith’s diabetes patients always have missed complications?” We treated the misses as unique when they were systematically identical.
Sarah in coding mentioned this once: “I feel like I’m coding the same missed HCCs for the same providers every quarter.” She was right. But we were too focused on volume metrics to listen. Finding the miss mattered more than preventing it.
The Prevention Revolution We Ignored
After that brutal realization, we changed everything. Instead of celebrating found codes, we started investigating why they were missed.
Every retrospective find now triggers a prevention protocol. Found missed depression codes in psychiatry? Don’t just submit them. Figure out why psychiatric notes aren’t being coded initially. Fix the root cause, not just the symptom.
We discovered embarrassingly simple problems. Psychiatry notes went to a different coding queue that was perpetually backlogged. Nephrology used templates that buried diagnoses on page six where coders never looked. Cardiology documented in acronyms that coders didn’t understand.
These weren’t complex problems requiring million-dollar solutions. They needed someone to notice patterns and ask obvious questions. But retrospective review rewards finding, not fixing.
The Feedback Loop That Actually Works
Now, every retrospective miss generates a prevention task. Not a report. Not a meeting. A specific action to prevent that exact miss next quarter.
Found missed CKD codes? The prevention task: train coders on nephrology terminology. Found missed CHF specificity? The task: create a cardiology documentation guide. Found missed depression severity? The task: route psychiatry notes to specialized coders.
We track prevention effectiveness obsessively. If we found 100 missed diabetic neuropathy codes in Q1, we better find fewer than 20 in Q2. If not, the prevention failed and we try something else.
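That tracking rule is simple enough to automate. Here is a minimal sketch in Python, assuming retrospective finds are logged as (quarter, HCC type, count) tuples; the field names, counts, and the 80% reduction threshold are illustrative assumptions, not from any real coding system.

```python
from collections import defaultdict

# Hypothetical retrospective-find counts per quarter.
finds = [
    ("Q1", "diabetic_neuropathy", 100),
    ("Q2", "diabetic_neuropathy", 45),
    ("Q1", "ckd_progression", 60),
    ("Q2", "ckd_progression", 10),
]

# Prevention "worked" only if finds drop by at least 80% quarter over quarter
# (the 100 -> fewer than 20 rule above).
TARGET_REDUCTION = 0.80

def prevention_report(finds, prev_q, next_q):
    """Label each HCC type's prevention task as prevented or failed."""
    counts = defaultdict(dict)
    for quarter, hcc, n in finds:
        counts[hcc][quarter] = n
    report = {}
    for hcc, by_q in counts.items():
        before = by_q.get(prev_q, 0)
        after = by_q.get(next_q, 0)
        reduction = 1 - after / before if before else 0.0
        status = "prevented" if reduction >= TARGET_REDUCTION else "failed"
        report[hcc] = (status, reduction)
    return report

for hcc, (status, reduction) in prevention_report(finds, "Q1", "Q2").items():
    print(f"{hcc}: {status} ({reduction:.0%} reduction)")
```

With these sample numbers, diabetic neuropathy drops only 55% and gets flagged as a failed prevention, so the team tries something else; CKD progression clears the bar.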
The metric that matters isn’t how much we find in retrospective review. It’s how much we don’t find because we caught it prospectively. Declining retrospective finds mean improving prospective capture. Growing finds mean repeated failure.
Your Pattern Audit Exercise
Print your last four quarters of retrospective finds. Highlight every HCC type that appears in all four quarters. That’s your failure pattern. You’re not finding opportunities; you’re documenting consistent process breakdowns.
Group misses by provider specialty. If 80% of cardiology HCCs are missed prospectively, you don’t have a coding problem. You have a cardiology workflow problem. But your retrospective review won’t tell you that unless you look for patterns.
Track the age of found HCCs. If you’re consistently finding six-month-old misses, your prospective process has a six-month blind spot. The retrospective review isn’t early detection; it’s disaster recovery.
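The first two audit steps can be sketched in a few lines of Python, assuming each retrospective find is logged with its HCC type, provider specialty, and quarter; all names and counts here are illustrative, not real data.

```python
from collections import Counter

# Hypothetical log of four quarters of retrospective finds.
finds = [
    {"quarter": "Q1", "hcc": "depression", "specialty": "psychiatry"},
    {"quarter": "Q2", "hcc": "depression", "specialty": "psychiatry"},
    {"quarter": "Q3", "hcc": "depression", "specialty": "psychiatry"},
    {"quarter": "Q4", "hcc": "depression", "specialty": "psychiatry"},
    {"quarter": "Q1", "hcc": "chf_specificity", "specialty": "cardiology"},
]

# Step 1: HCC types that appear in all four quarters are the failure pattern.
quarters_by_hcc = {}
for f in finds:
    quarters_by_hcc.setdefault(f["hcc"], set()).add(f["quarter"])
failure_patterns = [hcc for hcc, qs in quarters_by_hcc.items() if len(qs) == 4]

# Step 2: group misses by specialty to expose workflow problems.
by_specialty = Counter(f["specialty"] for f in finds)

print("Repeating every quarter:", failure_patterns)
print("Misses by specialty:", by_specialty.most_common())
```

In this toy data, depression surfaces in all four quarters and psychiatry dominates the specialty grouping: a workflow problem, not a coding problem. The same grouping trick extends to the third step by bucketing finds on the gap between service date and capture date.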
Here’s the uncomfortable truth: successful retrospective risk adjustment should find less value every quarter, not more. Growing finds means growing problems. If you’re celebrating record retrospective recovery, you’re actually celebrating record prospective failure.
Stop treating retrospective review as a revenue recovery exercise. Treat it as failure analysis. Every found code is evidence of a broken process. Fix the process, and the retrospective finds disappear. Keep celebrating the finds, and you’ll be finding the same misses forever.