The Metric Nobody Tracks
Ask a health plan’s risk adjustment team how many codes their retrospective program added last quarter, and they’ll have the number instantly. Ask how many codes the program removed, and you’ll get silence, a vague estimate, or a confession that they don’t track it. That gap tells you everything about how the program is designed and how much regulatory exposure it carries.
The delete rate, the ratio of codes removed to codes added during retrospective chart review, is the single most revealing metric for assessing whether a program operates as a compliance function or a revenue function. A program with a zero or near-zero delete rate is, by definition, an add-only program. And add-only programs are exactly what OIG's February 2026 guidance identified as high-risk and what the DOJ targeted in its settlements with Aetna ($117.7 million) and Kaiser ($556 million).
What a Healthy Delete Rate Looks Like
There’s no industry benchmark for delete rates because the industry has never been required to report them. But the logic is straightforward. If a program reviews thousands of charts and finds codes to add but never finds a single code that should be removed, that result is statistically implausible. Clinical conditions resolve. Documentation quality varies. Prior submissions contain errors. History-of conditions get carried forward inappropriately. Any honest, thorough review of a large chart population will identify codes that can’t be supported.
A program that produces a meaningful delete rate, one where removals represent a material percentage of total coding activity, demonstrates something important: the program is looking in both directions. It’s not optimizing for volume. It’s evaluating documentation quality across the full set of submitted and potential codes. That’s what CMS means when it describes supplemental data submission as a two-way street.
The specific ratio will vary by plan, by member population, and by the quality of prior submissions. What matters isn’t hitting a target number. What matters is that the number isn’t zero. A zero delete rate is a red flag visible in internal data, in audit analytics, and in the population-level coding patterns CMS monitors.
Why Plans Don’t Track It
Most plans don’t measure delete rates because their programs weren’t designed to produce deletions. The technology identifies codes to add. The workflow processes additions. The metrics reward volume. Deletions, when they happen at all, occur as exceptions rather than as a structured part of the process. They aren’t tracked because they aren’t valued.
This is a leadership problem, not a technology problem. When leadership defines program success as codes added and revenue recovered, the entire organizational structure optimizes around those metrics. Coders focus on adds. Quality teams validate adds. Reports showcase adds. Deletions are invisible because the performance framework doesn’t recognize them.
Changing this requires executive sponsorship. The CFO needs to understand that a removed code protects more revenue than a submitted code that gets clawed back. The compliance officer needs to present delete rates alongside add rates in board reporting. The coding team needs performance metrics that value accuracy, with deletions recognized as a specific type of accuracy.
Making the Delete Rate Visible
The first step is measurement. Pull your retrospective program data for the last four quarters. Calculate the ratio of codes removed to codes added. If the number is zero, you’re running an add-only program in a regulatory environment that has explicitly identified add-only programs as high-risk. If the number is low but present, evaluate whether the deletions are systematic or incidental.
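As a rough illustration of that measurement step, here is a minimal Python sketch of the calculation, assuming you can export quarterly counts of codes added and codes removed from your retrospective program. The quarterly figures and field names below are hypothetical placeholders, not data from any real plan.

```python
# Minimal sketch of the delete-rate calculation described above.
# The quarterly figures are hypothetical; substitute the add/remove
# counts exported from your own retrospective program.

quarters = [
    {"quarter": "Q1", "codes_added": 4200, "codes_removed": 0},
    {"quarter": "Q2", "codes_added": 3900, "codes_removed": 12},
    {"quarter": "Q3", "codes_added": 4450, "codes_removed": 8},
    {"quarter": "Q4", "codes_added": 4100, "codes_removed": 15},
]

total_added = sum(q["codes_added"] for q in quarters)
total_removed = sum(q["codes_removed"] for q in quarters)

# Delete rate: codes removed relative to codes added over the period.
delete_rate = total_removed / total_added if total_added else 0.0

print(f"Codes added:   {total_added}")
print(f"Codes removed: {total_removed}")
print(f"Delete rate:   {delete_rate:.2%}")

if total_removed == 0:
    print("Zero delete rate: the program is operating add-only.")
```

The same calculation run per quarter, rather than across the full year, will show whether deletions are a steady part of the workflow or a one-time cleanup.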
The second step is technology. Ensure your coding tools evaluate documentation for both additions and removals with the same evidence-based rigor. AI should flag unsupported existing codes with the same specificity it uses to identify missed diagnoses. The coder should see both sets in a single workflow.
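To make the single-workflow idea concrete, here is a hypothetical sketch of how a chart review result might carry both suggested additions and flagged removals into one coder queue. The data model, field names, and example findings are assumptions for illustration, not a description of any particular vendor's tooling.

```python
# Hypothetical data model for a two-direction chart review result.
# "ADD" findings are potential missed diagnoses; "REMOVE" findings are
# previously submitted codes the documentation does not support.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodingFinding:
    icd10_code: str
    direction: str   # "ADD" or "REMOVE"
    evidence: str    # pointer to the supporting (or missing) documentation

@dataclass
class ChartReviewResult:
    chart_id: str
    findings: List[CodingFinding] = field(default_factory=list)

    def queue_for_coder(self) -> List[CodingFinding]:
        # Present additions and removals together so the coder evaluates
        # both directions in one pass, rather than an add-only worklist.
        return sorted(self.findings, key=lambda f: f.direction)

review = ChartReviewResult(
    chart_id="CHART-001",
    findings=[
        CodingFinding("E11.9", "ADD", "A1c 8.2% documented; no diabetes code submitted"),
        CodingFinding("I50.9", "REMOVE", "Heart failure carried forward; no current evaluation in record"),
    ],
)

for f in review.queue_for_coder():
    print(f.direction, f.icd10_code, "-", f.evidence)
```

The design point is simply that removals are first-class findings in the same queue, which is what makes them countable for the delete-rate reporting described above.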
Plans that make their delete rate a tracked, reported, and optimized metric are operationalizing the two-way standard CMS requires. Retrospective risk adjustment coding programs measured on both directions produce the balanced coding profiles that regulators recognize as compliance-driven rather than revenue-driven. That distinction, visible in the data, is what separates programs that survive scrutiny from programs that generate settlements.
Read more: https://eopis.co.uk/





