Risk Adjustment

Using Technology to Reduce the Risk of RADV Audit


The use of analytics to proactively review and oversee coding and submission processes has become more critical than ever. Recent reports by the Office of the Inspector General (OIG) have sounded the alarm on billions of potentially improper payments to Medicare Advantage organizations (MAOs), primarily due to unsubstantiated or non-compliant diagnoses. The Centers for Medicare & Medicaid Services (CMS) estimates that such practices could account for nearly 10% of payments made to these organizations. This comes at a time when Medicare is facing growing solvency and affordability challenges as the number of enrollees, and spending, continues to rise.

These factors combined have prompted a major crackdown on inaccurate or fraudulent risk adjustment scores. In addition to CMS increasing the number of risk adjustment data validation (RADV) audits it performs each year, OIG has begun its own targeted audits aimed at diagnoses that fail to comply with federal regulations. Preparing for and undergoing an audit is an enormous task with significant consequences. Health plans may see reductions in monthly CMS payments of up to three times the damages sustained by the government, plus a civil monetary penalty of $5,500 to $11,000 for each false claim. Lawsuits and negative media attention can also damage an organization's reputation and brand and hurt its ability to attract and retain members.

With this increased level of scrutiny, proactive oversight of coding and submission processes is no longer optional. But instead of only looking for undercoding or gaps, health care organizations need to look for overcoding as well. In this landscape, even plans that did not think they were on the radar for RADV may now be at risk, and all plans should prepare for some kind of audit each year.

OIG Audit Techniques: What You Need to Know

OIG is increasingly using data analysis to audit noncompliant codes, and recently demonstrated in small pilots that it can easily uncover these codes using data alone. The graph below represents one such audit in which the agency targeted seven key areas. The first three — acute stroke, acute heart attack, and acute stroke and heart attack combined — were geared toward finding acute conditions documented in a provider’s office without record of an inpatient stay.

The next three — embolism, vascular claudication, and major depressive disorder — pertained to diagnoses documented by providers in the absence of medication, or in the presence of a different medication than would have been expected. For embolism, OIG was looking for this condition without an anticoagulant. For vascular disease, it was looking for this condition with a medication that suggested it should have been a different condition. For major depression, it was looking for this condition without a prescribed antidepressant.

The final analysis targeted 'fat finger' diagnoses, which OIG found simply by looking for common 'flip flops' (codes with transposed characters) and comparing them against other sources of clinical information to determine whether the potentially transposed code was actually valid. Across the board, these studies showed a 60% hit rate (86% for some codes). Going forward, we expect OIG to scale this pilot program and use it as a model for how it will assess health plans in the future.
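The rule patterns described above can be sketched in a few lines of code. This is a simplified illustration, not OIG's actual methodology: the field names (`member_id`, `dx_code`, `setting`, `drug`), the drug list, and the ICD-10 prefixes are all assumptions chosen to show the shape of the logic.

```python
ANTICOAGULANTS = {"warfarin", "apixaban", "rivaroxaban", "heparin"}  # illustrative, not exhaustive

def flag_acute_without_inpatient(dx_rows, acute_prefix="I63"):
    """Flag acute diagnoses (default: acute stroke, ICD-10 I63.x) coded
    for members who have no inpatient claim on file."""
    inpatient_members = {r["member_id"] for r in dx_rows if r["setting"] == "inpatient"}
    return [r for r in dx_rows
            if r["dx_code"].startswith(acute_prefix)
            and r["member_id"] not in inpatient_members]

def flag_dx_without_drug(dx_rows, rx_rows, dx_prefix, drug_set):
    """Flag diagnoses (e.g., embolism) for members with none of the
    expected medications (e.g., anticoagulants) on their med list."""
    treated = {r["member_id"] for r in rx_rows if r["drug"] in drug_set}
    return [r for r in dx_rows
            if r["dx_code"].startswith(dx_prefix)
            and r["member_id"] not in treated]

def flip_flops(code):
    """All codes reachable by transposing two adjacent characters,
    the 'fat finger' pattern screened for in the final analysis."""
    swaps = set()
    for i in range(len(code) - 1):
        chars = list(code)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        swaps.add("".join(chars))
    swaps.discard(code)  # swapping identical characters yields the original
    return swaps
```

In practice, each flip-flop candidate would then be checked against other clinical information (labs, medications, encounter history) before a code is treated as suspect.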

Compliance-Focused Chase Analytics

The good news is that OIG has published the results of this study. When building chase lists, health plans should be sure to include these compliance red flags, retrieving charts from providers that have coded these unlikely conditions and then having coders validate the diagnoses in question. To this end, Episource has built our own Compliance Pack using these rules, along with additional rules we've gathered from other OIG publications.

In the example of acute stroke, we found the condition flagged as non-compliant in 21 of every 1,000 instances, telling us those charts should be pulled specifically and assessed for overcoding. There is also a list of other high-risk HCCs, like heart attack, where you wouldn't expect to see the condition without an inpatient stay. While some codes with very high hit rates, like acute stroke, may be filtered at the point of submission, others need to be checked and validated by coders. To enable this, analytics tools need to include charts containing suspected non-compliant codes when building chase lists.
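A chase-list builder that folds in compliance flags might look like the sketch below. This is a hypothetical structure, not Episource's Compliance Pack: the fields `open_gaps` and `suspect_codes` and the prioritization rule are assumptions for illustration.

```python
def build_chase_list(members):
    """members: dicts with 'member_id', 'open_gaps' (count of suspected
    undercoded conditions), and 'suspect_codes' (codes flagged by
    compliance rules). Charts needing compliance review sort first."""
    chases = []
    for m in members:
        reasons = []
        if m["suspect_codes"]:
            reasons.append("compliance_review")  # possible overcoding to validate
        if m["open_gaps"] > 0:
            reasons.append("gap_closure")        # traditional upside chase
        if reasons:
            chases.append({"member_id": m["member_id"], "reasons": reasons})
    # Stable sort: compliance reviews float to the top of the retrieval queue.
    chases.sort(key=lambda c: "compliance_review" not in c["reasons"])
    return chases
```

The key design point is that a single retrieval queue serves both purposes, so a chart flagged for overcoding is pulled even if the member has no documentation gaps.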

Retrospective Reviews: Using Tools to “Look Both Ways”

When we talk about analytics, we’re generally referring to upside analytics. How is member RAF trending? Where are there documentation gaps for chronic conditions? Where do we see clinical suspects? Which charts do we need to chase? These are all important questions, but that same analytics process can and should be done for compliance as well — and this means ‘looking both ways’ to remove unsubstantiated codes.

Traditionally, to do a legitimate two-way review, you had to code everything in the chart, which was much slower and more expensive than year-wise capture. The problem with year-wise capture is that if there are multiple instances of overcoding in a single PDF or for a single member, it may flag one of those codes as a delete, but not necessarily every single one. Encounter-wise coding, on the other hand, makes two-way reviews slow and far more expensive because they take a great deal more effort.

NLP has sped up coding, but often by taking an upside-only view (i.e., having coders review only new codes). This does not catch any deletes, nor does it solve the problem of multiple instances of one code where only some have been deleted. In this new landscape, NLP can and should be used to look both ways, allowing coders to see which claims codes appear in the chart and assess them for overcoding, showing the results of work both upside and downside.

The way we solve this issue in our NLP SaaS coding tool at Episource is by showing the coder which codes are in claims and which codes are not.

This allows coders to:

  1. Elect to skip over codes that are in claims and substantiated in the chart
  2. Capture codes that are not in claims that are substantiated in the chart
  3. Delete codes from claims that are in the chart but are not substantiated

Retrospective Reviews: Add/Delete Outcome Results

Episource does two-way reviews across many different plans and tends to find more adds than deletes across the board. Even so, deletes amount to 2.5 to 3 codes for every 100 members. When you multiply that across a large program, you can see there is a lot of inaccurate provider coding that needs to be removed, and OIG's analytics can find it just as quickly.

So, although we're seeing a fairly large net overall increase in HCC capture, and the net RAF impact is positive, those deletes are frequent enough that two-way review has to be part of a compliant risk adjustment program.

Closing the Loop: Improving Future Documentation

To close the loop, we need to take these findings from coding back to the provider. Many of our clients have provider education and clinical documentation improvement (CDI) programs. However, most of those are "risk adjustment 101": for example, reminding providers to write the word "morbid" in front of "obesity." What we really need to be doing is training providers on the handful of codes that shouldn't be in the chart at all.

The same data and tech processes that help shape provider education programs can also be used to identify the specific codes and providers that need the most help. It's very easy to find the frequency of deletes by provider and HCC code and deliver custom, practice-level training for provider groups. But if training programs focus only on documentation completeness, we may actually be adding to the problem, because providers may come to believe the only risk is missing codes.
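Finding delete frequency by provider and HCC is a simple aggregation. A sketch using the standard library, assuming delete events are recorded as (provider, HCC) pairs (a hypothetical data shape, not a specific system's output):

```python
from collections import Counter

def delete_hotspots(delete_events, top_n=5):
    """delete_events: iterable of (provider_id, hcc_code) tuples for
    codes removed during two-way review. Returns the most frequent
    provider/HCC pairs, i.e., where targeted CDI training pays off."""
    return Counter(delete_events).most_common(top_n)
```

A program could run this quarterly and hand each provider group a short list of the specific codes its clinicians most often document without support.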

Reducing the Risk of RADV Audit

Ensuring compliance requires a data-integrity mindset across the entire risk adjustment lifecycle, from prospective to retrospective. With this approach, the focus shifts from merely looking for gaps to submitting fully accurate documentation. To do this, we need to employ the same tools that make risk adjustment faster and better to ensure data integrity.

As OIG and CMS continue to increase their focus on MAOs, these organizations will need to shift their processes to look both ways and get ahead of compliance audits before they happen — and partnering with the right vendor can be a key part of this process.

 

For overburdened payers and providers, Episource helps close gaps in healthcare by marrying expert guidance with an end-to-end risk adjustment platform. Learn more about our solutions at Episource.com.
