What Medicare’s AI Pilot Means for Risk Adjustment, and Where ForeSee Fits In

Reduce clerical bottlenecks for coders and providers with AI - Request a demo of ForeSee Medical

Beginning this month, Medicare will test a new program that could significantly change how care is approved for millions of seniors—and how artificial intelligence in healthcare is governed inside federal programs.

The Wasteful and Inappropriate Services Reduction (WISeR) Model, launched by the Centers for Medicare & Medicaid Services (CMS), allows private technology vendors to use AI tools to review and approve—or deny—certain Medicare services before they are delivered. The pilot will operate in six states and is scheduled to run through 2031.

CMS describes WISeR as a cost-control and program integrity initiative. But the model raises broader questions about risk adjustment accuracy, regulatory compliance, and AI oversight across the Medicare ecosystem.

A new form of utilization management in traditional Medicare

For decades, traditional Medicare largely avoided prior authorization, relying instead on post-payment audits and clinical judgment. WISeR marks a shift by introducing front-end utilization controls, a practice more commonly associated with Medicare Advantage and commercial insurance.

Under the model, vendors are paid based on how much Medicare spending they reduce—including savings from denying services deemed unnecessary or non-covered. While CMS says final decisions will be made by licensed clinicians rather than by algorithms alone, AI-driven screening plays a central role in flagging services for review.

That distinction matters. Prior authorization has long been linked to delayed care, inappropriate denials, and added administrative burden. WISeR extends those dynamics to a population that has historically been shielded from them.

Why this matters for risk adjustment and documentation integrity

At first glance, WISeR appears focused on utilization management rather than risk adjustment. In reality, the two are closely connected.

Risk adjustment depends on accurate, complete, and timely clinical documentation. When care is delayed or denied:

  • Diagnoses may never be documented

  • Chronic conditions may go untreated or under-treated

  • Clinical encounters that support HCC capture may not occur

Over time, these gaps can distort RAF scores, understate patient acuity, and introduce downstream compliance risk—especially in value-based and shared-risk arrangements.
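To make the RAF impact concrete, here is a minimal sketch of the additive arithmetic behind a CMS-HCC-style risk score. The coefficients and condition labels below are hypothetical, chosen for illustration only; real values come from the CMS-HCC model factors CMS publishes each year.

```python
# Illustrative sketch: how a missed clinical encounter can lower a RAF score.
# All coefficients below are made-up placeholders, NOT actual CMS-HCC values.

DEMO_COEFFICIENTS = {
    "female_70_74": 0.395,  # hypothetical demographic factor
}

HCC_COEFFICIENTS = {
    "HCC_diabetes_with_complications": 0.302,   # hypothetical coefficient
    "HCC_congestive_heart_failure": 0.331,      # hypothetical coefficient
}

def raf_score(demo_key, captured_hccs):
    """Sum the demographic factor plus the coefficient of each captured HCC."""
    score = DEMO_COEFFICIENTS[demo_key]
    score += sum(HCC_COEFFICIENTS[hcc] for hcc in captured_hccs)
    return round(score, 3)

# Same patient, two documentation outcomes:
fully_documented = raf_score(
    "female_70_74",
    ["HCC_diabetes_with_complications", "HCC_congestive_heart_failure"],
)
chf_encounter_denied = raf_score(
    "female_70_74",
    ["HCC_diabetes_with_complications"],  # CHF never documented
)

print(fully_documented)       # 1.028
print(chf_encounter_denied)   # 0.697
```

Because the score is additive, a single undocumented condition removes its entire coefficient from the patient's RAF score, understating acuity for as long as the gap persists.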

There is also a structural tension. AI systems trained to identify “low-value” services may not fully account for patient complexity, comorbidities, or longitudinal risk profiles. Without strong governance, those tools risk oversimplifying clinical reality—precisely the opposite of what risk adjustment is designed to reflect.

Compliance risk moves earlier in the workflow

Traditionally, Medicare compliance has focused on retrospective review: audits, RADV, medical necessity validation, and payment integrity checks after care was delivered.

WISeR shifts that risk upstream.

The question is no longer just whether care was documented and coded correctly—but whether it was allowed to happen at all.

This creates new compliance challenges:

  • How AI models are trained, validated, and updated

  • How Medicare coverage rules are translated into algorithmic logic

  • How denials are monitored for bias, accuracy, and consistency

  • What safeguards exist to prevent profit-driven decision-making

While CMS has said vendors will be evaluated on accuracy and turnaround time, details on transparency and auditability remain limited.

AI oversight becomes the real test

WISeR is not just a utilization experiment—it is an AI governance experiment inside one of the largest healthcare programs in the world.

As Medicare increasingly relies on automation, the success of WISeR will hinge on:

  • Clear clinical accountability

  • Transparent decision logic

  • Strong appeal protections

  • Continuous monitoring for inappropriate denials

Without those controls, AI risks becoming a blunt cost-cutting tool rather than a precision instrument for improving care quality and program integrity.

Where ForeSee Medical fits in

As programs like WISeR push AI earlier into Medicare decision-making, one thing becomes clear: not all healthcare AI is created equal.

While utilization-focused AI is designed to filter services out, ForeSee Medical is built to protect documentation integrity, compliant risk capture, and clinical accuracy. ForeSee’s AI analyzes EHR data and clinical notes in real time to surface supported diagnoses, close documentation gaps, and strengthen HCC capture—before care decisions are delayed, questioned, or denied.

In an environment of increasing AI oversight and compliance scrutiny, organizations that invest in transparent, clinician-aligned, compliance-first AI will be best positioned to succeed.

That’s where ForeSee Medical delivers its value.

See how it works

Blog by: The ForeSee Medical Team