As artificial intelligence (AI) continues to integrate into healthcare, federal legislators are evaluating strategies to safeguard patients while encouraging innovation. Many lawmakers in Congress are advocating for enhanced regulations to achieve this balance.
Senate Committee on Finance Chairman Ron Wyden, D-Oregon, highlighted the dual nature of AI technology during a recent legislative hearing focused on AI’s role in healthcare. He acknowledged the potential of these technologies to improve efficiency within the healthcare system but also raised concerns that biases embedded in some AI systems may discriminate against individuals based on race, gender, sexual orientation, and disability. “It is evident that insufficient measures are in place to protect patients from bias in AI,” Wyden remarked.
Wyden emphasized the responsibility of Congress to promote positive outcomes from AI while establishing guidelines for new innovations in American healthcare.
Lawmakers are currently contemplating the extent of Congress’s involvement in balancing “innovation protection with patient privacy and protection,” particularly regarding federal programs like Medicare and Medicaid.
In this context, Wyden introduced the Algorithmic Accountability Act. This proposed legislation would mandate that healthcare systems routinely evaluate whether the AI tools they develop or choose are functioning as intended and not perpetuating harmful biases.
The discourse in Congress arises amid a wave of lawsuits against major Medicare Advantage (MA) insurers, alleging that they have employed AI algorithms to deny necessary care.
For instance, two patients have filed a lawsuit against Humana, claiming that the insurer used naviHealth’s nH Predict tool to make coverage decisions for long-term care. Similar allegations have been raised against UnitedHealthcare, which, along with Humana, is one of the largest players in the MA market. NaviHealth, now a subsidiary of UnitedHealth Group’s Optum, is also facing legal challenges over nH Predict, which reportedly generates rigid and unrealistic predictions of patients’ recovery needs.
These lawsuits come in the wake of a Stat investigation published earlier this year that scrutinized how MA plans might deploy AI technologies for claim denials. In response, the Centers for Medicare & Medicaid Services (CMS) issued a memo to insurers, clarifying the guidelines for AI usage. According to CMS, health insurance companies are prohibited from using AI or algorithms as the sole basis for determining coverage or denying care to MA plan members.
Coverage decisions must be made based on the unique circumstances of each patient. CMS asserted that any algorithm relying on broader data sets instead of considering a patient’s medical history, physician recommendations, or clinical notes would not comply with the regulations.
For example, CMS explained that while algorithms could assist providers in predicting the potential duration of post-acute care services, such predictions alone could not justify terminating those services. Moreover, AI or algorithms cannot be used solely to deny inpatient admissions or downgrade a patient’s stay; each patient’s situation must be assessed against applicable coverage criteria.
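To make the distinction concrete, here is a minimal Python sketch of a workflow consistent with the memo, in which an algorithmic length-of-stay estimate is treated as advisory input rather than grounds for denial. The data model, field names, and function are hypothetical illustrations, not any insurer’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientCase:
    """The individualized facts CMS says must drive a determination."""
    medical_history: str
    physician_recommendation: str
    clinical_notes: str
    predicted_days_of_care: Optional[int] = None  # algorithmic estimate: advisory only

def coverage_determination(case: PatientCase,
                           clinician_decision: Optional[str]) -> str:
    """Sketch of the rule in the CMS memo: an algorithm's prediction may
    inform planning, but it can never be the sole basis for terminating
    services. A denial requires individualized review by a qualified
    medical professional against applicable coverage criteria."""
    if clinician_decision is None:
        # No individualized review yet, so the prediction alone cannot end care.
        return "continue coverage pending qualified medical review"
    return clinician_decision  # the reviewer's patient-specific determination

# The prediction is present but does not, by itself, terminate coverage.
case = PatientCase("hip replacement, diabetic", "continue skilled nursing care",
                   "progressing slowly", predicted_days_of_care=17)
print(coverage_determination(case, clinician_decision=None))
```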
CMS expressed its concerns that “algorithms and many emerging AI technologies could exacerbate existing discrimination and bias.” Therefore, MA organizations must ensure that any algorithm or software tool implemented does not perpetuate or introduce new biases.
During the Senate Finance Committee hearing, Michelle Mello, Ph.D., a health policy and law professor at Stanford University, expressed her support for CMS’s recent memo addressing AI use in MA plans and its intention to enhance audits in 2024. However, she also stressed the need for further clarification regarding the proper and improper use of algorithms. Mello pointed out that in past initiatives related to electronic health records, CMS provided clear standards for what constituted “Meaningful Use.”
CMS has also specified in its final 2024 MA rule that care and coverage determinations should be reviewed by a qualified medical professional. Nevertheless, Mello noted that ambiguity still exists around the use of AI in healthcare decisions.
“The critical question is what constitutes meaningful human review,” she stated. “There was a situation involving another insurer that utilized a non-AI-based algorithm to deny care. Even with human review, the process averaged only 1.2 seconds, which raises concerns about its effectiveness.”
Mello further highlighted that the current CMS Final Rule lacks the specificity needed to guide plans on what meaningful human review entails. To encourage meaningful reviews, she urged that CMS audits focus closely on instances where algorithms were involved in care denials, ensuring transparency and analyzing patterns of denials and reversals.
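To illustrate what such an audit might look for, the short Python sketch below flags algorithm-assisted denials whose human review was implausibly brief and summarizes reversal patterns. The record fields, sample values, and one-minute threshold are assumptions chosen for illustration; they are not CMS audit criteria.

```python
from statistics import mean

# Hypothetical audit records: one dict per coverage determination.
records = [
    {"algorithm_used": True,  "denied": True,  "review_seconds": 1.2, "reversed_on_appeal": True},
    {"algorithm_used": True,  "denied": True,  "review_seconds": 540, "reversed_on_appeal": False},
    {"algorithm_used": False, "denied": False, "review_seconds": 300, "reversed_on_appeal": False},
]

MIN_PLAUSIBLE_REVIEW_SECONDS = 60  # assumed threshold for a "meaningful" human review

def audit(records):
    """Summarize algorithm-assisted denials: how fast they were reviewed
    and how often they were later reversed on appeal."""
    algo_denials = [r for r in records if r["algorithm_used"] and r["denied"]]
    flagged = [r for r in algo_denials
               if r["review_seconds"] < MIN_PLAUSIBLE_REVIEW_SECONDS]
    if not algo_denials:
        return {"algorithm_assisted_denials": 0}
    return {
        "algorithm_assisted_denials": len(algo_denials),
        "implausibly_fast_reviews": len(flagged),
        "mean_review_seconds": mean(r["review_seconds"] for r in algo_denials),
        "reversal_rate_on_appeal": sum(r["reversed_on_appeal"]
                                       for r in algo_denials) / len(algo_denials),
    }

print(audit(records))
```

A high reversal rate combined with sub-minute review times would be exactly the pattern of denials and reversals Mello suggests auditors scrutinize.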
In her testimony, Mello encouraged federal lawmakers to assist healthcare organizations and insurers navigating the “uncharted territory of AI tools” by implementing protective measures while allowing regulatory frameworks to adapt alongside technological advancements. She proposed that Congress fund AI assurance labs to develop consensus-based standards, ensuring that healthcare organizations with fewer resources can access the expertise and infrastructure necessary to evaluate AI tools effectively.
Mark Sendak, M.D., a leader in population health and data science at the Duke Institute for Health Innovation, echoed Mello’s sentiments. He urged lawmakers to facilitate investments in technical support, infrastructure, and training to move AI from theoretical applications to practical implementations in healthcare settings.
As a co-leader of the Health AI Partnership, Sendak and his colleagues are working to provide guidelines for high-resource organizations that are rapidly adopting AI technologies.
“Implementing these guidelines could be a requirement for hospitals wishing to participate in Medicare programs. However, these measures primarily benefit organizations that are already advanced in their AI adoption,” he noted. “Most healthcare organizations in the U.S. require a pathway to start adopting AI, as they often lack the necessary resources, personnel, or technical infrastructure.”
He pointed out that a successful model for funding these efforts already exists, referencing how Congress enabled widespread adoption of electronic health record systems 15 years ago through funding for technical assistance and infrastructure investments.
Dr. Ziad Obermeyer, a professor and researcher at the University of California, Berkeley, and an emergency physician, said he believes AI can enhance care while reducing costs. He and his team have developed an AI system that predicts a patient’s risk of sudden cardiac death based solely on an electrocardiogram waveform, and he asserted that it significantly outperforms existing prediction technologies.
“This advancement means we could optimize the placement of defibrillators,” he explained. “We can ensure that high-risk patients receive the devices while reallocating them from low-risk individuals who wouldn’t benefit. This intersection of saving lives and reducing costs is a rare opportunity in healthcare, making AI a transformative force in our system.”
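Models of the kind Obermeyer describes typically learn directly from raw waveforms. The PyTorch sketch below shows the general shape of such a system: a small one-dimensional convolutional network mapping a single-lead ECG trace to a risk score. The architecture, dimensions, and sampling rate are illustrative assumptions, not the Berkeley team’s actual model.

```python
import torch
import torch.nn as nn

class ECGRiskModel(nn.Module):
    """Toy 1D CNN mapping a raw ECG waveform to a risk probability.
    Architecture is illustrative only, not the system discussed in testimony."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time into a fixed-size summary
        )
        self.head = nn.Linear(32, 1)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, channels, samples), e.g. a 10-second single-lead trace
        x = self.features(waveform).squeeze(-1)
        return torch.sigmoid(self.head(x))  # predicted risk in [0, 1]

# Example: score a batch of two 10-second ECGs sampled at 500 Hz.
model = ECGRiskModel()
risk = model(torch.randn(2, 1, 5000))
print(risk.shape)  # torch.Size([2, 1])
```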
Despite his optimism, Obermeyer expressed concern that AI might do more harm than good if not approached carefully. He has researched AI in healthcare extensively and uncovered significant racial biases in algorithms used to inform care decisions. These biased decisions could affect as many as 150 million patients annually, he warned.
Obermeyer urged regulators and lawmakers to demand greater transparency regarding the outputs of AI algorithms. “When an algorithm forecasts healthcare costs, the developer should not be allowed to claim it predicts ‘health risks’ or ‘health needs.’ We also need to ensure accountability by measuring performance, particularly for protected groups, using independent datasets that are diverse enough to reflect the wider American population,” he stressed.
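The accountability check Obermeyer describes can be made concrete in a few lines of Python: score a model on an independent, demographically diverse holdout set and compare its performance across protected groups. The data here is synthetic, and the column names and group labels are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def performance_by_group(y_true, y_score, group):
    """Report discrimination (AUC) separately for each protected group,
    computed on an independent holdout set, as Obermeyer recommends."""
    return {g: roc_auc_score(y_true[group == g], y_score[group == g])
            for g in np.unique(group)}

# Synthetic holdout data: true outcomes, model scores, group labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = y_true * 0.3 + rng.random(1000) * 0.7  # scores loosely track outcomes
group = rng.choice(["group_a", "group_b"], size=1000)

for g, auc in performance_by_group(y_true, y_score, group).items():
    print(f"{g}: AUC = {auc:.3f}")  # large gaps between groups warrant scrutiny
```

The same per-group comparison can be applied to calibration or error rates; the point is that performance is measured against outcomes the developer claims to predict, not proxies like cost.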
He proposed that government programs should be prepared to finance AI solutions that demonstrate value, suggesting that federal programs leverage their purchasing power to define clear criteria for what they will reimburse and how much.
The conversation around AI in healthcare is heating up on Capitol Hill, with lawmakers striving to navigate the complex intersection of innovation and patient protection. As federal agencies like CMS move to establish clearer guidelines for AI’s use in healthcare settings, there is a collective call for transparency, accountability, and responsible deployment of these powerful technologies. By fostering an environment where AI can thrive while ensuring safeguards against bias and discrimination, Congress can help shape a more equitable and efficient healthcare system for all Americans.