A live conversation on the Stanford Legal podcast between Michelle Mello, a leading health policy expert, and Neel Guha, a JD/PhD candidate in Computer Science, highlights the profound potential of AI in healthcare while underscoring its legal, regulatory, and safety challenges. The discussion ranges from diagnostic advances to workflow streamlining and addresses the pressing need for evolving laws and governance frameworks that balance innovation with patient safety.
Moderated by Stanford Legal co-hosts Richard Thompson Ford and Pamela Karlan, the dialogue captures both the promise and the complexity of integrating AI into the healthcare system.
AI applications in healthcare are diverse, ranging from diagnostic tools to administrative assistance. AI is widely used in radiology, for example, to enhance imaging accuracy: in the United States, nearly 90% of mammograms are now read with AI assistance, and these assisted readings have consistently proven more accurate than radiologists working alone.
AI is also being integrated into surgical devices, giving surgeons advanced visualization tools that can identify and anticipate anatomical features beyond what the human eye can discern. Some of these systems are embedded in surgical robots, offering enhanced precision under human supervision. Beyond diagnostics, AI plays a significant role in reducing the administrative burdens on clinicians. Generative AI systems are being used to draft medical notes, respond to patient emails, and streamline billing processes, tasks that traditionally consume hours of a physician’s time and contribute to a physician burnout rate that now exceeds 40%.
Predictive analytics is another area where AI is making an impact. These systems move beyond traditional rule-based algorithms to analyze vast datasets, uncovering complex patterns that improve patient classification. AI can predict outcomes for surgical procedures or assess the likelihood of disease more effectively than previous methods. AI has also begun to transform health insurance processes.
Prior authorization—a notoriously time-consuming task for both physicians and patients—is increasingly managed through algorithms. Hospitals and insurers now engage in what could be described as “bot battles,” where AI systems on both sides expedite authorization processes, though human intervention is still necessary for disputes.
However, these advancements are not without significant risks. One major concern is automation bias, in which clinicians over-rely on AI systems without critically reviewing their outputs. Precisely because these systems are right most of the time, vigilance tends to wane, and that is when harmful errors slip through. An AI-generated email draft meant for a patient, for example, may go out unedited by a busy physician, resulting in misinformation or miscommunication. This underscores a broader challenge: balancing the efficiency gains of AI against the need for human oversight.
As one of the guests puts it: “You can’t both have a time-saving piece of generative AI and a piece of AI where the human actually spends time reviewing it. So the danger is what we call automation bias: that over time, people come either to trust the thing, or just to not care enough about whether it’s right. Because it’s right most of the time. And without that human vigilance, or another system in place to catch the errors of the first system, the fear is that there are going to be errors that cause harm.”
Legal liability in AI-assisted healthcare is another critical area of concern. When AI systems cause harm, determining responsibility can be complex. Hospitals may face negligence claims if their AI systems fail to manage patient care effectively. Physicians, on the other hand, may encounter malpractice suits for relying on flawed AI recommendations.
Additionally, developers of AI tools can be subject to product liability claims if their systems are poorly designed or fail to disclose risks adequately. The ways AI makes mistakes—often systematic and predictable—are vastly different from human errors, which are typically random and influenced by external factors. In the worst-case scenarios, AI and human errors could exacerbate one another, resulting in significant harm.
Discrimination and transparency in AI decision-making add another layer of complexity. Modern AI systems, built on machine learning rather than traditional programming, are often opaque. This lack of transparency makes it difficult to trace how decisions are made, particularly when biases are involved. For example, AI tools used in healthcare might unintentionally discriminate against certain patient populations due to flawed training data, a problem that requires urgent attention.
Regulatory frameworks for AI in healthcare are also lagging behind the technology. Although the FDA regulates some AI systems as medical devices, the current framework, rooted in legislation from 1976, is ill-equipped to handle the rapid evolution of AI. Many AI tools are deployed without rigorous clinical testing, relying instead on observational studies labeled as “quality improvement.” This lack of pre-market testing is particularly concerning for vulnerable populations such as children, for whom tools trained on adult data are often applied because pediatric data are scarce.
As one of the guests explains: “So, in the best case, a human and AI system are able to counteract each other. But in the worst case, they might actually feed off each other’s worst impulses. A human who’s quite good at catching AI mistakes, combined with an AI system, might perform substantially better than a human who is actually ignoring the AI system when it’s correct, and deferring to the AI system when it’s wrong.”
Addressing these challenges requires proactive legal and policy changes. Hospitals must take greater responsibility for vetting AI tools and for negotiating favorable terms in licensing agreements with developers, who often write liability disclaimers into their contracts. Shifting more legal responsibility onto hospitals could give them a stronger incentive to build robust governance processes. There is also a pressing need to hold AI systems to the same clinical testing standards as drugs and medical devices to ensure their safety and efficacy.
Despite these hurdles, there is reason for optimism. AI holds immense potential to address longstanding issues in healthcare, such as missed or delayed diagnoses, which have profound impacts on patient outcomes. While the road ahead requires significant effort in regulation, liability allocation, and ethical oversight, Mello and Guha believe that the benefits of AI in healthcare make these challenges worth tackling.
This discussion serves as both a call to action and a reminder of the transformative power of AI when implemented responsibly. As AI continues to evolve, its integration into healthcare offers an unprecedented opportunity to enhance patient care, reduce inefficiencies, and improve outcomes—provided that the accompanying legal and regulatory frameworks keep pace with innovation.