Unlike many countries that are implementing cohesive regulations for artificial intelligence (AI), the United States has chosen a more disjointed approach. A variety of laws have surfaced across state, federal, and local levels, often targeting AI applications in specific industries such as human resources and insurance. Recently, there has been a marked shift towards establishing regulations for AI in the healthcare sector, with an emphasis on creating guidelines that address specific use cases.
Historically, healthcare has been a heavily regulated industry. However, authorities have recognized that AI presents unique challenges that necessitate fresh considerations and policies.
Proposed AI Regulations in Healthcare
Most of the legislative efforts targeting AI in healthcare in the U.S. are currently in the proposal phase at both the federal and state levels. Several proposed laws have the potential to impact national healthcare organizations significantly. Moreover, when AI-related laws fail to pass, they often reappear in modified forms. Thus, it’s highly probable that some elements of the following proposals will eventually become statutory requirements.
Federal Proposals
Currently, there are three key federal proposals aimed at regulating AI in healthcare:
- The Better Mental Health Care for Americans Act (S293): Introduced to the Senate on March 22, 2023, this bill seeks to modify payment programs under Medicare, Medicaid, and the Children’s Health Insurance Program. It mandates that Medicare Advantage (MA) organizations using nonquantitative treatment limitations for mental health or substance use disorder benefits conduct a comparative analysis of those limitations, including the role of AI and machine learning in clinical decision-making. It would also add a clause to the Social Security Act requiring documentation of coverage denials that involve AI.
- The Health Technology Act of 2023 (H.R.206): Introduced on January 9, 2023, this legislation aims to recognize AI and machine learning technologies as potential prescribers of medication. Under this act, such technologies may be classified as prescribing practitioners if they comply with state laws and federal regulations for medical devices.
- The Pandemic and All-Hazards Preparedness and Response Act (S2333): Introduced in July 2023, this act seeks to reauthorize various programs under the Public Health Service Act. It requires the Secretary of Health and Human Services to conduct a study within 45 days of enactment to identify vulnerabilities related to AI use, including the risks posed by large language models. A report detailing findings and proposed actions must be submitted within two years.
State-Level Proposals
At the state level, legislative efforts have so far been concentrated in states east of the Mississippi:
- Massachusetts has introduced a bill known as “An Act Regulating the Use of Artificial Intelligence in Providing Mental Health Services” (H1974). This legislation aims to ensure the safety and welfare of individuals seeking mental health treatment. It mandates that licensed mental health professionals using AI for treatment obtain approval from their licensing board, inform patients of AI use, and offer alternatives to human treatment. Furthermore, AI systems must prioritize patient safety and effectiveness, with ongoing monitoring by the professionals.
- Illinois has proposed the Safe Patient Limits Act (SB2795), initially introduced in 2023 and reintroduced in January 2024. This act seeks to cap the number of patients assigned to a registered nurse in certain care settings while also restricting AI usage. It prohibits healthcare facilities from allowing AI recommendations to override or replace a nurse's independent clinical judgment.
- Georgia introduced legislation (HB887) in January 2024 that amends existing laws to restrict AI-driven decision-making in insurance and healthcare. It mandates that insurance coverage decisions cannot be made solely based on AI or automated tools, requiring meaningful human review and the ability to override decisions as necessary. Similar requirements apply to healthcare decisions and public assistance cases.
Existing Healthtech Regulations
While many AI regulations are still pending, Virginia has enacted a law (HB2154) that modifies the Code of Virginia concerning hospitals and nursing facilities, effective March 18, 2021. This law mandates that these institutions establish policies governing the permissible access to and use of intelligent personal assistants that employ AI for basic tasks.
WHO Guidelines on Large Multi-Modal Models in Healthcare
In addition to U.S. regulations, the World Health Organization (WHO) released guidelines on the ethical and governance aspects of large multi-modal models (LMMs) on January 19, 2024. These guidelines address generative AI models that can process multiple input types to produce diverse outputs. Although these models hold promise for healthcare applications, the WHO has identified several associated risks, including inaccuracies, bias, privacy issues, diminished patient-provider interaction, and accountability challenges.
The WHO emphasizes the need to mitigate these risks through specific actions across three phases: development, provision, and deployment. Suggested actions include developer certification, regulatory assessments for healthcare use, and independent audits to ensure the responsible application of LMMs.
The regulatory landscape for artificial intelligence in healthcare is rapidly evolving, reflecting the need for a careful balance between innovation and patient safety. As the U.S. takes a fragmented approach to AI legislation, it’s crucial for stakeholders—including healthcare providers, technology developers, and regulatory bodies—to stay informed and engaged with the ongoing discussions surrounding these laws. The proposed regulations, whether at the federal or state level, signal a recognition of the unique challenges AI poses in clinical settings and the necessity for specific guidelines tailored to its application.
As AI continues to revolutionize healthcare, it offers immense potential to enhance patient outcomes, streamline operations, and drive efficiency. However, harnessing this potential requires a commitment to compliance and ethical practices. By actively participating in the shaping of AI regulations and prioritizing transparency and accountability, healthcare professionals can not only ensure compliance but also foster trust among patients and the public.
The regulatory environment will likely continue to shift in response to technological advances and societal needs, so organizations should remain vigilant and adaptive. For those navigating this dynamic landscape, understanding and implementing these regulations will be vital to integrating AI into healthcare practice successfully. Embracing that work not only satisfies legal requirements but also paves the way for a more innovative, equitable, and patient-centered healthcare system. By keeping abreast of the latest developments and advocating for responsible AI use, stakeholders can collectively shape a future in which technology enhances the healthcare experience while safeguarding the rights and well-being of all individuals.