New York State is now the first U.S. state to require employers to disclose whether mass layoffs are due to AI. As of March 2025, a new checkbox has been added to the state’s WARN (Worker Adjustment and Retraining Notification) form. Employers planning mass layoffs must now indicate if the reductions are due to “technological innovation or automation,” and specify whether AI was involved.
Though seemingly minor, this checkbox could be the first step toward much-needed transparency in how automation, and AI in particular, is reshaping the labor market. For the healthcare industry, where automation is accelerating even as workforce shortages deepen, the implications are profound.
Why This Policy Matters
The new checkbox applies to businesses with at least 50 employees that initiate layoffs affecting 25 or more workers, or at least one-third of the workforce. While it doesn't yet carry penalties for non-disclosure or false reporting, it sends a clear signal: regulators are watching how AI affects jobs, and they want data.
As Governor Kathy Hochul noted in her 2024 State of the State address, “we can’t manage what we don’t measure.” In that spirit, this policy aims to help workers, lawmakers, and the public better understand who is being displaced and why.
Healthcare: A Sector in Transition
AI is rapidly transforming healthcare workflows, from diagnostics and documentation to scheduling, supply chain, and patient communication. And yet, unlike in tech or manufacturing, where AI-driven layoffs have made headlines, healthcare’s transition has been quieter, perhaps too quiet.
According to a 2024 McKinsey report:
- Up to 25% of healthcare tasks could be automated by 2030.
- 42% of U.S. hospital administrators report actively exploring AI solutions to increase efficiency.
In some systems, automation has already replaced staff in areas like billing, triage, and intake, but employers have rarely acknowledged that AI was the catalyst. New York's policy introduces a mechanism to make those transformations visible.
What It Means for Healthcare Leaders
This development comes at a pivotal time. Many health systems are caught between conflicting forces:
- A growing mandate to cut costs and optimize staffing
- Escalating burnout and retention issues
- A surge in AI tools promising speed and accuracy
New York’s checkbox could evolve into a model for AI labor transparency nationwide. Healthcare leaders may want to get ahead by:
- Tracking and documenting the impact of AI on staffing, not just tasks
- Creating internal protocols to flag when automation replaces (vs. supports) roles
- Collaborating with workforce boards to ensure retraining or reassignment pathways exist
Who Gets Left Behind?
One critical concern is equity. If AI deployment leads to job losses, especially among frontline, administrative, or lower-wage roles, there’s a risk of widening existing healthcare inequities.
Some questions we must begin asking:
- Will these layoffs disproportionately affect women and workers of color?
- Will hospitals report automation honestly if there are no penalties?
- Can we create pathways to retraining that aren’t just symbolic?
Transparency, starting with a checkbox, helps us begin to answer those questions.
Broader Implications
New York has already led in regulating AI in hiring: New York City's 2023 law (Local Law 144) requires bias audits for automated employment decision tools. Now the state is expanding that leadership to labor transparency.
Other states are watching. California and Illinois are reportedly exploring similar disclosures. And within healthcare, the U.S. Department of Health and Human Services (HHS) could eventually consider guidance or fund pilots to assess how automation is transforming the care workforce.
This move could influence future federal rulemaking by the Department of Labor or even prompt updates to the federal WARN Act itself. Labor unions, professional associations, and AI ethics watchdogs may also seize this moment to demand clearer protections and retraining mandates for affected workers.
Moreover, this transparency tool sets a precedent for other sectors such as pharmaceuticals, insurance, and elder care, where AI adoption is accelerating but oversight lags behind. Data gathered through these disclosures can serve as an early warning system, helping policymakers spot systemic disruptions before they hit crisis levels.
Whether this checkbox becomes a national standard remains to be seen, but it has already shifted the conversation. It reframes AI not only as a tool for innovation but as a force shaping employment, equity, and ethical responsibility.
New York’s new WARN form doesn’t prevent AI layoffs. It doesn’t penalize them. But it asks a vital question: Should the public know when AI changes who works and who doesn’t?
For healthcare, where the stakes include not just livelihoods but lives, transparency isn’t just good policy. It’s a moral imperative.