Enacted
In December 2021, New York City passed Local Law 144, the first law in the United States requiring employers to conduct bias audits of AI-enabled tools used for employment decisions. The law also imposes notice and reporting obligations.
Specifically, employers who utilize automated employment decision tools (AEDTs) must:
Subject AEDTs to a bias audit, conducted by an independent auditor, no more than one year prior to their use;
Ensure that the date of the most recent bias audit and a “summary of the results”, along with the distribution date of the AEDT, are publicly available on the career or jobs section of the employer’s or employment agency’s website;
Provide each resident of NYC who has applied for a position (internal or external) with a notice that discloses that their application will be subject to an automated tool, identifies the specific job qualifications and characteristics that the tool will use in making its assessment, and informs candidates of their right to request an alternative selection process or accommodation (the notice shall be issued on an individual basis at least 10 business days before the use of a tool); and
Allow candidates or employees to request alternative evaluation processes as an accommodation.
While enforcement of the law has been delayed multiple times pending finalization of the law’s implementing rules, on April 6, 2023 the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law is now in effect, and enforcement began on July 5, 2023.
Failed
Introduced on November 3, 2023, S7735 (Assembly version A7906) provides that it shall be unlawful for a landlord to implement or use an automated decision tool unless the landlord: (1) no less than annually, conducts a disparate impact analysis to assess the actual impact of the tool and publicly files the assessment; and (2) notifies all applicants that an automated decision tool will be used and provides them with certain disclosures related to the tool. If passed, the law would take effect immediately.
Failed
Introduced on July 7, 2023, S7592 (Assembly version A7904) would amend election law to require that any political communication that uses an image or video footage generated in whole or in part with artificial intelligence disclose that artificial intelligence was used in the communication.
Failed
Introduced on October 13, 2023, A8129 (Senate version S8209) would create the New York Artificial Intelligence Bill of Rights. Where a New York resident is affected by any system making decisions without human intervention, the AI Bill of Rights would afford them the following rights and protections: (i) the right to safe and effective systems; (ii) protections against algorithmic discrimination; (iii) protections against abusive data practices; (iv) the right to have agency over one’s data; (v) the right to know when an automated system is being used; (vi) the right to understand how and why an automated system contributed to outcomes that affect them; (vii) the right to opt out of an automated system; and (viii) the right to work with a human in the place of an automated system.
Failed
Introduced on September 29, 2023, A8098 (Senate version S7922) would require publishers of books created wholly or partially with the use of generative artificial intelligence to disclose that use before the completion of a sale. The requirement would apply to all printed and digital books consisting of text, pictures, audio, puzzles, games, or any combination thereof.
Failed
Introduced on October 16, 2023, A8158 (Senate version S7847) would require that every newspaper, magazine, or other publication printed or electronically published in the state that uses generative artificial intelligence or other information communication technology identify that certain parts of the newspaper, magazine, or publication were composed through the use of artificial intelligence or other information communication technology.
Failed
Introduced on January 12, 2024, S8214 would require the registration with the Department of State of certain companies (i) whose primary business purpose is related to artificial intelligence, as evidenced by a North American Industry Classification System (NAICS) code of 541512, 334220, or 511210, and (ii) that reside in New York or sell their products or services in New York. The registration fee is $200. Failure to register could result in a fine of up to ten thousand dollars, and companies that knowingly fail to register could be barred from operating or selling their AI products or services in the state for up to ten years.
Failed
Introduced on October 27, 2023, A8195, the Advanced Artificial Intelligence Licensing Act, would require the registration and licensing of high-risk advanced artificial intelligence systems, establish an advanced artificial intelligence ethical code of conduct, and prohibit the development and operation of certain artificial intelligence systems.
Failed
Introduced on January 12, 2024, S8206 (Assembly version A8105) would require that every operator of a generative or surveillance advanced artificial intelligence system that is accessible to residents of the state require a user to create an account prior to utilizing the service. Before each user creates an account, the operator must present the user with a conspicuous digital or physical document that the user must affirm under penalty of perjury prior to the creation or continued use of the account. The document must state the following:
“I, ________ RESIDING AT ________, DO AFFIRM UNDER PENALTY OF PERJURY THAT I HAVE NOT USED, AM NOT USING, DO NOT INTEND TO USE, AND WILL NOT USE THE SERVICES PROVIDED BY THIS ADVANCED ARTIFICIAL INTELLIGENCE SYSTEM IN A MANNER THAT VIOLATED OR VIOLATES ANY OF THE FOLLOWING AFFIRMATIONS:
I WILL NOT USE THE PLATFORM TO CREATE OR DISSEMINATE CONTENT THAT CAN FORESEEABLY CAUSE INJURY TO ANOTHER IN VIOLATION OF APPLICABLE LAWS;
I WILL NOT USE THE PLATFORM TO AID, ENCOURAGE, OR IN ANY WAY PROMOTE ANY FORM OF ILLEGAL ACTIVITY IN VIOLATION OF APPLICABLE LAWS;
I WILL NOT USE THE PLATFORM TO DISSEMINATE CONTENT THAT IS DEFAMATORY, OFFENSIVE, HARASSING, VIOLENT, DISCRIMINATORY, OR OTHERWISE HARMFUL IN VIOLATION OF APPLICABLE LAWS;
I WILL NOT USE THE PLATFORM TO CREATE AND DISSEMINATE CONTENT RELATED TO AN INDIVIDUAL, GROUP OF INDIVIDUALS, ORGANIZATION, OR CURRENT, PAST, OR FUTURE EVENTS THAT ARE OF THE PUBLIC INTEREST WHICH I KNOW TO BE FALSE AND WHICH I INTEND TO USE FOR THE PURPOSE OF MISLEADING THE PUBLIC OR CAUSING PANIC.”
Proposed
Introduced on August 4, 2023, S7623 (reprinted as S7623C on May 31, 2024) (Assembly version A9315) would impose statewide requirements regulating tools that incorporate artificial intelligence to assist in employee monitoring and the employment decision-making process. In particular, the bill (1) defines a narrow set of allowable purposes for the use of electronic monitoring tools (EMTs), (2) requires that the EMT be “strictly necessary” and the “least invasive means” of accomplishing those goals, and (3) requires that the EMT collect as little data as possible on as few employees as possible to accomplish the goal. The bill also requires that employers exercise “meaningful human oversight” of the decisions of automated tools, conduct and publicly post the results of an independent bias audit, and notify candidates that a tool is in use.
Failed
Introduced on January 4, 2023, SB 365, the New York Privacy Act, would be the state’s first comprehensive privacy law. The law would require companies to disclose their use of automated decision-making that could have a “materially detrimental effect” on consumers, such as a denial of financial services, housing, public accommodation, health care services, insurance, or access to basic necessities, or that could produce legal or similarly significant effects. Companies would have to provide a mechanism for a consumer to formally contest a negative automated decision and obtain a human review of the decision, and would have to conduct an annual impact assessment of their automated decision-making practices to avoid bias, discrimination, unfairness, or inaccuracies.
The law would also permit consumers to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” Profiling is defined as “any type of automated processing performed on personal data to evaluate, analyze, or predict personal aspects” such as “economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Finally, the law would mandate that companies conduct a data protection assessment on their profiling activities, since profiling would be considered a processing activity with a heightened risk of harm to the consumer.
Failed
Introduced on January 4, 2023, A216, would require advertisements to disclose the use of synthetic media. Synthetic media is defined as “a computer-generated voice, photograph, image, or likeness created or modified through the use of artificial intelligence and intended to produce or reproduce a human voice, photograph, image, or likeness, or a video created or modified through an artificial intelligence algorithm that is created to produce or reproduce a human likeness.” Violators would be subject to a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation.
Failed
Introduced on March 7, 2023, A5309 would amend state finance law to require that, where state units purchase a product or service that is or contains an algorithmic decision system, such product or service adhere to responsible artificial intelligence standards. The bill requires the commissioner of taxation and finance to adopt regulations in support of the law.
Failed
Introduced on March 10, 2023, SB 5641A (Assembly version A567) would amend labor law to establish criteria for the use of automated employment decision tools (AEDTs). The proposed bill mirrors NYC’s Local Law 144 in many ways. In particular, employers who utilize AEDTs must: (1) obtain from the seller of the AEDT a disparate impact analysis, not less than annually; (2) ensure that the date of the most recent disparate impact analysis and a summary of the results, along with the distribution date of the AEDT, are publicly available on the employer’s or employment agency’s website prior to the implementation or use of such tool; and (3) annually provide the labor department a summary of the most recent disparate impact analysis.
Failed
Introduced on May 3, 2023 and May 10, 2023, respectively, S6638 and A7106, the Political Artificial Intelligence Disclaimer (PAID) Act, would amend election and legislative law in relation to the use and disclosure of synthetic media. The act would add a subdivision to the election law requiring that any political communication produced by synthetic media include a disclosure in its printed or digital form. The disclosure must read “This political communication was created with the assistance of artificial intelligence.” If passed, the act would take effect on January 1, 2024.
Proposed
S9609, introduced May 16, 2024, would make it unlawful for a rental property owner, or any agent or subcontractor thereof, to collect information on historical or contemporaneous prices, supply levels, or contract information, as well as renewal dates, using a system, software, or process that employs an algorithm. “Rental property owner” includes individuals as well as business entities. The rental property owner also could not exchange anything of value for the services of a coordinator, defined as any person that operates software or data analytics services.
Proposed
S9542, introduced May 16, 2024, would amend general business law by prohibiting the publication of a “digital or physical newspaper, magazine, or periodical which was wholly or partially produced or edited through the use of artificial intelligence without significant human oversight.” AI includes the “use of machine learning technology, software, automation, and algorithms to perform tasks, to make rules and/or predictions based on existing data sets and instructions.”
Proposed
S9450 (Assembly version A10103), introduced May 15, 2024, would amend general business law to require an owner, licensee, or operator of “generative artificial intelligence” to “conspicuously” display a warning on the user’s interface informing the user that the outputs may be inaccurate and/or inappropriate. An entity that fails to do so would be subject to a civil penalty of $25 per user of the system or $100,000.
Proposed
S9434 (Assembly version A9472), introduced May 15, 2024, would prohibit landlords from using an algorithmic device to set the amount of a residential tenant’s rent. “Algorithmic device” includes “a device that uses one or more algorithms to perform calculations of data, including data concerning local or statewide rent amounts being charged to tenants by landlords.” This also would include a product that incorporates an algorithmic device. A violation would result in a monetary penalty.
Proposed
S9401, introduced May 15, 2024, would amend the labor law to prohibit an employer from using or applying AI unless the employer has conducted an impact assessment of the AI’s impact and use. The assessment must be conducted at least once every two years and before any material change to the AI. The impact assessment must include: a description of the AI’s objectives; an evaluation of the ability of the AI to achieve its objectives; a summary of the underlying AI tools being used; the design and training data used to develop the AI process; the extent to which the AI requires input of sensitive and personal data, how that data is used and stored, and any control users may have over this data; an estimated number of employees who have already been displaced by AI; and an estimated number of employees expected to be displaced by AI. “Employer” includes a business that resides in New York, is not a small business, and employs more than 100 people.
Proposed
S9381 (Assembly version A10494), introduced May 14, 2024, would amend the general business law to add liability to proprietors for chatbot responses. “Proprietors” includes any person or business entity with more than 20 employees that owns, operates, or deploys a chatbot system that interacts with users. This would not include third-party developers that license their chatbot technology to the proprietor. “Chatbot” is an AI system, software program, or technological application that creates “human-like conversation and interaction through text messages, voice commands, or a combination thereof to provide information and services to users.” The proprietor is responsible for “ensuring such chatbot accurately provides information aligned with the formal policies, product details, disclosures and terms of service offered to users.” This liability cannot be waived through disclosure to users. Additionally, proprietors would have to provide “clear, conspicuous, and explicit notice to users that they are interacting” with AI, rather than a human representative.
Proposed
S8755, introduced March 7, 2024, would establish the New York artificial intelligence ethics commission, which would promulgate rules regulating AI use by business entities, among other regulations. The bill also specifies that no entity doing business in New York shall use AI systems that discriminate based on race, gender, sexuality, disability, or other protected characteristics; create or disseminate false or misleading information created by AI to deceive the public; participate in the unlawful collection, processing, or dissemination of personal information by an AI system without consent; participate in the unauthorized use or reproduction of IP through AI; fail to have safeguards to prevent harm or material loss through AI; conduct AI research that is harmful or conducted without the subjects’ consent; intentionally disrupt, damage, or subvert an AI system to undermine its integrity or performance; or participate in the unauthorized use of a person’s personal identity or data by AI to commit fraud or theft. The commission could impose penalties for any violation. This act would take effect immediately.
Proposed
S7592 (Assembly version A7904), introduced July 7, 2023, and amended February 26, 2024, would require political communications to contain disclosures regarding the use of AI to make the communication. “Political communication” includes “an image or video footage that was generated in whole or in part with the use of artificial intelligence.” Failure to comply would result in a fine equal to the amount expended on the communication.
Proposed
S6685 (Assembly version A843), introduced May 4, 2023, would prohibit motor vehicle insurers from using AI-generated algorithms to construct coverage terms, premiums and rates, and actuarial tables in ways that discriminate based on age, marital status, sex, sexual orientation, educational background or education level attained, employment status or occupation, wealth, consumer credit information, ownership of or interest in real property, and other characteristics.
Proposed
S2477 (Assembly version A5631), introduced January 20, 2023, and most recently amended on April 15, 2024, would revise the New York State Fashion Workers Act to require model management companies to obtain “clear written consent for the creation or use of a model’s digital replica, detailing the scope, purpose, rate of pay, and duration of such use.” The bill would prohibit model management companies from creating, altering, or manipulating a model’s digital replica using AI without written consent from the model. “Digital replica” is a “significant, computer-generated or artificial intelligence-enhanced representation of a model’s likeness.”
Proposed
S2277 (Assembly version A3308), introduced January 19, 2023, and recently amended, would require business entities in New York that hold the personal information of at least 500 individuals to give notice about the entity’s use of that personal information. The bill also would create anti-discrimination practices for the entity to follow regarding its use of AI.
Proposed
A10374 (Senate version S9439), introduced May 21, 2024, would amend the general business law to prohibit robots and uncrewed aircraft equipped or mounted with weapons. “Robotic device” is a “mechanical device capable of locomotion, navigation, or movement on the ground and that operates at a distance from its operator or supervisor, based on commands or in response to sensor data, artificial intelligence, or a combination.” The bill would make it unlawful for any person to use a robotic device or uncrewed aircraft to commit the crime of menacing, to criminally harass another person, or to physically restrain or attempt to restrain a human being. A knowing violation of this law would result in a civil penalty. This bill would not apply to a defense industrial company acting within its contract with the U.S. Dept. of Defense; a manufacturer or developer who modifies or operates these devices for the purpose of developing technology intended to detect the unauthorized weaponization of a robotic device or uncrewed aircraft; or government officials acting within the scope of their duties.
Proposed
A9149, introduced February 8, 2024, and referred to the Assembly Insurance Committee, would amend insurance law to require insurers to notify insureds about the use, or lack of use, of AI-based algorithms in the utilization review process. The bill would broadly apply to insurers authorized to write accident and health insurance in New York, clinical peer reviewers who participate in a utilization review process for insurers, corporations organized under New York law, and health maintenance organizations. The department must certify that the AI-based algorithms and training being used have minimized the risk of bias regarding a “covered person’s race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability” and “adhere to evidence-based clinical guidelines.” In addition, the bill would require documentation of “the utilization review of the individual clinical records or data prior to issuing an adverse determination.” A violation could result in a license suspension or revocation; refusal, for a maximum of one year, to issue a new license; a maximum fine of $5,000 per violation; or a maximum fine of $10,000 for each willful violation.
Proposed
A9103, introduced February 7, 2024, and referred to the Assembly Election Law Committee, would amend election law to include a notification requirement. The bill would require “any political communication made by phone call, email, or other message-based communication” that uses AI to create a human-like conversation to reasonably inform the person that they are communicating with AI. If passed, this bill would take effect immediately.
Proposed
A9054, introduced February 5, 2024, and referred to the Assembly Election Law Committee, would amend election law to prohibit entities from using generative AI in whole or in part to create a political communication that contains “any realistic photo, video, or audio depiction of a candidate, or person interacting with a candidate.” AI includes “any technology that engages in its own learning and decision-making to generate new data.” If passed, this bill would take effect immediately.
Proposed
Introduced February 5, 2024, and referred to the Assembly Election Law Committee, A9028 would amend election law to require, as relevant here, a disclosure on any political communication covered by the bill that was made by AI or artificial media. The bill would apply to printed or digital political communications, including “brochures, flyers, posters, mailings, electronic mailings, or internet advertising.” The disclosure must state that the communication was “created by or with the assistance of artificial intelligence” and must be readable, clear, and conspicuous. If a person intends to damage a candidate or deceive with the political communication, a violation can amount to a criminal charge.
Proposed
A8369, introduced December 13, 2023, would amend insurance law to prohibit insurers from using AI, an algorithm, or a predictive model that incorporates external consumer data and information sources in a way that would “unfairly discriminate” on the basis of “race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.” The bill includes certain requirements the insurer must follow, such as providing information to the superintendent, in order to avoid unfairly discriminating against people. “External consumer data and information source” includes data used by an insurer to establish lifestyle indicators in “marketing, underwriting, pricing, utilization management, reimbursement methodologies, and claims management” practices.
Proposed
A8195, introduced October 27, 2023, and referred to the Assembly Science and Technology Committee, would, among other things, establish an AI ethical code of conduct and require registration and licensing of “high-risk advanced artificial intelligence systems.” A “high-risk” advanced AI system is one that “possesses capabilities that can cause significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment.” This bill would apply to operators who distribute and have control over the development of a high-risk AI system.
Proposed
A8179, introduced October 27, 2023, and referred to the Ways and Means Committee, would tax certain corporations that have displaced people from their employment because of AI technologies, including machinery, AI algorithms, or computer applications. This bill would apply to corporations doing business in New York that have met specified requirements, such as having less than one million dollars but at least ten thousand dollars of receipts in New York. This act would take effect immediately upon enactment and apply to the next taxable year.
Proposed
A7859, introduced July 7, 2023, and referred to the Labor Committee, would amend labor law to require an employer or employment agency using an “automated employment decision tool to screen candidates who have applied for a position” to notify each candidate that the tool has been used to assess or evaluate the candidate, the job qualifications and characteristics the tool uses, and information about the type of data the tool collects. “Automated employment decision tool” is any computational process that uses “machine learning, statistical modeling, data analytics, or artificial intelligence” to substantially assist or replace discretionary decision-making for employment decisions. This bill would take effect on January 1 following enactment.
Proposed
Introduced February 3, 2023, and referred to the Consumer Affairs and Protection Committee, A3593 would amend general business law to require companies to follow a host of guidelines centered on protecting consumer privacy. With regard to AI, the bill would apply to a “controller,” defined as “the person who, alone or jointly with others, determines the purposes and means of the processing of personal data.” The bill defines AI as an “automated decision-making” process derived from machine learning, AI, or an automated process involving personal data that results in a decision affecting consumers. If a “controller makes an automated decision involving solely automated processing that materially contributes to a denial of financial or lending services, housing, public accommodation, insurance, health care services, or access to basic needs,” the controller would need to (1) disclose that an automated process made the decision; (2) provide an avenue for consumers to appeal the decision; and (3) explain the process to appeal the decision. In addition, a controller or processor engaged in this automated decision-making must annually conduct an “impact assessment” describing the automated decision-making process and assessing whether the process produces any discriminatory results. An independent auditor must assess the impact assessment results. This bill would take effect immediately.
Proposed
A9314, introduced February 24, 2024, and referred to the Labor Committee, would create criteria for the use of an “automated employment decision tool.” This is a system “used to filter employment candidates or prospective candidates for hire in a way that establishes a preferred candidate or candidates without relying on candidate-specific assessments by individual decision-makers.” This includes personality tests, cognitive ability tests, resume scoring systems, and other systems governed by statistical theory or specified methodologies. “Automated employment decision tool” does not include a tool that “does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons.” The bill would require employers to conduct a disparate impact analysis assessing the impact of their use of an automated employment decision tool, to prepare a summary of the most recent disparate impact analysis, and to provide that summary to the department. This act would take effect immediately.
Failed
S7422, introduced on May 24, 2023, and A7634, introduced on May 25, 2023, would prohibit film production companies that apply for the Empire State film production credit from using synthetic media in any component of production that would displace a natural person from that role. Synthetic media includes any form of media, such as text, images, video, or sound, created or modified by artificial intelligence. Compliance with this act would be a condition of granting the credit. If passed, the act would take effect immediately.