State Spotlight: New York’s RAISE Act
New York’s RAISE Act (S6953B/A6453B) builds on California’s recently adopted frontier AI framework, moving the country’s two leading tech states toward a common benchmark for AI safety regulation.
The Bill at a Glance
Signed into law on December 19, 2025, by Governor Kathy Hochul, New York’s Responsible AI Safety and Education (RAISE) Act establishes one of the most comprehensive state-level frameworks governing the development and deployment of advanced, high-risk artificial intelligence systems. The law reflects New York’s intent to move beyond voluntary AI safety commitments and to impose enforceable transparency, safety planning, and incident-reporting obligations on developers of “frontier” AI models operating at significant scale.
“By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety...”
Rather than regulating AI broadly, the statute focuses on systems whose scale, general-purpose functionality, and deployment potential pose heightened risks to public safety, critical infrastructure, and societal stability. Most provisions take effect within months of enactment, with implementation staggered to allow developers time to establish compliance programs.
Scope
The RAISE Act applies to developers of advanced AI models that meet defined capability and scale thresholds, generally capturing systems trained with substantial computational resources and designed for broad, multi-domain use. By narrowing its focus to frontier systems, the law avoids sweeping regulation of narrow or low-risk AI tools while concentrating oversight on models most likely to generate systemic or downstream harm.
Covered Systems
Covered systems are those meeting the Act’s capability and compute thresholds for “frontier” models. In practice, these criteria reach only a relatively small set of large AI developers; narrow or lower-risk tools fall outside the statute. By keying coverage to scale and capability, New York concentrates oversight where potential downstream harm is greatest.
Safety Framework and Transparency Requirements
Covered developers must establish and maintain a documented AI safety and risk-management framework governing the development, testing, and deployment of covered models. These frameworks must be made publicly available, subject to limited protections for trade secrets and sensitive security information.
The intent is to ensure that companies deploying advanced AI systems articulate how they assess and mitigate risks such as misuse, model failure, and unintended societal impacts, while allowing regulators and the public to evaluate whether meaningful safeguards are in place. The emphasis is on meaningful transparency rather than high-level marketing disclosures.
Incident Reporting and Disclosure Obligations
A central feature of the RAISE Act is its incident-reporting regime. Developers are required to notify the state within 72 hours of determining that a serious AI-related incident has occurred, covering incidents that result in, or pose a substantial risk of, significant harm.
Compared to other state frameworks, New York’s 72-hour reporting window is notably aggressive, signaling an expectation of rapid escalation and transparency when advanced AI systems malfunction or are exploited. These reports are intended to inform both enforcement and future policymaking.
Oversight and Enforcement
The law authorizes state oversight through designated executive agencies, with enforcement authority vested in the New York Attorney General, who may seek civil penalties, injunctive relief, and other remedies for failure to comply with safety planning, transparency, or reporting obligations. The Act does not create a private right of action, centralizing enforcement at the state level, and it stops short of granting regulators pre-deployment approval authority over AI systems, relying instead on post-hoc accountability mechanisms and disclosure-based oversight to shape developer behavior.
Key Changes from Earlier Proposals
The final enacted version of the RAISE Act was narrowed during negotiations from earlier drafts that would have imposed more prescriptive controls on model release and deployment. The signed law reflects compromises intended to preserve innovation incentives while still establishing enforceable guardrails.
The agreed-upon chapter amendments require large AI developers to create and publish information about their safety protocols and to report incidents to the State within 72 hours of determining that an incident occurred. The amendments also create an oversight office within the Department of Financial Services that will assess large frontier developers, enable greater transparency, and issue annual reports.
Why It Matters
The RAISE Act cements New York’s role as a national leader in state-level AI governance and adds significant weight to the growing patchwork of AI obligations facing large developers. By focusing on frontier models rather than specific use cases, the law complements California’s compute- and capability-based approach while diverging from states that regulate AI primarily through consumer protection or sector-specific lenses.
For AI developers, the Act elevates safety frameworks and incident response from internal best practices to enforceable legal requirements. Companies operating nationally must now design AI governance programs that can withstand scrutiny not only from federal agencies but also from assertive state regulators with independent enforcement authority. The Act also adds momentum to calls for a unified federal AI standard to reduce fragmentation and compliance uncertainty.
Legislogiq’s Insight
New York’s RAISE Act underscores a growing structural conflict in U.S. AI governance: the accelerating pace of state-level regulation versus the Trump Administration’s push for federal primacy and preemption in AI policy. President Trump’s latest AI-related executive order frames AI governance as a matter of national economic and security strategy, emphasizing uniform federal standards and warning against a patchwork of state laws that impose divergent compliance obligations on developers.
“The RAISE Act shows that executive action alone cannot stop state AI regulation. Without a clear federal standard from Congress, developers will continue to face overlapping state and federal compliance regimes.”
The RAISE Act directly illustrates the limits of that approach: in the absence of congressional action, states are asserting their traditional police powers to regulate technologies that pose localized public safety and consumer protection risks.
This divergence sets up an increasingly consequential preemption debate. Until Congress establishes a durable federal AI framework with clear preemptive effect, state laws like New York’s RAISE Act will continue to define practical compliance expectations for advanced AI developers.
For industry, this dynamic heightens regulatory uncertainty and reinforces the need for scalable governance programs capable of meeting both evolving state mandates and future federal standards.