It feels like every week we hear about a new, more powerful AI model that can write, code, or create images better than ever before. This rapid progress is exciting, but it also brings up important questions about safety and accountability. Who is responsible when these powerful tools go wrong? New York is stepping up to answer that question with a groundbreaking piece of legislation.
The New York legislature has just passed the Responsible AI Safety and Education Act, or the 'RAISE Act' for short. This bill, now awaiting Governor Kathy Hochul's signature, sets a new precedent for how the most powerful AI models are managed and deployed.
Who Does the RAISE Act Target?
The law doesn't apply to every startup tinkering with AI. It specifically targets the major players, which it calls “large developers.” A company falls into this category if it has trained at least one “frontier model” and has spent more than $100 million on the computing power to do so.
A “frontier model” is defined as a top-tier AI: either one trained using an immense amount of computational power (more than 10^26 operations), or a smaller model produced from one via a technique called “knowledge distillation,” with a compute cost exceeding $5 million.
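To make those thresholds concrete, here is a minimal sketch of how a compliance team might encode the two tests. The function names and simplified logic are illustrative assumptions, not language from the bill, and the statute's actual definitions carry nuance this omits:

```python
# Illustrative sketch of the RAISE Act thresholds described above.
# Names and structure are hypothetical; this is not the statutory text.

FRONTIER_COMPUTE_OPS = 10**26            # training operations threshold
LARGE_DEVELOPER_SPEND_USD = 100_000_000  # aggregate compute spend threshold
DISTILLATION_COST_USD = 5_000_000        # distilled-model compute cost threshold


def is_frontier_model(training_ops: float,
                      distilled_from_frontier: bool,
                      distillation_compute_cost_usd: float) -> bool:
    """A model counts as 'frontier' if it crosses the raw compute threshold,
    or if it was distilled from a frontier model at sufficient compute cost."""
    if training_ops > FRONTIER_COMPUTE_OPS:
        return True
    return distilled_from_frontier and distillation_compute_cost_usd > DISTILLATION_COST_USD


def is_large_developer(has_trained_frontier_model: bool,
                       total_compute_spend_usd: float) -> bool:
    """A developer is 'large' if it has trained at least one frontier model
    and spent over $100 million on compute to do so."""
    return has_trained_frontier_model and total_compute_spend_usd > LARGE_DEVELOPER_SPEND_USD
```

In this simplified reading, both conditions must hold for a company to fall under the law's obligations; a lab with huge compute spend but no frontier model, or vice versa, would not qualify.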
The Core Rule: Preventing 'Critical Harm'
The central pillar of the RAISE Act is a clear prohibition: large developers cannot deploy a frontier model if it creates an unreasonable risk of “critical harm.” The definition of critical harm is specific and severe:
- The death or serious injury of 100 or more people.
- At least $1 billion in damage to property or financial rights.
- The creation or use of a chemical, biological, radiological, or nuclear (CBRN) weapon.
- An AI model acting on its own (without meaningful human intervention) to commit a serious crime.
New Obligations for Big Tech
To ensure compliance, the RAISE Act imposes several key duties on large developers before they can release their models to the public:
- Implement Safety Protocols: Developers must create and maintain a detailed written safety and security protocol. This isn't just a document to be filed away; it must be kept for five years after the model is no longer deployed.
- Public and Government Disclosure: A redacted version of this safety plan must be published publicly and sent to the New York Attorney General (AG) and the New York State Division of Homeland Security and Emergency Services (DHSES). The AG can also request the full, unredacted version.
- Transparent Testing: Records of all safety tests and their results must be kept, with enough detail for a third party to replicate the tests. This promotes transparency and accountability.
- Mandatory Safeguards: Developers must implement concrete safeguards to prevent the model from causing critical harm.
Ongoing Oversight and Incident Reporting
The work doesn't stop once a model is deployed. The RAISE Act requires large developers to conduct an annual review of their safety protocols to keep up with the model's evolving capabilities and industry best practices.
Furthermore, if a “safety incident” occurs, developers have just 72 hours to report it to the AG and DHSES (a simple deadline calculation is sketched after the list below). A safety incident includes not only actual critical harm but also events that signal an increased risk of it, such as:
- The model acting autonomously without a user's request.
- Theft of, or unauthorized access to, the model's weights (the learned parameters at the core of the model).
- A critical failure of safety controls.
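To show how tight that window is in practice, here is a minimal sketch of the deadline calculation a compliance team might run when an incident is discovered. The helper name and workflow are assumptions for illustration, not anything prescribed by the bill:

```python
# Hypothetical helper for tracking the RAISE Act's 72-hour reporting window.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)


def reporting_deadline(incident_discovered_at: datetime) -> datetime:
    """Return the latest time a safety incident report can be filed."""
    return incident_discovered_at + REPORTING_WINDOW


if __name__ == "__main__":
    discovered = datetime(2026, 3, 1, 9, 30, tzinfo=timezone.utc)
    print("Report due by:", reporting_deadline(discovered).isoformat())
```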
Once signed into law, the RAISE Act will take effect after 90 days, marking a significant step toward ensuring that as AI becomes more powerful, it also becomes safer and more responsible.
Key Takeaways:
- Targeted Regulation: The RAISE Act focuses on the largest AI developers with the most powerful models.
- Focus on Severe Risks: The law is designed to prevent catastrophic outcomes, defined as 'critical harm.'
- Transparency is Key: Developers must document and disclose their safety protocols and testing procedures.
- Rapid Reporting: A 72-hour deadline for reporting safety incidents ensures swift government oversight.
- A New Precedent: New York's law could serve as a model for other states and even federal legislation on AI safety.