New York is making headlines as it moves closer to enacting the Responsible AI Safety and Education Act (RAISE Act), a bill that could set a new national standard for how advanced artificial intelligence systems are governed. If signed into law, the RAISE Act would make New York the first U.S. state to impose legally binding transparency and safety requirements on the most powerful AI models.
Why the RAISE Act Matters
The RAISE Act is a direct response to growing concerns about the risks posed by frontier AI systems—those with the potential to cause significant harm if misused or left unchecked. Unlike previous attempts in other states, such as California’s SB 1047, New York’s approach is more targeted and pragmatic, focusing only on the largest players in the AI industry. This means startups, smaller companies, and academic researchers can continue to innovate without being burdened by heavy compliance requirements.
Who Is Affected?
The Act applies to companies whose AI models meet both of the following criteria:
- They were trained using more than $100 million in computing resources.
- They are available to New York residents.
By setting a high bar for applicability, the RAISE Act ensures that only the most influential and potentially risky AI systems are subject to regulation, while protecting the state’s vibrant startup and research communities.
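To make this two-part test concrete, here is a minimal Python sketch. Everything in it, including the `ModelProfile` record and the `is_covered_by_raise_act` helper, is a hypothetical illustration; only the $100 million compute threshold and the New York availability condition come from the bill as described above.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Hypothetical record describing a frontier model for a scoping check."""
    training_compute_cost_usd: float  # total spend on training compute
    available_in_new_york: bool       # is the model offered to New York residents?

# Threshold taken from the bill's applicability test; the helper itself is illustrative.
COMPUTE_COST_THRESHOLD_USD = 100_000_000

def is_covered_by_raise_act(model: ModelProfile) -> bool:
    """Both criteria must hold for the Act to apply."""
    return (
        model.training_compute_cost_usd > COMPUTE_COST_THRESHOLD_USD
        and model.available_in_new_york
    )

# Example: a large model offered to New York users would be in scope.
frontier = ModelProfile(training_compute_cost_usd=150_000_000, available_in_new_york=True)
print(is_covered_by_raise_act(frontier))  # True
```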
What Does the RAISE Act Require?
For those companies that fall under its scope, the RAISE Act introduces four main obligations:
- Safety and Security Protocols: Companies must publish detailed safety and security protocols, including risk evaluations for severe threats like the creation of biological weapons or automated criminal activity.
- Incident Reporting: Safety incidents, such as concerning AI behavior or security breaches, must be reported within 72 hours. This transparency helps regulators and the public stay informed about potential dangers.
- Risk Assessment and Mitigation: Firms are required to conduct thorough risk assessments, considering catastrophic scenarios like mass casualties, billion-dollar economic damages, or the facilitation of large-scale criminal acts.
- Third-Party Auditing: Independent audits are mandated to ensure companies are complying with the law’s requirements.
Enforcement and Safe Harbor
To ensure compliance, the RAISE Act empowers the New York attorney general to impose civil penalties of up to $30 million for violations. At the same time, the law recognizes the need to protect sensitive information: companies may redact details from their publicly posted safety protocols to safeguard trade secrets, meet other legal obligations, and protect privacy.
How Is This Different from California’s Approach?
New York’s RAISE Act draws lessons from California’s SB 1047, which Governor Newsom vetoed in September 2024. Notably, the New York bill:
- Does not require a "kill switch" for AI models.
- Does not hold companies liable for harms caused by post-training modifications.
- Exempts universities and research institutions.
- Sets a high computational threshold, shielding startups and smaller firms from unnecessary regulation.
The Bigger Picture: State vs. Federal Regulation
The RAISE Act is part of a larger national conversation about how best to regulate AI. While some argue that state-level laws could create a confusing patchwork of rules, others see them as necessary steps in the absence of comprehensive federal legislation. The debate also touches on America’s ability to remain competitive in the global AI race while ensuring public safety.
What Should Companies Do Now?
If you’re operating a frontier AI model that is available to New York residents, it’s time to:
- Start developing robust safety and security protocols.
- Set up systems for incident detection and reporting (a minimal sketch follows this list).
- Prepare for third-party audits by organizing documentation and identifying qualified auditors.
- Conduct a legal review of your operations to ensure readiness for the new regulatory landscape.
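As a starting point for the incident-reporting item above, here is a minimal Python sketch of a structured incident record and a logging stub. The schema, field names, and the `report_incident` function are all illustrative assumptions; the Act does not prescribe a reporting format or API, so treat this as a placeholder for whatever channel regulators ultimately specify.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("raise_act_incidents")

@dataclass
class SafetyIncident:
    """Hypothetical structured record; the RAISE Act does not prescribe a schema."""
    model_name: str
    category: str      # e.g. "concerning_model_behavior", "security_breach"
    description: str
    detected_at: str   # ISO 8601 timestamp

def report_incident(incident: SafetyIncident) -> None:
    """Illustrative stub: log the incident as JSON so it can later be
    forwarded through whatever channel regulators specify."""
    logger.info("safety incident: %s", json.dumps(asdict(incident)))

# Example usage
report_incident(SafetyIncident(
    model_name="example-frontier-model",
    category="security_breach",
    description="Unauthorized access to model weights detected and contained.",
    detected_at=datetime.now(timezone.utc).isoformat(),
))
```

Capturing incidents as structured records from day one makes it easier to meet a fixed disclosure window later, whatever form the official reporting channel ultimately takes.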
Key Takeaways
- The RAISE Act targets only the most advanced AI systems, leaving startups and researchers free to innovate.
- Companies must publish safety protocols, report incidents, assess risks, and undergo audits.
- Civil penalties for non-compliance can reach $30 million.
- The Act balances transparency with protections for trade secrets and privacy.
- New York’s approach could influence future state and federal AI regulations.