State governments across the U.S. are embracing artificial intelligence to improve efficiency and effectiveness, but with every new AI proposal comes a critical question: when should the government say “no”? The answer, as it turns out, often hinges on one key factor—data privacy.
The Rise of AI Sandboxes in Government
Imagine a bustling innovation lab where state officials test new AI tools, hoping to streamline services or enhance public safety. These “AI sandboxes” are becoming more common, providing a safe space for experimentation. But as Colorado’s Chief Information Officer, David Edinger, explains, not every idea makes it out of the sandbox.
The Vetting Process: More Than Just Good Intentions
Colorado’s experience is telling. Of the roughly 120 AI proposals the state has reviewed, most rejections came down not to the technology’s purpose but to how the data would be handled. The state uses the NIST (National Institute of Standards and Technology) framework to classify proposals by risk level: medium, high, or prohibited. Proposals flagged as “high risk” undergo extra scrutiny, especially regarding data sharing and privacy.
For example, if a project would require sharing personally identifiable information (PII), health data protected under HIPAA, or criminal justice information subject to CJIS requirements in ways that violate state law, it’s a non-starter. The intention behind the AI tool might be noble, but if the data practices don’t meet strict government standards, the answer is a firm “no.”
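To make that triage concrete, here is a minimal sketch of what such a screening step could look like in code. Everything in it (the RiskLevel tiers, the Proposal fields, and the screen function) is a hypothetical illustration built from the description above, not Colorado’s actual tooling; the NIST framework itself is written guidance, not an API.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    MEDIUM = "medium"
    HIGH = "high"
    PROHIBITED = "prohibited"

# Shorthand tags for data categories that trigger legal review (illustrative).
RESTRICTED_DATA = {"PII", "HIPAA", "CJIS"}

@dataclass
class Proposal:
    name: str
    risk_level: RiskLevel
    shared_data: set = field(default_factory=set)
    sharing_complies_with_state_law: bool = False

def screen(proposal: Proposal) -> str:
    """Return a triage decision for an AI proposal (illustrative only)."""
    if proposal.risk_level is RiskLevel.PROHIBITED:
        return "reject: prohibited use case"
    restricted = proposal.shared_data & RESTRICTED_DATA
    if restricted and not proposal.sharing_complies_with_state_law:
        # A noble purpose doesn't matter if the data sharing violates state law.
        return "reject: unlawful sharing of " + ", ".join(sorted(restricted))
    if proposal.risk_level is RiskLevel.HIGH:
        return "escalate: high risk, extra scrutiny of data handling"
    return "proceed: standard review"

print(screen(Proposal("traffic-summary-bot", RiskLevel.HIGH, {"PII"})))
# -> reject: unlawful sharing of PII
```

Note that the data check runs before the risk tier is even considered, mirroring the point above: unlawful data sharing ends the conversation regardless of intent.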
Data Privacy: The Deciding Factor
This focus on data isn’t unique to Colorado. California’s Chief Technology Officer, Jonathan Porat, highlights three pillars for evaluating AI use cases:
- Appropriateness of the use case
- Track record of the technology
- Data governance and security
Porat’s team asks tough questions: Are the data sets suitable for generative AI? Are they governed and secured properly? If the answers are uncertain, the proposal is likely to be rejected.
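Expressed as a gating check, the three pillars might look like the sketch below. The question wording paraphrases the article, but the pass/fail logic and the evaluate function are assumptions for illustration, not California’s actual rubric.

```python
# The three pillars expressed as a gate; question wording paraphrases the
# article, but the pass/fail logic is an assumption, not California's rubric.
PILLARS = {
    "appropriateness": "Is this an appropriate use case for AI?",
    "track_record": "Does the technology have a proven track record?",
    "data_governance": "Are the data sets suitable, governed, and secured?",
}

def evaluate(answers: dict) -> str:
    """Gate a proposal on all three pillars; uncertainty counts as a no."""
    for pillar, question in PILLARS.items():
        answer = answers.get(pillar)  # True, False, or None (uncertain)
        if answer is None:
            return f"reject: unresolved question ({question})"
        if not answer:
            return f"reject: failed pillar ({pillar})"
    return "advance to detailed review"

print(evaluate({"appropriateness": True, "track_record": True, "data_governance": None}))
# -> reject: unresolved question (Are the data sets suitable, governed, and secured?)
```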
Actionable Tips for AI Innovators and Public Sector Leaders
If you’re an organization hoping to partner with government, or a public sector leader considering AI adoption, here are some practical steps:
- Prioritize Data Privacy: Ensure your data practices comply with all relevant laws and regulations. Be transparent about what data is collected, how it’s used, and who has access.
- Use Established Frameworks: Adopt risk assessment models like the NIST framework to evaluate AI projects objectively.
- Engage Privacy Experts: Involve legal and data privacy professionals early in the process to identify potential red flags.
- Maintain Transparency: Keep stakeholders informed about data usage and AI decision-making processes.
- Regularly Review Policies: Technology evolves quickly, and your data governance policies should keep pace (see the sketch after this list).
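One lightweight way to act on the transparency and review tips above is to keep a machine-readable record of each project’s data practices. The sketch below shows one possible shape for such a record; the DataUseRecord fields and the one-year review window are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataUseRecord:
    """One project's data practices, kept where stakeholders can read it."""
    project: str
    data_collected: list
    purpose: str
    access_roles: list
    last_policy_review: date

    def review_overdue(self, max_age_days: int = 365) -> bool:
        # Flag records whose governance policy hasn't been revisited recently.
        return date.today() - self.last_policy_review > timedelta(days=max_age_days)

record = DataUseRecord(
    project="benefits-chatbot",
    data_collected=["application status", "case ID"],
    purpose="answer routine applicant questions",
    access_roles=["caseworker", "auditor"],
    last_policy_review=date.today() - timedelta(days=500),
)
if record.review_overdue():
    print(f"{record.project}: schedule a data-governance policy review")
```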
The Takeaway: Responsible Innovation
The path to AI-powered government is paved with both opportunity and responsibility. By putting data privacy at the center of decision-making, states are setting a high bar for innovation that truly serves the public good.
Key Points:
- Most government AI rejections are due to data privacy concerns, not the intended use.
- States use frameworks like NIST to assess risk and guide decisions.
- Data governance, security, and compliance are non-negotiable for public sector AI projects.
- Organizations can improve approval chances by prioritizing privacy and transparency.
- Ongoing policy review is essential for responsible AI adoption.