Ever wondered if you could use ChatGPT to help draft an email or summarize a report for your government job? In Delaware, state employees now have a clear answer. The state has taken a proactive step into the future of work, releasing a comprehensive policy to guide its workforce on the responsible use of generative AI.
The Delaware Department of Technology and Information (DTI) recently unveiled its 'Enterprise Policy on Generative Artificial Intelligence,' a document designed to demystify the use of powerful tools like ChatGPT in a government setting. As employees increasingly use AI in their personal lives, questions about its place in the workplace have surged. This new policy aims to provide clarity and confidence.
Drawing a Line in the Sand: Public vs. Enterprise AI
The cornerstone of the new guidelines is the distinction between two types of AI tools:
- Public GenAI Tools: These are the consumer-grade platforms most of us are familiar with, such as ChatGPT. The policy permits their use for learning and non-sensitive tasks but strictly prohibits entering any confidential state data. One key rule: tools that originate from or are hosted outside the United States are prohibited outright.
- Enterprise GenAI Tools: These are systems specifically contracted and licensed by the state. They are designed to be secure, protect state data, and integrate with government identity management systems. Think of them as a secure, private playground for innovation.
Anthony Collins, DTI's Director of Enterprise Architecture and Solution Integration, emphasized the policy's role in safeguarding information. “This is the first step... to help the employees know what that acceptable use is, and how we, together, can protect the valuable data assets that we have within the state,” he explained.
Empowering Employees, Responsibly
The goal isn't to stifle creativity but to channel it safely. Before an employee can put an enterprise AI tool to work on a given task, that use case must go through an approval process, ensuring data stewards understand the risks and benefits involved. This strikes a crucial balance between empowerment and governance.
State officials have welcomed the move. Delaware AI Commission member Owen Lefkon noted the enthusiasm among staff, stating, “Everybody is saying, ‘I want to use AI to help me do my job.’ We, of course, want to empower our staff. But we want to do it in a compliant manner, with the appropriate training.”
What's Next for Delaware?
This policy is just the beginning. A subcommittee of the Delaware AI Commission is now tasked with developing employee training programs to ensure everyone understands how to use AI in a “responsible, as well as principled manner.” Furthermore, the state is exploring the creation of an 'agentic AI sandbox'—a testing ground for more advanced, autonomous AI technologies.
Delaware's forward-thinking approach provides a valuable blueprint for other governments navigating the complexities of the AI revolution, proving that innovation and security can go hand in hand.
Key Takeaways:
- Clear Guidelines: Delaware now has a formal policy for state employees using generative AI.
- Data is Priority: Confidential state data is strictly forbidden from being used in public, consumer-grade AI tools.
- Two-Tier System: The policy distinguishes between public AI and secure, state-approved 'enterprise' AI tools.
- Approval for Power: Using enterprise AI for specific tasks requires approval to manage risks.
- Focus on Education: The next step involves comprehensive training for all state employees.