Public Artificial Intelligence Services Security Standard

The Public Artificial Intelligence Services Security Standard guides the responsible use of Artificial Intelligence (AI) technologies and services by State of Minnesota employees. Specifically, the standard offers recommendations on using AI to improve personal productivity and efficiency, along with best practices to prevent the release of private, sensitive, or protected data.

Entities the standard applies to

The standard provides guidance on how to use publicly available AI services and applies to all employees, contractors, and third parties who develop, deploy, or use AI systems and applications within Minnesota state government.

Reasoning for the standard

Artificial intelligence services provide an opportunity to increase the productivity of state employees. To realize this potential, AI tools must be used appropriately, with consideration of the legal, practical, security, and privacy issues they raise. The standard was created to protect the safety and security of information held by Minnesota state agencies. Examples of publicly available AI services include ChatGPT, Google Bard, and Microsoft Bing.

TAIGA

This standard was developed by the Transparent Artificial Intelligence Governance Alliance (TAIGA) in partnership with MNIT. The group was convened to address issues related to AI policy, governance, and usage, and to help develop processes and structures for deploying the technology safely in ways that improve lives.
