Public Artificial Intelligence Services Security Standard Frequently Asked Questions
Artificial Intelligence (AI) has increasingly become a part of our everyday lives, and our work lives are no exception. Publicly available AI services, like Bing and ChatGPT, offer ways to improve productivity and efficiency; however, they also come with risks, including the sharing of private, sensitive, or protected data.
To reduce those risks, the Transparent Artificial Intelligence Governance Alliance (TAIGA), in partnership with MNIT security teams, developed the Public Artificial Intelligence Services Security Standard to guide the responsible use of AI for individual work tasks by State of Minnesota employees.
The State of Minnesota is committed to providing secure technology solutions. The standard provides guidance on the responsible use of AI-enhanced services for individual work tasks by State of Minnesota employees.
It was developed by the Transparent Artificial Intelligence Governance Alliance, a group convened by MNIT, to provide a framework for using AI in a safe and productive manner for the benefit of all Minnesotans.
This standard applies to all employees, contractors, volunteers, and third parties who develop, deploy, or utilize AI systems and applications within Minnesota state government.
AI refers to the simulation of human intelligence processes by computer systems. If an application can independently perform tasks that would otherwise require human intelligence, such as understanding natural language, recognizing patterns, making decisions, or interpreting complex data, it likely uses AI.
Publicly available AI services can be helpful for a variety of tasks. People can input questions into publicly available AI-enhanced services, and the responses mimic human writing. Because the AI service is not a human subject matter expert, it risks providing responses that are inaccurate or incomplete. Current AI services do not understand questions; they generate word patterns that mimic the content they were trained on.
Additionally, when you submit data to an AI service, a copy of the submitted data remains with the service. This may pose security and privacy risks. These risks are magnified if the AI service automatically incorporates submitted data into its training data and shares it in responses to other users.
For these reasons, it is important to use these services responsibly and to consider potential legal, practical, security, and privacy issues. Content produced by publicly available AI services should always be reviewed carefully and skeptically.
State of Minnesota staff and contractors should follow this standard when using AI services to protect information according to MNIT Security Policies and Standards.
Minnesota IT Services convened a group called the Transparent Artificial Intelligence Governance Alliance (TAIGA) to work on issues related to AI policy, governance, and usage and to help develop processes and structures for the safe deployment of the technology to improve lives. TAIGA is committed to balancing confidence and skepticism about the new technology by embracing transparency, security, and equity with human oversight and focusing on accuracy, accountability, and safety to deliver value and benefits to Minnesotans.
Boreal forests, or taiga, are the Earth's northernmost forests. They are foundational to the planet's ecosystem, providing carbon storage, clean water, and more. Early artificial intelligence (AI) services have likewise been foundational to our lives for several years, but new capabilities will soon be pervasive in our technology biome. TAIGA combines this idea with transparency and governance to form an alliance dedicated to working on artificial intelligence.
Using AI services as an employee of the State of Minnesota
Yes, in certain circumstances. At this time, publicly available services such as ChatGPT (OpenAI), Bing (Microsoft), and Bard (Google) have only been approved for use when the information you enter is categorized as Low by our Data Protection Categorization Standards.
Those definitions are as follows:
Low: Data that is defined by Minnesota Statutes Chapter 13 as “public” and is intended to be available to the public.
Moderate: Data that does not meet the definition of Low or High. This includes but is not limited to system security information, not public names, not public addresses, not public phone numbers, and IP addresses.
High: Data that is highly sensitive and/or protected by law or regulation. This includes but is not limited to: Protected Health Information (PHI), Social Security Administration (SSA) data, Criminal Justice Information (CJI), government-issued ID numbers (e.g., Social Security numbers, driver's license/state ID card numbers, passport numbers), Federal Tax Information (FTI), Payment Card Industry (PCI) Account Data, and bank account numbers (excluding State-owned bank account numbers).
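For teams that build their own guardrails, the rule behind these categories can be expressed as a simple pre-submission gate. The sketch below is illustrative only; the DataCategory and may_submit_to_public_ai names are hypothetical, not part of any MNIT tool, and the code does not replace human judgment about how data is actually classified.

    # Minimal sketch (hypothetical, not an MNIT tool): only Low/public data
    # may be entered into publicly available AI services.
    from enum import Enum

    class DataCategory(Enum):
        LOW = "low"            # public data under Minnesota Statutes Chapter 13
        MODERATE = "moderate"  # e.g., system security information, not public names
        HIGH = "high"          # e.g., PHI, SSA data, CJI, FTI, PCI account data

    def may_submit_to_public_ai(category: DataCategory) -> bool:
        """Return True only for Low (public) data."""
        return category is DataCategory.LOW

    # Example: a document categorized Moderate must not be submitted.
    assert not may_submit_to_public_ai(DataCategory.MODERATE)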
If you have questions about how to use AI responsibly, email taiga@state.mn.us.
AI services can be used to improve personal productivity. At this time, commercially available AI services may only be used for individual tasks that improve the way employees work. Examples of acceptable use cases include:
Summarizing long documents that only contain public information.
Researching topics where the resulting content can be verified by a subject matter expert (SME); a sketch of this review-before-use rule follows the list.
Generating draft documents that deal with public information.
Developing sample programming code that does not contain non-public information. Refer to Government Technology's 50 ChatGPT Prompts for State and Local Government for example prompts that increase efficiency, eliminate manual administrative work, and enhance day-to-day tasks using generative AI technologies like ChatGPT.
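Several of these examples rest on the same rule: a person, ideally a subject matter expert, reviews AI output before it is used. As a purely illustrative sketch (the Draft type and ready_to_share helper are hypothetical, not a state system), that rule might be encoded like this:

    # Hypothetical sketch: AI-generated drafts are held until a human signs off.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Draft:
        text: str
        ai_generated: bool
        reviewed_by: Optional[str] = None  # name of the human reviewer, if any

    def ready_to_share(draft: Draft) -> bool:
        """AI-generated content requires human review before release."""
        return (not draft.ai_generated) or (draft.reviewed_by is not None)

    summary = Draft(text="AI-drafted summary...", ai_generated=True)
    assert not ready_to_share(summary)   # blocked until a person reviews it
    summary.reviewed_by = "HR SME"
    assert ready_to_share(summary)       # a person has signed off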
What tasks should AI not be used for?
AI should not be used for tasks that State of Minnesota staff and contractors wouldn't publicly acknowledge. Examples of currently unacceptable use cases include:
Automatically responding to email messages without first reviewing content for accuracy and appropriateness.
Decision-making in situations where outcomes have not been verified by a subject matter expert. For instance, using AI to generate a list of possible hiring criteria for a new position without asking a human resources SME to review those criteria before posting the job.
Building automation tools that share AI-generated content without first consulting TAIGA about whether the source, capabilities, and testing performed on the tool conform to required governance.
Sharing any confidential, not public, or nonpublic data with AI services, as those terms are defined below. Employees who share not public or nonpublic data will be subject to disciplinary action, up to and including discharge.
Public data is information accessible to the public. All data collected, created, received, maintained, or disseminated by a government entity is considered public unless otherwise determined by law or classification. It's wide-ranging and can include administrative documents, official texts, public surveys, reports, regional data, and more.
Not public data is any government information classified by statute, federal law, or temporary classification as confidential, private, nonpublic, or protected nonpublic. Nonpublic data is data not on individuals that is made by statute or federal law applicable to the data: (a) not accessible to the public; and (b) accessible to the subject, if any, of the data.
Confidential data is information made not public by statute or federal law. There are too many examples of confidential data to list here, so as a rule: if you are not certain the data you are sharing is public, contact TAIGA for advice (taiga@state.mn.us) or simply refrain from sharing it with AI services.
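If you do build tooling around these rules, one rough first line of defense is to scan text for obvious High-category patterns before it reaches a public AI service. The sketch below is hypothetical (the flag_sensitive helper and its patterns are illustrative assumptions, not an MNIT tool), and pattern matching will miss most confidential data, so it supplements rather than replaces the classification and review steps above.

    # Illustrative sketch only: flag a few obvious High-category patterns.
    # Confidential data takes many forms that a check like this will miss.
    import re

    SENSITIVE_PATTERNS = {
        "Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def flag_sensitive(text):
        """Return the names of any sensitive patterns found in the text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]

    findings = flag_sensitive("Applicant SSN: 123-45-6789")
    if findings:
        print("Do not submit; found:", findings)  # ['Social Security Number']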
If you are uncertain whether a service incorporates AI technologies and/or whether you are allowed to use the service, contact the Secure Systems Engineering and Architecture Team for guidance (sse@state.mn.us). The team will review and verify AI services to understand the AI's training, ownership of data, and level of security.
Any software or service where a third-party AI technology has access to State of Minnesota information needs to be approved by the MNIT Secure Systems Engineering and Architecture Team before being used.
If there is uncertainty about the appropriateness, safety, or ethical implications of AI use, please contact TAIGA for advice (taiga@state.mn.us).
The security standard is a guardrail for the safety of Minnesotans. It is the first of several steps taken by TAIGA to ensure that rapidly developing AI technology is not misused. TAIGA will work collaboratively with other partners to develop processes that help catalog, evaluate, and implement technologies that incorporate artificial intelligence to serve people better.