The Oregon State Government Artificial Intelligence (AI) Advisory Council has released its first approved AI Action Plan. The document outlines the state’s vision and guiding principles for using, adopting and advancing AI technologies as an integral part of operations. The AI Advisory Council was established in 2023 through an Executive Order, creating an authority responsible for ensuring AI implementation aligns with the state’s goals, ethics, values and policies. The AI Action Plan is the cornerstone of the council’s efforts, providing actionable recommendations for executive decisions, policies and investments to use AI to its fullest potential.
The plan’s vision is to ensure that state employees are adequately equipped and trained to use AI as part of informed decision-making. Ideally, the state will develop an AI ecosystem that enhances government efficiency, accountability and public trust while maintaining privacy standards and ethical integrity.
One of the core executive actions to support AI deployment is to develop a reference architecture and policies for buying, developing, testing and auditing AI systems. A foundational element of the architecture will involve creating a state AI use case inventory to track AI activities across state agencies.
A fully mature AI reference architecture will encompass traditional layers covering data, development, deployment, operations, security and user experience. It will also describe AI-specific layers, including specialized hardware needed for performance, ethics and explainability, integration, and AI audit and governance. Some of the high-level tasks recommended to develop the reference architecture include:
- Publishing a statewide AI use inventory with accompanying deployment documentation.
- Defining a high-level AI reference architecture with evaluation and approval processes.
- Creating a cross-agency advisory group to compare risk assessments, share best practices and refine implementation processes.
- Recommending an AI testing capability and framework.
- Developing policies that cover power management protocols and server shutdown processes to prevent resource overuse.
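To make the first task above concrete, a statewide use case inventory would likely track, for each deployed system, which agency runs it, what it does and how it is overseen. The following is a minimal illustrative sketch; the record fields and example values are assumptions for illustration, not drawn from the plan itself.

```python
from dataclasses import dataclass, field

# Hypothetical record for one entry in a statewide AI use case inventory.
# Field names are illustrative assumptions, not taken from Oregon's plan.
@dataclass
class AIUseCase:
    agency: str                 # state agency operating the system
    system_name: str            # name of the AI system or tool
    purpose: str                # decision or task the system supports
    risk_level: str             # e.g. "low", "moderate", "high"
    human_oversight: bool       # whether a human reviews outputs
    deployment_docs: list[str] = field(default_factory=list)  # accompanying documentation

# Example entry (fictional)
entry = AIUseCase(
    agency="Department of Transportation",
    system_name="Traffic-flow forecaster",
    purpose="Predict congestion to schedule maintenance",
    risk_level="low",
    human_oversight=True,
    deployment_docs=["deployment-plan.pdf"],
)
```

A structured record like this would let agencies publish the inventory and attach the deployment documentation the plan calls for, while the risk and oversight fields support the cross-agency risk comparisons described above.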
The action plan lists 12 guiding principles that will collectively guide the state’s efforts to enhance transparency, equity, safety and security while ensuring AI implementation receives adequate oversight in the government setting. These principles include:
- Accountability through continuous audits and clear reporting.
- Equity and representation by clearly explaining decision-making processes to users and affected parties.
- Ensuring that deployed AI systems maintain explainability and trust by using transparent methods, data sources and design procedures. All AI usage will require informed consent.
- Protecting governance by ensuring that policies, processes, procedures and practices addressing AI benefits and risks are in place and that a culture of risk management is established.
- Defining clear structures and rules on how human oversight will be incorporated into the adoption, review and daily implementation of AI.
- Prioritizing privacy and confidentiality in AI systems and clarifying oversight responsibilities. All safety-related or emergency data use should be subject to extra review.
- Ensuring compliance with relevant regulations by identifying, assessing, measuring and managing all AI risks.
- Prioritizing efforts to ensure AI design and usage do not decrease overall safety, and specifying impact and safety requirements in quantifiable terms with defined measurement methods.
- Using AI to improve efficiency and the user experience. It should not be adopted as the default solution, and integration of AI tools should be guided by a critical understanding of use cases, user experience and subject matter expertise within the organization.
- Fostering transparency and trustworthiness by emphasizing the clarity, openness and comprehensibility of AI processes, outcomes, impacts and the rationale behind decisions. All AI system development life cycle steps should be documented and shared with the public and impacted individuals.
(Photo courtesy of tungnguyen0905.)