The term artificial intelligence may conjure images of a dystopian world where machines have taken over and enslaved humanity. While that makes for great entertainment, the reality is something quite different.
Artificial intelligence (AI), machine learning (ML), and robotic process automation (RPA) are all variations on a theme. They are technologies that can help organizations, including government agencies, automate what are often manual and time-consuming processes. For example, using RPA, a state agency can program a bot to perform repetitive tasks, such as confirming benefits eligibility, by reviewing data in select fields in an online form. This frees up employees to perform higher-value work that requires judgment and creativity.
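To make the RPA example concrete, here is a minimal sketch of the kind of rule a bot might apply when pre-screening a benefits application. It is illustrative only: the field names, income threshold, and the check_eligibility function are hypothetical, not drawn from any actual agency system.

```python
# Hypothetical sketch of an RPA-style eligibility pre-check.
# Field names and the income limit are illustrative; a real bot
# would pull its rules from agency policy, not hard-code them.

INCOME_LIMIT = 30_000  # hypothetical annual income ceiling
REQUIRED_FIELDS = ("applicant_name", "household_size", "annual_income")

def check_eligibility(form: dict) -> str:
    """Return a routing decision for a submitted online form."""
    # Incomplete forms go back to the applicant, not to a caseworker.
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return "return-to-applicant: missing " + ", ".join(missing)

    # Simple, auditable rule: flag likely approvals for human
    # confirmation and route everything else to a caseworker.
    if float(form["annual_income"]) <= INCOME_LIMIT:
        return "flag-likely-eligible: human confirms"
    return "route-to-caseworker: over income limit"

sample = {"applicant_name": "J. Doe", "household_size": 3,
          "annual_income": "28500"}
print(check_eligibility(sample))  # flag-likely-eligible: human confirms
```

Note that the bot only pre-sorts the work; the final eligibility decision stays with an employee, which is precisely how staff time gets shifted toward judgment-driven tasks.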
But new technologies also present new challenges: how to use them effectively and avoid misuse. AI and related technologies are relatively new, so overarching standards on how best to use them are still nascent.
Earlier this year, the National Institute of Standards and Technology (NIST) released its new Artificial Intelligence Risk Management Framework (AI RMF). “The goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems,” the document states. “The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework.”
In addition to the framework, NIST also released a playbook that helps organizations navigate and use the AI RMF. The playbook is broken down into four functions; a short illustrative sketch follows the list:
- Govern – A continual and basic requirement for effective AI risk management across an AI system’s lifespan, enabling the other AI RMF functions. The Govern function fosters a culture of risk management within organizations designing, developing, deploying, or acquiring AI systems. Categories in this function interact with each other and with other functions but do not necessarily build on prior actions.
- Map – This function establishes the context to frame risks related to an AI system. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform. Map is intended to enhance an organization’s ability to identify risks and broader contributing factors.
- Measure – This function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.
- Manage – This function utilizes systematic documentation practices established in Govern, contextual information from Map, and empirical information from Measure to treat identified risks and decrease the likelihood of system failures and negative impacts. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.
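As a rough illustration of how the four functions could be operationalized, the sketch below models a single risk-register entry that carries an AI system through Govern, Map, Measure, and Manage. The AI RMF does not prescribe any data format; this structure and its field names are assumptions made purely for illustration.

```python
# Hypothetical risk-register entry organized around the AI RMF's four
# functions. The framework prescribes no data format; this layout is
# an assumption for illustration only.
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    system: str
    # Govern: who owns the risk and under which policy
    owner: str
    policy_reference: str
    # Map: the context that frames the risk
    use_case: str
    affected_groups: list = field(default_factory=list)
    # Measure: how the risk is assessed and monitored
    metrics: dict = field(default_factory=dict)
    # Manage: the chosen treatment and response plan
    treatment: str = "undecided"
    response_plan: str = ""

entry = AIRiskEntry(
    system="benefits-eligibility-bot",
    owner="Agency CISO office",
    policy_reference="State AI use policy, sec. 4 (hypothetical)",
    use_case="pre-screening online benefit applications",
    affected_groups=["applicants", "caseworkers"],
    metrics={"false_denial_rate": 0.02},  # illustrative figure
    treatment="mitigate",
    response_plan="suspend bot and revert to manual review on anomaly",
)
print(entry.system, "->", entry.treatment)
```

However an agency chooses to record this information, the value is in the discipline: every deployed system gets an owner, a documented context, measurable indicators, and a plan for when something goes wrong.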
States should review the playbook and its functions, then examine their own planned deployments. By working through the functions and their subcategories, states can ask the right questions, prepare the necessary documentation, and make sure an AI deployment fits the needs of the agency and the state.
While that document is still new, sound, fundamental cybersecurity best practices, such as NIST’s Cybersecurity Framework (CSF), can also be leveraged as a baseline. Basically, if the CSF wouldn’t let you do it, then don’t do it. There may not be an apples-to-apples comparison for AI-related technologies, but the core message carries forward.
States considering deploying AI should leverage the documentation that exists and take a risk-based approach to deploying these technologies. The standards, frameworks, and guidance may not be as mature as other cybersecurity documentation, but that is no reason for state governments to avoid the technology altogether. State CIOs and CISOs should question how this new technology will be used and think critically about its applications.
AI-related technologies can help states realize efficiencies and save money. Use the documentation available, think critically, and work with vendors to make sure the applications are appropriate. Use pilot programs to build internal capacity for managing this new technology. Following these steps can help states realize some of the benefits of AI.