GAO publishes AI accountability framework for agencies
Written by Dave Nyczepir
The Government Accountability Office has released its much-anticipated Artificial Intelligence Accountability Framework with the aim of monitoring how agencies are implementing emerging technology.
GAO’s framework describes key practices across four parts of the development lifecycle – governance, data, performance and oversight – to help agencies, industry, academia and nonprofits deploy AI responsibly.
Inspectors general (IGs), legal advisers, auditors and other agency compliance professionals needed a framework for conducting their own credible assessments of AI, beyond responding to congressional audit requests.
“The only way for you to verify that your AI, in fact, is not biased is to do an independent verification, and that part of the conversation was largely missing,” GAO Chief Scientist Taka Ariga told FedScoop. “So given our oversight role, GAO decided to take proactive action to fill this gap and not necessarily wait for a certain plateau of technological maturity before addressing it.”
Otherwise GAO would always be playing catch-up, given how fast AI is advancing, Ariga added.
AI systems are made up of components like machine learning models that must operate according to the same mission values. For example, self-driving cars with their cameras and computer vision are systems of systems, all functioning to ensure passenger safety, and it falls not only to auditors but also to ethicists and civil liberties groups to weigh both their performance and their societal impact.
“We want to make sure that oversight is not treated as a compliance function,” Ariga said. “There are complicated risks around privacy, complicated risks around technology purchasing and disparate impacts.”
GAO’s framework, released Wednesday, is a “forward-looking” way to address these risks in the absence of a standard AI-specific risk management framework, he added. The agency wants risk management, monitoring and implementation to co-evolve as the technology advances toward what the Defense Advanced Research Projects Agency calls its third wave, contextual adaptation, in which AI models explain their decisions to inform new decisions.
Another goal of the framework is to include a human-centric element in the deployment of AI.
With agencies already procuring AI solutions, GAO’s framework treats requirements setting, documentation and evaluation as inherently governmental functions. Each practice described therefore includes a set of questions that oversight bodies, auditors and third-party assessors should ask, along with audit procedures for the latter two groups.
The rights to audit AI, inspect models and access data are central to those efforts.
“It will be detrimental in the long run if vendors are able to protect the intellectual property aspects of the conversation,” Ariga said.
Attempts to audit AI have already been made, including the Department of Defense’s effort when standing up the Joint AI Center in 2018. But DOD ran into problems because there was no standard definition of AI and no AI inventories to assess. Fast forward to the present day, and many companies now offer AI and algorithm assessments.
GAO is already using its new framework to investigate various use cases for AI, and IGs from other agencies have also expressed interest in using it.
“The timing is right as we actually have a number of ongoing national security, homeland security, justice engagements that involve AI,” Ariga said.
The framework will evolve over time, possibly into an AI dashboard for agencies – an idea proposed by former Rep. Will Hurd, R-Texas, in September.
Google and the JAIC are considering AI model or data cards, while nonprofits have come up with something more akin to a nutrition label, but GAO’s framework does not prescribe a particular accountability mechanism – rather, it assesses the justification for the mechanism chosen.
Future iterations of the framework will also ask what transparency and explainability mean for different AI use cases. From facial recognition and self-driving cars to application filtering algorithms and drug development, each carries varying degrees of privacy and technological risk.
People won’t need a rationale for every turn a self-driving car makes, but they will eventually want to know why, to the nth degree, an algorithm flags an MRI as abnormal in a cancer diagnosis.
“We knew it would have taken decades before we could issue something like this,” Ariga said. “So we decided to focus on the common elements of all AI development.”
At the same time, departments like Transportation and Veterans Affairs have begun collaborating on their AI strategies – though the former focuses on safety and the latter on customer service – while accounting for their shared workforce, infrastructure, development and acquisition challenges.
While developing the framework, Ariga said he was “surprised” that not everyone in government agreed with the notion of responsible AI.
Undergraduate data scientists don’t always receive training in ethics and instead learn to prioritize accuracy and performance. They carry that perspective with them into government jobs developing AI code, only to be told to mitigate bias for the first time, Ariga said.
Meanwhile, a competing camp argues that data scientists shouldn’t shape the world as it should be but mirror the one they live in, and that AI biases and disparate impacts are someone else’s problem.
Ariga’s team kept this disagreement in mind while engaging AI experts and oversight officials across government and industry, so as not to place an undue burden on any one group when developing the framework.
The government will eventually need to provide additional training on AI ethics for data scientists as part of workforce and implementation risk management – training that academia will likely adopt as well, much the way the field of medical ethics emerged, Ariga said.
“Maybe not tomorrow but certainly in the near future because, at least in the public domain, our responsibility to do things right is so high,” he said. “A lot of these AI implementations actually have life and death consequences.”
-In this story-
AI Accountability Framework, Artificial Intelligence (AI), Autonomous Vehicles, Bias, Data Science, Defense Advanced Research Projects Agency (DARPA), Department of Defense (DOD), Department of Transportation, Department of Veterans Affairs (VA), Ethics, Facial Recognition, Google, Government Accountability Office (GAO), Joint Artificial Intelligence Center (JAIC), Oversight, Taka Ariga, Will Hurd