Mackenzie Arnold Participates in GAO Panel on AI Privacy Risks
Background
From January 13 to 15, 2025, the U.S. Government Accountability Office (“GAO”) convened a panel of experts to examine privacy risks associated with artificial intelligence. The panel brought together 12 specialists from federal agencies, academia, nongovernmental organizations, and private industry to inform recommendations to the Office of Management and Budget (“OMB”).
Mackenzie Arnold, LawAI’s Director of U.S. Policy, participated alongside leading figures in AI, law, and policy, including Dominique Duval-Diop (U.S. Department of Commerce), Jennifer King (Stanford Institute for Human-Centered Artificial Intelligence), and Deirdre Kathleen Mulligan (White House Office of Science and Technology Policy).
The panel identified 13 challenges related to protecting privacy when using AI, including:
- Lack of skills in the federal workforce to implement AI while mitigating privacy risks
- Scalability of implementing AI systems with privacy protections
- Auditing and evaluating AI models with sensitive information
Conclusions
The panel found that AI systems may reveal sensitive information contained in raw data sets, potentially exposing personal and private information, among other privacy risks. It also identified several challenges that federal agencies face in addressing these risks, including trade-offs between model performance and the modification or removal of data to protect privacy.
In March 2026, GAO issued two recommendations for Executive Action to OMB:
- Specify examples of known privacy-related risks that agencies should consider when updating their policies as they pertain to AI.
- Facilitate additional information sharing or issue government-wide guidance related to:
  - how agencies should consider privacy when evaluating and auditing AI models that contain sensitive information;
  - storing data in a manner that allows sensitive data to be separated from the dataset;
  - clear rules, norms, and best practices with respect to privacy that agencies should use when developing AI solutions internally;
  - performance metrics agencies can use to assess privacy-related impacts when using AI;
  - actions agencies can take to ensure that members of the public who interact with their AI technologies understand what they are consenting to;
  - technological tools agencies can use to protect sensitive data when using AI;
  - incorporating AI-specific considerations into privacy impact assessments, including identifying risks and informing the public about how PII is involved in the use of AI; and
  - potential trade-offs between privacy and performance agencies can consider when using AI.
The full GAO report is available here.