INTELBRIEF

December 15, 2023

IntelBrief: Tensions Between Transparency & Security in Biden’s Executive Order on Artificial Intelligence


Bottom Line Up Front

  • A sweeping artificial intelligence (AI) Executive Order (EO) by the Biden White House tasks U.S. agencies with developing new security and safety standards while encouraging innovation and U.S. leadership.
  • While the EO attempts to mitigate AI risk and maximize benefits for national security and law enforcement, critics argue government use of AI-enabled facial-recognition tools poses civil rights concerns.
  • Proponents of open-source AI models warn the EO may limit the public’s ability to detect vulnerabilities in critical AI models, potentially leaving them exploitable by malicious actors.
  • Competing visions on openness, transparency, and competitiveness in AI/ML development will likely complicate security and safety regulation efforts in the months ahead.

Sweeping and the first of its kind, a recent U.S. Executive Order (EO) seeks to establish new safety and security standards in Artificial Intelligence and Machine Learning (AI/ML) development, manage risks associated with dual-use models that can be applied to wide-ranging use cases (i.e., foundation models), crack down on AI-enabled deception, facilitate innovation, ensure responsible use of AI by government agencies, and protect civil rights. U.S. President Joe Biden signed the EO on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in late October. Calls for comprehensive AI/ML regulation have been growing for several years, catalyzed by the extensive adoption of large language models and other forms of generative AI. Widespread access to AI applications like ChatGPT has underscored the risk that adversaries will misuse these models for disinformation, cyberattacks, and other threats. The EO tasks several federal agencies with further developing security and safety standards and procedures to curb such risks while encouraging innovation and U.S. leadership in the field of AI/ML.

The EO has direct implications for national security. Most importantly, it calls for the development of a National Security Memorandum on AI that will outline requirements for the U.S. Department of Defense and Intelligence Community both to optimize AI capabilities in furtherance of U.S. national security and to counter the potential use of such capabilities by adversaries to threaten the United States and its allies. While the EO includes several provisions to protect civil rights, the American Civil Liberties Union (ACLU) has raised alarm about inadequate safeguards against AI capabilities such as facial recognition in law enforcement and immigration services. The EO tasks the Attorney General with submitting a report addressing the use of AI in the criminal justice system by October 30, 2024. The EO's consequences for the criminal justice system will therefore largely depend on the results of that report.

Of the many safety provisions in the EO, the new mandatory reporting requirements for developers of certain large AI models (defined by the amount of computing power used to train them) are the most significant. These requirements instruct developers to report to the Department of Commerce the results of safety tests, plans related to the training and development of dual-use foundation models, the acquisition of large-scale computing capabilities, and the procedures undertaken to secure model weights. Model weights are the learned parameters that determine the strength of the connections between neurons in a neural network, enabling the model to recognize patterns in data and make predictions. It remains unclear what happens if the Department of Commerce deems any of the reported information inadequate. Some AI founders worry these reporting requirements may stifle innovation and effectively lock in large incumbents such as OpenAI and Anthropic, which have superior resources for navigating a complex regulatory environment.
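For readers unfamiliar with the term, the following minimal Python sketch (a toy illustration, not drawn from the EO or any production system) shows how weights govern the connections in a single neural-network layer:

    import numpy as np

    # Toy example: one dense layer with 3 input neurons and 2 output neurons.
    # Each entry weights[i, j] sets the strength of the connection between
    # input neuron i and output neuron j; these learned values are the
    # "model weights" the EO's reporting provisions require developers to secure.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(3, 2))  # in practice, learned during training
    bias = np.zeros(2)

    def forward(x):
        # A prediction combines the inputs according to the weights.
        return x @ weights + bias

    print(forward(np.array([1.0, 0.5, -0.2])))

Anyone holding these values can reproduce, inspect, or repurpose the model, which is why their release sits at the center of the openness debate discussed below.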

Among AI experts, differing reactions to the EO have emerged, particularly on the issue of openness. Openness in the context of AI/ML refers to releasing the source code of models publicly, thereby promoting free access to foundation models. Proponents of openness argue this approach makes public scrutiny possible, helping to identify potential biases, vulnerabilities, and ethical concerns and ensuring responsible development of AI models. Critics of open sourcing counter that releasing detailed information about models may make it easier for malicious actors to exploit vulnerabilities and repurpose the models for unintended uses. In an open letter to President Biden, several high-level AI developers, researchers, and founders – including lauded computer scientist Yann LeCun, one of the founding fathers of convolutional neural networks, and businessman Marc Andreessen, who developed one of the first widely available web browsers – called for reversing the parts of the EO that seem to endorse the skeptical view of AI openness. Particularly contentious is a provision requiring AI developers to protect model weights, effectively restricting developers from open-sourcing dual-use foundation models. By nudging the safety debate toward opacity, the EO may have wide-ranging impacts on security and civil rights. For example, without access to source code, it is difficult to attribute a model's behavior to specific features. In the context of law enforcement, this reduced accountability can be especially problematic and can exacerbate bias. The widespread use of closed-source models in the national security domain may also allow vulnerabilities in critical AI applications to go undetected: developers unaware of flaws in their systems cannot patch them, leaving those vulnerabilities exploitable by any foreign adversary who discovers them first.

U.S. federal agencies tasked by the EO have been given deadlines ranging from 90 to 365 days to fulfill their duties and develop the procedures and guidelines to which AI developers will need to adhere. Meanwhile, EU negotiators reached a provisional agreement on the AI Act on December 8 after months of debate over issues such as the use of AI by law enforcement. Targeting certain high-risk applications, the AI Act goes further than the EO by prohibiting some AI applications outright, including biometric categorization systems that use sensitive characteristics and emotion recognition in the workplace and educational institutions. However, French President Emmanuel Macron has already criticized the legislation for potentially limiting the innovation and competitiveness of European AI developers. A lack of consensus on the impact of the EU AI Act will likely delay its implementation. Competing visions on openness, transparency, and competitiveness in AI/ML development – not yet hashed out within the AI/ML community – will likely continue to complicate security and safety regulation efforts in the months ahead.
