The United States has unveiled measures to harness rapidly developing artificial intelligence (AI) technologies to meet national security objectives, including standards for safety and dependability.
The National Security Memorandum (NSM) on Artificial Intelligence, released in October 2024, establishes parameters for acceptable AI use. President Joe Biden requested the document in an executive order to ensure the U.S. “leads the way in seizing the promise and managing the risks of [AI].” The Framework to Advance AI Governance and Risk Management in National Security supplements the NSM.
Release of the memo and framework follows an international “blueprint for action” on the responsible use of AI in the military, signed by 61 states in Seoul, South Korea, in September 2024. The blueprint focuses on action-oriented developments such as AI-enabled drones used by Ukraine to defend against invading Russian forces, Reuters reported. It addresses AI-enabled weapons, AI decision-support systems, and the use of AI in cyber operations, electronic warfare, information operations and nuclear endeavors.
The People’s Republic of China was among the nations that attended the Responsible AI in the Military Domain summit but did not support the nonbinding document.
The 2024 NSM is grounded in the belief that AI advances will influence national security and foreign policy, a White House brief states. It calls for:
- Ensuring the U.S. leads the world in developing safe, secure and trustworthy AI. That requires the availability of semiconductors for AI applications, effective counterespionage strategies, reliable and safe AI technology, and support for AI research by universities, civil society and businesses.
- Developing AI technologies to bolster national security while protecting human rights and democratic values. The framework’s guidance on NSM implementation includes mechanisms to manage risks, evaluate systems, and ensure accountability and transparency.
- Advancing international consensus and governance on AI. The NSM directs the U.S. to collaborate with its Allies and Partners to “establish a stable, responsible, and rights-respecting governance framework to ensure the technology is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms,” the White House stated.
The NSM addresses accountability in military operations, stressing that countries and commanders are responsible for outcomes regardless of AI’s role in an activity, the Center for Strategic and International Studies, a U.S.-based think tank, reported in October 2024.
Information manipulation is particularly prevalent in the Indo-Pacific, where the rise of AI-powered malign information challenges national security and stability, the Rand Corp., a U.S.-based research group, reported in March 2024. The development of AI technologies, particularly advanced language models such as OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama 2, has expanded the scope of such activities.
Partner nations should counter malign information campaigns with dynamic and proactive strategies, combining vigilance, rapid response and multinational cooperation, the Rand Corp. stated.