Artificial intelligence and the future of espionage

Artificial intelligence is not simply another technological tool; it’s a transformative force reshaping the foundations of intelligence and national security. In remarks to the 11 October Aspen-Otago National Security Forum in New Zealand, Anne Neuberger, former US deputy national security advisor for cyber and emerging technology, provided an illuminating account of how AI could revolutionise the future of intelligence operations, analysis and organisation, but also challenge democratic governance of national security.

Neuberger shared her thoughts in an interview she recorded with me for the forum, further to a seminal article she wrote for Foreign Affairs in January. These insights, drawn from her tenure at the National Security Agency (NSA) and the White House, highlighted a central tension for democracies: harnessing AI’s capabilities while preserving civil liberties, operational integrity and public trust.

At its core, intelligence work involves producing insightful analysis, delivering time-sensitive warnings, guiding strategic policy and providing decision advantage. Yet, as Neuberger notes, the 21st-century explosion in data availability has made intelligence analysis increasingly, and ironically, difficult. Analysts have gone from too little information upon which to make judgments to too much in just a generation. AI’s ability to rapidly process vast datasets and extract meaningful patterns offers a solution, but only if agencies adapt organisational models and methodologies.

Traditional intelligence processes—sequential, compliance-heavy and human-driven—may be ill-suited to the speed and scale required. Neuberger argues that to fully realise AI’s potential, agencies must reinvent workflows. For example, AI models can detect behavioural patterns preceding missile launches or cyberattacks, collapsing time between collection and actionable insight. But if agencies cling to legacy processes, they risk missing windows of opportunity.

The strategic implications extend beyond reform within intelligence communities. Neuberger emphasises the competitive disadvantage democracies face compared with authoritarian regimes such as China and Russia. These hostile states operate with minimal guardrails, leveraging mass surveillance and centralised data control to train and wield powerful AI systems.

Yet Neuberger insists that democracies can compete, so long as they innovate responsibly. She cites the example of Las Vegas law enforcement, which deploys drones to crime scenes in a way designed to avoid inadvertent surveillance. Cameras are angled upward during transit and activated only upon arrival, balancing public safety with privacy. This kind of thoughtful deployment exemplifies how democratic values can coexist with technological advancement.

One of the most provocative ideas Neuberger addresses is the potential for AI to redefine the nature of intelligence product itself. In her January article, ‘Spy vs AI’, she envisioned policymakers interacting directly with intelligence databases through chat interfaces, querying thousands of existing finished intelligence assessments and judgments, and receiving synthesised, source-cited answers to their requirements in real time.

This alternative model challenges the traditional notion of ‘finished intelligence’, which relies on curated, analyst-driven products. While some may view this shift as a threat to analytical rigour, Neuberger sees it as an opportunity, particularly for future policymakers who need rapid access to historical insights. AI can make existing intelligence more accessible without replacing the nuanced work of generating new analysis or integrating complex datasets.

However, innovation in intelligence can be stymied by risk aversion, especially organisationally—a point made strongly in ASPI’s July report on the state of innovation in Australia’s National Intelligence Community. Neuberger’s experience as the NSA’s first chief risk officer revealed the agency’s tendency to compartmentalise risk in the absence of a holistic framework, focusing narrowly on operations, supply chains or analytic integrity.

After former NSA contractor Edward Snowden leaked classified documents, the agency recognised that its internal risk model was flawed. Neuberger implemented a system within the agency to assess and escalate risks based on complexity and potential effects. This approach fostered greater awareness across partner agencies and improved decision-making. Her mantra was ‘our job is not to avoid risk, but to take risk eyes wide open’. This is especially relevant in the AI era, where operational patterns are increasingly detectable by adversarial models.

Finally, Neuberger expresses concerns about the erosion of public trust in intelligence institutions. In democracies, she argues, trust is not a luxury but foundational to achieving intelligence outcomes. The Snowden controversies created a mistaken perception at the time that US and allied agencies operated without oversight, threatening agencies’ legitimacy and thereby their effectiveness.

To sustain trust, intelligence communities must communicate more transparently about their safeguards and values. Neuberger advocates for proactive engagement with journalists and the public, emphasising that advancing civil liberties is not an afterthought but a core intelligence mission objective. She also calls for more nuanced reporting on actual and prospective AI use in intelligence, urging commentators to compare current practices with historical norms rather than reacting reflexively.

Anne Neuberger’s reflections offer a useful guide for intelligence professionals navigating the AI frontier. The challenge is not merely technological; at its heart, it’s an institutional, ethical and strategic challenge, and one that would benefit from public research. Democracies must innovate boldly, govern wisely and communicate clearly to maintain their edge in the evolving contest of intelligence. AI may be a disruptive force, but with the right frameworks, it can also be a democratic asset.