New Technology and Nuclear Risk
In 2006, two leading scholars of the nuclear era warned that the age of mutually assured destruction (MAD) was ending. Seventeen years later, the authors are doubling down on these claims, arguing that the outbreak of new conventional conflicts has changed nuclear decision making, increasing the threat of coercive nuclear escalation. In an age of new technology, this warning is more pertinent than ever. The rapid introduction of emerging technologies and their weaponization raises concerns about maintaining strategic stability.
Artificial intelligence (AI) poses a special risk to the integrity of nuclear command, control, and communications (NC3). NC3 ensures the authorized employment and termination of nuclear operations and guards against accidental or unauthorized access that could result in the loss of control, theft, or unauthorized use of nuclear forces. Observers warn that AI-powered cyberspace operations might put these missions at risk. Leaders who lack confidence in the integrity of their NC3 may not act with deliberation in a crisis. Instead, they might lash out.
Integrating AI into NC3 systems could prove catastrophic, raising the likelihood of nuclear conflict by creating the conditions for first-strike instability, unintended escalation, and widespread arms racing.
Strategic Stability
Nuclear weapons deter aggression because of the threat of retaliation. Would-be aggressors think twice when faced with national extinction. Strategic stability obtains when nuclear-armed rivals deter one another. It endures when states have no incentive to launch a first strike, when crises are manageable, and when arms races do not spin out of control.
When two rival states both possess sufficiently large and survivable nuclear arsenals, neither can launch a preemptive attack without fear of a catastrophic response. A second-strike capability exists when a state can retaliate even after absorbing a nuclear attack, directing its response through surviving NC3 systems. If an aggressor surmises that the defender has no second-strike capability, the incentive to strike first and destroy the target grows.
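The underlying logic can be captured in a toy expected-value sketch. All of the payoffs and probabilities below are illustrative assumptions, not empirical estimates; the point is only to show how the incentive to strike first grows as confidence in the defender's second-strike capability erodes.

```python
# Toy expected-value model of first-strike incentives. All payoffs and
# probabilities are illustrative assumptions, not empirical estimates.

def first_strike_payoff(p_survives: float,
                        gain_if_disarmed: float = 100.0,
                        cost_if_retaliation: float = -1000.0) -> float:
    """Expected payoff of striking first, given the probability that the
    defender's second-strike capability survives the attack."""
    return (1 - p_survives) * gain_if_disarmed + p_survives * cost_if_retaliation

for p in (0.9, 0.5, 0.1, 0.01):
    print(f"P(second strike survives) = {p:4.2f} -> "
          f"expected payoff {first_strike_payoff(p):8.1f}")
```

In this toy model the expected payoff of striking first turns positive only when the aggressor is nearly certain that the defender's retaliatory forces will not survive, which is why anything that undermines confidence in NC3 survivability is so destabilizing.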
Crisis stability relies on mutual incentives to de-escalate during crises. Crisis stability does not seek to prevent political conflict but to avoid nuclear escalation. During the Cold War, the United States and the Soviet Union maintained crisis stability despite their deep antipathy. Even at the height of confrontations such as the Cuban Missile Crisis, the fear of nuclear war restrained both sides.
Arms-race stability emerges when new technologies do not threaten the strategic balance. Under these conditions, defenses are robust and security is abundant. In other cases, however, defensive innovations can make rivals feel less secure. New weapons systems often intensify the security dilemma, fueling arms racing and crisis instability. For example, the British launch of HMS Dreadnought in 1906 revolutionized naval power, sparking an intense Anglo-German arms race that lasted until the outbreak of World War I.
Emerging technology may cause something similar today. AI and cyber capabilities can be used to infiltrate and compromise NC3 systems, disrupting a nation’s ability to retaliate following a nuclear attack. The internet opens new avenues for espionage, proliferation, and information warfare that compromise the secrecy and security of nuclear arsenals. Advanced imaging technologies, such as high-resolution satellite imagery, can potentially expose the location of hidden nuclear facilities, making them vulnerable to preemptive strikes. This combination of new technologies threatens to unravel decades of nuclear stability.
Artificial Intelligence and NC3
AI is particularly concerning. Its appeal is obvious: artificial intelligence can source massive amounts of data across networks, process that data, and output recommendations to human operators, or even execute tasks independently. AI integration increases the speed of data analysis, making it a useful tool in conventional military scenarios. Integrating AI capabilities into NC3 systems would also allow the U.S. to detect and respond to nuclear attacks more efficiently. But speedy decision-making may work against stability, especially during crises that already compress the window for deliberation.
AI-enabled NC3 systems could undermine first-strike stability. AI applications that allow autonomous decision-making increase uncertainty in evaluating a potential threat or executing a first strike. AI integration also introduces vulnerabilities that adversaries can manipulate. In a worst-case scenario, an adversary could disrupt an NC3 system's ability to communicate, provide early warning, or execute a second strike. The opacity and limitations inherent to AI technology compound the risk of miscalculation.
AI integration can also lead to accidental escalation. AI-enabled NC3 tools drastically shorten decision-making time. Automation quickens the pace of warfare, as machines execute complicated tasks in seconds. Fear of losing quickly creates incentives for rapid responses, increasing the chances of miscalculation. Reliance on AI for nuclear decision-making during a crisis might inadvertently compromise states' ability to control escalation.
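A back-of-the-envelope timeline illustrates how little room there is. Every figure below is a rough, notional assumption, not an operational number, but the arithmetic shows why compressing assessment time is so tempting, and so dangerous.

```python
# Back-of-the-envelope launch-on-warning timeline. Every figure here is a
# rough, notional illustration, not an operational number.
flight_time_min = 30.0  # commonly cited rough ICBM flight time

phases_min = {
    "detection_and_tracking": 5.0,
    "attack_assessment": 7.0,
    "leadership_conference": 10.0,
    "authorization_and_execution": 5.0,
}

remaining = flight_time_min - sum(phases_min.values())
print(f"Minutes left for deliberation: {remaining:.0f}")
# Automating assessment can widen this sliver of deliberation time, but it
# also tempts decision makers to trust fast, lightly scrutinized machine
# output when minutes matter most.
```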
The asymmetric acquisition of cutting-edge AI technology creates the conditions for a destabilizing arms race. When one state achieves unmatched AI innovation, adversaries race to acquire comparable technology to level the playing field. Other nuclear-armed states cling more tightly to their arsenals to shore up their security, and technologically disadvantaged states may seek nuclear weapons of their own to bolster deterrence. These unstable conditions increase tension and the likelihood of conflict between competing states.
Despite these risks, artificial intelligence integration could improve situational awareness. AI systems excel at pattern recognition, a capability that could sharpen threat detection by distinguishing real attacks from false alarms. Integrating AI into NC3 systems could reduce the number of close calls by catching mistakes rooted in human error. A human operator facing a false alarm is more likely to succumb to stress and cognitive bias, creating the conditions for miscalculation. AI, by contrast, can quickly analyze incoming information against observed patterns. This data-driven approach could help avoid miscalculation. Impending NC3 modernization may realize these situational-awareness benefits, so long as its architects take the necessary steps to avoid the associated risks.
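To make this concrete, consider a minimal sketch of a human-on-the-loop alert filter. Everything here, the sensor names, the scoring model, and the thresholds, is an illustrative assumption rather than a description of any real NC3 architecture; the point is only to show how pattern-based corroboration can suppress false alarms while reserving escalation decisions for people.

```python
# Toy sketch of a human-on-the-loop alert filter. The sensor names, scores,
# and thresholds are illustrative assumptions, not a description of any
# real NC3 architecture.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str           # e.g. "ir_satellite", "ground_radar"
    anomaly_score: float  # 0.0 (routine) to 1.0 (highly attack-like)

def assess(readings: list[SensorReading],
           corroboration_needed: int = 2,
           score_floor: float = 0.8) -> str:
    """Escalate to a human operator only when multiple independent sensors
    corroborate a high anomaly score; never auto-execute a response."""
    corroborating = [r for r in readings if r.anomaly_score >= score_floor]
    if len(corroborating) >= corroboration_needed:
        return "ESCALATE_TO_HUMAN_OPERATOR"
    return "LOG_AND_MONITOR"

# A single high-scoring sensor (the classic false-alarm pattern) is logged,
# not escalated; two corroborating sensors trigger human review.
print(assess([SensorReading("ir_satellite", 0.95)]))
print(assess([SensorReading("ir_satellite", 0.95),
              SensorReading("ground_radar", 0.88)]))
```

Requiring corroboration across independent sensors mirrors the cross-checks that defused historical false alarms, and routing even high-confidence alerts to a human operator, rather than to an automated response, keeps deliberation in the loop.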
About the Author
Anna Miskelley is a current graduate student in the School of International Service’s United States Foreign Policy and National Security Program, focusing on cybersecurity. Her research interests include emerging technologies, Chinese foreign and security policy, and international relations in East Asia.