Can cyber nullify nuclear deterrence?

US military personnel observe an atom bomb test in 1951.

Photo: Everett Collection/Shutterstock

26 August 2025

Nuclear deterrence depends on the brutal certainty that a launched weapon will reach its target and trigger catastrophic destruction. But what if that certainty were no longer guaranteed? Even worse, what if nuclear weapons could backfire?

As conventional cyber capabilities and artificial intelligence-enabled systems advance, they are beginning to transform the role of deterrence. This transformation is urgently needed – we have a moral imperative to develop forms of deterrence that are not existential risks to human civilisation. Modern cyber capabilities can expose the fragility of nuclear deterrence and push the world toward safer alternatives.

Deterrence under cyber pressure

Nuclear deterrence only works when the threat of nuclear use is credible. This credibility demands that the chain of command from leader to launcher is reliable beyond doubt. However, as Andrew Futter writes in ‘Hacking the Bomb’, cyber operations introduce both direct and indirect risks to nuclear command and control, early warning, and weapons systems. Similarly, Herbert Lin shows in ‘Cyber Threats and Nuclear Weapons’ that cyber operations can threaten the integrity of a nuclear arsenal in various ways. 

A particularly striking example is the Stuxnet attack, which demonstrated direct risk by physically damaging Iran’s uranium enrichment centrifuges. This sophisticated operation also highlighted indirect risks by manipulating system feedback to show false ‘all-is-well’ signals, corrupting the information operators relied on and showing how espionage can precede destructive cyber operations.

Indeed, malware, spoofed signals, and supply chain backdoors can quietly infiltrate key subsystems long before any visible issues emerge. Full system failure is not required to undermine deterrence; compromising a single node – by manipulating sensor inputs, corrupting targeting data, or delaying authentication – can introduce hesitation at a critical moment.

This risk grows as modern military platforms become more digitally complex. Take, for instance, modern strategic bombers that rely on integrated avionics and satellite-linked mission systems. Here, even secure communication nodes, assumed to be hardened, often run on layers of commercial-grade firmware and software with exploitable vulnerabilities. While air-gapped systems and analogue fallbacks remain in place, they only protect the final layer of digital assets. No digital system is fully immune to persistent, well-resourced intrusion over time.

There is already evidence these dynamics are playing out. Several North Korean missile test failures in 2017, including multiple failed Hwasong-12 launches, may have resulted from covert cyber interference targeting guidance systems and propulsion. Whether intentional or coincidental, the failures fuelled uncertainty in Pyongyang’s arsenal and weakened its deterrent credibility.

When dysfunction deters

The possibility that cyber operations may cause nuclear weapon systems to malfunction – perhaps even backfiring against the launching state – opens the door to a novel form of strategic leverage: deterrence by dysfunction.

Unlike deterrence by punishment, which threatens overwhelming retaliation, or deterrence by denial, which aims to block an adversary’s objectives, deterrence by dysfunction targets something more intimate: the fear that pressing the button could trigger unintended consequences for one’s own side. In such a scenario, launching a nuclear strike becomes not just risky, but potentially self-endangering. That is a different kind of uncertainty, one that erodes the very logic of nuclear use.

Imagine a scenario in which military leaders are unsure whether a missile will follow its programmed trajectory, whether targeting data has been silently altered, or whether the authentication process has been spoofed without detection. The most chilling possibility is not that a strike fails, but that it succeeds in hitting the wrong target, potentially even within the launching state itself. That possibility, even if remote, fundamentally alters the psychology of deterrence.

This scenario is not without precedent. In 1991, a flaw was discovered in US nuclear targeting software that would have caused a missile to miss its intended target – an error that had gone undetected for months. More recently, in 2010, the US temporarily lost control of fifty ICBMs at F E Warren Air Force Base due to a dislodged computer circuit card, prompting concerns that hackers could exploit similar weaknesses to disable or misdirect missiles. Combined with AI-driven attack planning, cyber operations could, in principle, mimic internal failure, jam real-time updates, or flip command logic at just the wrong moment.

Even a one percent chance of failure could be enough to induce strategic hesitation, especially when the stakes involve irreversible catastrophe. A vivid example comes from the 2010 Eyjafjallajökull volcanic eruption, which led European governments to shut down much of the continent’s airspace, grounding over 100,000 flights. The decision was made despite highly uncertain models and a very low estimated probability that ash would actually cause engine failure. Yet the mere possibility of disaster prompted authorities to accept billions in economic losses rather than risk a single catastrophe.
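The logic of that hesitation can be made concrete with a toy expected-cost comparison. The sketch below is purely illustrative – every number in it is a hypothetical stand-in, not an estimate of real probabilities or losses – but it shows why a one percent chance of catastrophic malfunction can outweigh the entire expected gain of acting:

```python
# Toy expected-cost model: why a small failure probability can dominate a
# decision when the downside is catastrophic. All numbers are hypothetical
# illustrations, not estimates of any real scenario.

def expected_cost(p_failure: float, cost_failure: float, cost_success: float) -> float:
    """Expected cost of acting, averaging the failure and success outcomes."""
    return p_failure * cost_failure + (1 - p_failure) * cost_success

p_failure = 0.01          # even a one percent chance of cyber-induced malfunction
cost_failure = 1_000_000  # catastrophic, effectively unbounded loss (arbitrary units)
cost_success = -100       # a modest strategic gain if everything works (negative cost)

launch = expected_cost(p_failure, cost_failure, cost_success)
hold = 0.0  # baseline: do not act, incur neither gain nor catastrophe

# 0.01 * 1,000,000 = 10,000 of expected loss swamps the 0.99 * 100 = 99 of
# expected gain, so holding back is the lower-cost choice.
print(launch > hold)
```

On these stand-in numbers, the one-in-a-hundred catastrophe contributes two orders of magnitude more expected cost than the near-certain gain contributes benefit – the same asymmetry that led authorities to ground flights during the Eyjafjallajökull eruption despite a very low estimated probability of engine failure.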

Such a psychological effect does not necessarily require a proven successful cyberattack on a nuclear arsenal. It only requires that decision-makers believe that compromise is plausible and that they cannot be fully certain of system integrity in real time. 

The natural response may be to invest heavily in hardening: improved cybersecurity protocols, quantum encryption, analogue redundancies, and AI-driven anomaly detection. But no digital system is invulnerable to sustained, well-resourced intrusion. And the more technologically advanced a nuclear force becomes, the more it relies on a complex web of interdependent systems, each offering a potential point of failure. What erodes deterrence is not necessarily a single dramatic breach but the slow accumulation of doubt.

If capable adversaries begin issuing consistent, credible cyber threats, especially those targeting command-and-control infrastructure or critical subsystems, the psychological burden on nuclear decision-makers will grow. Even without a demonstrated attack, the possibility that one’s arsenal could be silently compromised or manipulated introduces a novel layer of friction. In the nuclear realm, where the cost of miscalculation is existential, even a small degree of uncertainty may be enough to erode the appeal of nuclear deterrence itself.

Nuclear deterrence: the beginning of the end?

If states were to adopt a cyber doctrine aiming to undermine confidence in rivals’ nuclear weapons systems, the implications would be profound. For countries facing nuclear-armed rivals, credible cyber operations could serve as a digital counterforce: eroding trust in an adversary’s arsenal without ever needing nuclear weapons of their own. This shift could alter the incentives that often lead to conflict. A state like Iran, for instance, might find that it can influence the behaviour of a nuclear-armed rival without pursuing a bomb of its own. This could help eliminate one of the most dangerous drivers of war: the fear that a rival is about to cross the nuclear threshold and must be stopped with a preemptive strike.

Ultimately, humanity has a responsibility to pursue forms of deterrence that do not threaten to end life on Earth. Cyber may seem like an unlikely ally in that effort. But, by making nuclear use seem increasingly reckless, it may become one of the most powerful tools we have to undermine nuclear weapons and build a safer future for generations to come.