AI won’t clear the fog of nuclear war

Edward Geist, ‘Deterrence under Uncertainty’
21 April 2026

Since 1945, nuclear strategists have lived with a question that refuses to go away: Can nuclear war ever be won? This question sits at the edge of doctrine and strategic imagination, revisited whenever new technologies promise sharper vision or better targeting. Yet for all the conceptual effort invested in it, it has never been tested in practice. Nuclear war has remained theoretical. This is not a simple matter of luck. Rather, it reflects a deeper structural reality: no major nuclear power has ever been sufficiently confident in its ability to control escalation or guarantee victory to risk learning the answer.

Each technological wave has nonetheless revived the temptation. Improvements in accuracy, surveillance, and computing have repeatedly raised the possibility that this uncertainty might finally be mastered. If only we could see more clearly, track missiles and submarines more reliably, and decide more quickly, perhaps nuclear war could become predictable enough to manage. Edward Geist’s ‘Deterrence under Uncertainty’ enters this familiar cycle at another moment of technological transition, when AI is being widely touted as a potentially powerful tool for removing uncertainty. Geist argues calmly, methodically, and clearly that the fog of war is not about to disappear.

He is not a futurist in search of disruption. He writes more like a technically astute strategist determined to restore realism to a debate that often swings between hype and alarmism. Despite the book’s clarity, it rewards attentive reading, particularly when it ventures into the mathematics and operational challenges of tracking survivable forces. These more technical discussions are, for the most part, confined to later sections and appendices, and readers do not need to understand every equation to grasp the strategic argument. What matters is the intellectual discipline behind it: the recognition that sensing and targeting in wartime are adversarial problems, not engineering problems awaiting an ultimate solution.

The cat-and-mouse problem

Geist’s central insight is simple: even if AI enhances data collection and analysis, the battlefield will not become transparent. In this cat-and-mouse game, the other side is not a passive object waiting to be discovered but an active agent attempting to deceive, conceal, and manipulate. Improvements in detection mean counter-improvements in evasion. Better classification systems invite spoofing and data poisoning. Every gain in visibility generates incentives to create new forms of obscurity. In this sense, AI may reduce some uncertainties while multiplying others.
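The logic is easy to sketch in miniature. The toy model below is my own illustration, not anything from the book: each round, a detector raises its sensitivity, and the evader answers by deepening its concealment, so the net probability of detection barely moves.

```python
# Toy model (illustrative only, not from the book): a detector and an
# evader adapt in alternation. Each detector upgrade is answered by a
# concealment counter-move, so detection probability barely improves.
import math

def detection_prob(sensitivity: float, concealment: float) -> float:
    """Logistic model: detection odds grow with sensitivity minus concealment."""
    return 1.0 / (1.0 + math.exp(-(sensitivity - concealment)))

sensitivity, concealment = 1.0, 1.0
for round_no in range(1, 6):
    sensitivity += 0.5   # better sensors, better AI classifiers
    p_upgrade = detection_prob(sensitivity, concealment)
    concealment += 0.5   # decoys, emission control, spoofing
    p_counter = detection_prob(sensitivity, concealment)
    print(f"round {round_no}: p after upgrade {p_upgrade:.2f}, "
          f"after counter-move {p_counter:.2f}")
```

Each upgrade buys a temporary edge (detection rises to about 0.62) that the next counter-move erases (back to 0.50); in this stylised game, no amount of investment brings the detector close to certainty.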

Here, the book echoes a classical insight. The fog of war was never simply the absence of information. It also encompasses friction, human error, and the (largely) independent will of the adversary. New technologies can mitigate some of these factors, but they also introduce new layers of complexity and fresh avenues for deception. One of the most dangerous consequences is high-confidence misinterpretation: situations in which decision-makers place unwarranted trust in assessments that appear precise, even though they rest on incomplete, misleading, or manipulated data.

An AI-enabled system, for example, might confidently classify decoys or spoofed signals as genuine targets, creating an illusion of clarity where none really exists. Systems that promise transparency can therefore create vulnerabilities if their outputs are treated as more certain than they truly are. Decision-makers may be operating on fragile inferences that collapse under pressure. The result is not a fully transparent battlefield but a more intricate and contested epistemic environment, in which belief in one’s knowledge can shape decisions and outcomes as much as actual capability.
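The failure mode is simple to reproduce in miniature. In the sketch below (again my own toy, with invented feature names, not an example from the book), a classifier trained on clean examples of genuine targets versus background clutter reports near-certainty when shown a decoy crafted to sit inside the genuine feature region; its stated confidence reflects fit to the training data, not truth about the world.

```python
# Toy illustration (not from the book): a classifier trained on clean
# "genuine target" vs "background" signatures is near-certain about a
# decoy deliberately crafted to mimic the genuine feature profile.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two invented features per observation (e.g. notional cross-section
# and emission level).
genuine = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
background = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
X = np.vstack([genuine, background])
y = np.array([1] * 200 + [0] * 200)   # 1 = genuine target

clf = LogisticRegression().fit(X, y)

# A decoy built to sit squarely inside the "genuine" feature region.
decoy = np.array([[2.1, 1.9]])
print(f"P(genuine | decoy) = {clf.predict_proba(decoy)[0, 1]:.3f}")  # near 1.0
```

The number the system reports is a statement about its training distribution, not about the object on the ground, and that is precisely the gap an adversary exploits.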

The stability of the fog of war

The book strikes a somewhat ambivalent tone, pessimistic about technological salvation. AI will not deliver a reliable ‘splendid first strike’, nor will it eliminate the risks of escalation or miscalculation. At the same time, this limitation may offer some stability. A world in which nuclear powers genuinely believe they can execute clean, decisive first strikes would be far more dangerous than one in which uncertainty persists. The endurance of fog and friction constrains overconfidence. It keeps the most destabilising fantasies of control from becoming fully credible.

Although Geist’s analysis centres on nuclear deterrence, its implications extend beyond it. Conventional deterrence and even cyber competition are increasingly shaped by AI-enabled sensing and decision systems. In these domains too, the interplay between detection and deception, certainty and misinterpretation, is intensifying. Yet the nuclear realm remains distinctive because mistakes are most likely irrecoverable. In conventional war, tactical errors can sometimes be absorbed and corrected. In nuclear competition, the margin for learning is close to zero. This is why beliefs about vulnerability, control, and winnability carry such weight. They shape posture and perception long before any crisis begins.

The reassurance problem

One theme that emerges indirectly from the book, and perhaps deserves greater emphasis, is reassurance. Deterrence is often framed as the art of preventing undesirable action through credible threats. But long-term stability has always depended on a parallel process of reassurance: convincing adversaries that restraint will be rewarded. Deterrence without reassurance produces chronic paranoia; reassurance without deterrence invites opportunism. The relative durability of nuclear peace has rested on an uneasy combination of both. AI-driven uncertainty makes this balance even more pertinent. If new technologies amplify fears of a surprise attack or loss of control, then credible signals of restraint and mutual vulnerability are not luxuries but necessities.

Geist’s discussion of what he terms ‘reconstructivism’, the idea that strategic outcomes are shaped not only by material capabilities but also by the beliefs and expectations constructed around them, points toward this broader terrain. Technologies such as AI do more than provide information. They influence what decision-makers think is knowable, how confident they feel in their assessments, and how they interpret the adversary’s intentions. In this sense, the strategic impact of AI lies partly in the gap it opens between perceived and actual certainty. Managing that gap may become one of the central challenges of deterrence in the coming decades.
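That gap can even be given a crude number. The sketch below, once more my own toy rather than anything Geist presents, trains a classifier on clean data and then compares its mean stated confidence with its actual accuracy as the test data drift away from the training conditions.

```python
# Toy illustration (not from the book): measure the gap between a
# model's stated confidence and its actual accuracy as test data
# drift away from the training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(shift: float, n: int = 500):
    """Two-class data; `shift` slides class 1 toward class 0's region."""
    x0 = rng.normal([0.0, 0.0], 0.7, size=(n, 2))
    x1 = rng.normal([2.0 - shift, 2.0 - shift], 0.7, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = sample(shift=0.0)
clf = LogisticRegression().fit(X_train, y_train)

for shift in (0.0, 1.0, 2.0):
    X_test, y_test = sample(shift)
    proba = clf.predict_proba(X_test)
    confidence = proba.max(axis=1).mean()               # perceived certainty
    accuracy = (proba.argmax(axis=1) == y_test).mean()  # actual performance
    print(f"shift={shift:.1f}  confidence={confidence:.2f}  accuracy={accuracy:.2f}")
```

At zero drift, stated confidence and accuracy agree; as the drift grows large, accuracy collapses toward a coin flip while mean reported confidence remains roughly where it started, a caricature of the perceived-versus-actual gap the book worries about.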

What makes ‘Deterrence under Uncertainty’ especially worth reading today is its unusual combination of technical seriousness and conceptual restraint. It does not promise a new grand theory or a definitive solution to nuclear rivalry. Instead, it offers something arguably more valuable: a disciplined way of thinking about emerging technologies in a domain where overconfidence has always been especially dangerous. For readers interested in the evolution of nuclear deterrence in a digitally saturated strategic environment, it provides a grounded starting point.

At the very end of the book, Geist reflects on what it would actually mean to ‘win’ a nuclear war: ‘Certain nuclear wars are potentially winnable, but generally only against adversaries who are exceptionally weak or incompetent.’ The author does not spell out a precise vision of such a victory, but the implication is clear enough: such a result would likely depend on the other side failing to retaliate effectively because its forces, command systems, or decision-making break down under pressure. Victory, in such a scenario, depends less on one’s own brilliance than on the other side’s failure.

The implication is hard to ignore. When states behave as though nuclear victory is conceivable, others inevitably interpret this as a judgment about their own weakness. They assume they are being treated as potential pushovers. In a world where survival depends on credible retaliation, few signals are more destabilising.