Warbot 2.0: Reflections on the fast-changing world of AI in national security
In 2021, when I wrote ‘I, Warbot’, I was struck by the creative potential of Artificial Intelligence (AI) and by broader connections between creativity and warfare. The three years since have turbo-charged developments, opening new questions about how advances in AI could fundamentally alter the strategy and logic of conflict.
Back then, I was playing with OpenAI’s GPT-2, a new tool you could use to write the next paragraph of a New Yorker article in a reasonably convincing fashion. If you didn’t know it was a machine, you might be fooled into thinking the author was a human. That’s the famous ‘Turing Test’, or imitation game, first suggested by the polymath inventor of modern computing, Alan Turing. This, I thought, was radical.
Most of the thinking about AI in national security circles then (and, indeed, now) focussed on more prosaic matters. Could you use AI to crunch through vast mountains of intelligence data, looking for tiny clues? Could you make a fully automated weapon that could find and kill targets entirely without human involvement? What would the ethical implications of that be?
I wrote about these things too in ‘I, Warbot’. Clearly, they are hugely important, with the potential to reshape the ways societies think about conflict and armed forces. They’ll demand new tactics, new organisations, as well as new weapon systems. AI might even play a part in the design of weapons and the concepts through which they are employed.
The conduct of war, what Clausewitz called its ‘grammar’, would be in flux in a new, automated era. The repercussions of certain behaviours would become unclear. What happens when an adversary captures an uncrewed ship or indulges in blisteringly quick automated hacking? Militaries will need different skills to harness these technical breakthroughs, including new leadership abilities. There will likely be resulting shifts, perhaps dramatic ones, in the balance of power between states.
AI becomes a strategist
These are important developments. But the changes I was just beginning to see were much more profound—changes with the potential to shift strategy, policy, and more.
Strategy is higher-level thinking about war—connecting the goals we want (or think we want, at least) with the tools at our disposal. Strategy is about how societies imagine the future and create ways to achieve it. Strategists need imagination, insight, and the ability to cope with uncertainty. Unsurprisingly, strategy development can be stressful and emotional.
All in all, computers have not been good at strategic thinking. Their strengths, traditionally, lie in more structured domains, where their superior computing power can crunch through huge datasets, seeking and exploiting patterns. ‘Brute force’ computation has delivered spectacular results in narrow domains like video games. But strategy is a different order of complexity.
One of the central problems in strategy is ‘mind-reading’—figuring out what other motivated, intelligent agents want to do. Are they allies or adversaries? How far can you trust them? It is a distinctively human skill. You can get a very long way without ‘mind-reading’ in the structured universes of board games, and even in less structured environments like poker, with its interplay of chance and skill. AI already outplays humans in both, drawing on its unfailing memory for past encounters and its ability to search far ahead through possible moves.
But in the mess of real-world strategy, actors need to chart unstructured territories. Human ‘mind-reading’ is rich and multifaceted, with emotional dimensions that machines have previously lacked. Traits such as cognitive empathy—the ability to mentally inhabit someone else’s mind and see things from their perspective—become essential.
Even so, it’s evident that today’s language models, like GPT-4 and Gemini, have some of the skills needed. They demonstrate decent ‘theory of mind’ abilities and, relatedly, the capacity to deceive deliberately. That makes them more sophisticated potential strategists, which is cause for some alarm. Can we control AI like that?
Science fiction and beyond
My title, ‘I, Warbot’, deliberately riffed on Asimov’s work. His famous ‘Laws of Robotics’ presented a challenge: robots were prohibited from harming humans or themselves. That simply doesn’t work in conflict, where AI will most certainly be employed deliberately to harm. But his second law was key to me and centred on this crucial territory of ‘mind-reading’: a robot must obey the orders given to it by humans. To do that, the machine must understand humans. Like Asimov, I wanted to imagine machines that could gauge our intentions and try to satisfy them.
This is sometimes known as the ‘alignment problem’—how can we ensure these super-powerful algorithms are attuned to our wishes? It’s certainly not easy—often, we don’t know what we want or (harder still) what our future selves will want. What chance do machines have of interpreting our messy, sometimes conflicting goals?
Alignment is a mighty challenge, but at least language models, with their emergent ‘mind-reading’ insights about the perspectives of other agents, are in the game. And this ‘theory of mind’ ability is a double-edged sword—for them, just as it is for us. Proficient mind-readers can better understand the intentions of others, for good and for ill.
Where might AI go next? Mind-reading abilities might improve considerably as language models inevitably scale over the coming years. But a step change in AI might require new approaches altogether. Demis Hassabis, the visionary co-founder of DeepMind, thinks that a hybrid approach is needed, one that brings more robust reasoning abilities to language models and that offsets their tendency to ‘hallucinate’ nonsense or misleading outputs. Perhaps.
Another possible avenue is to broaden beyond prose. Transformers, the architecture underpinning today’s language models, handle written language well, but text is only one kind of data. Could transformers interpret our tone of voice? Perhaps even model body language? Let’s be even more futuristic here: what about pheromones? A richer, multimodal way of understanding humans might ensue, getting at deeper questions of intention.
We now get to something even more speculative that I touched on in ‘I, Warbot’: the prospect of living machines, biocomputing, mind-merges, chimeras, and other exotica from the boundaries of science and science fiction. What strategic insights might they unleash? What would such ‘living machines’ want? Being alive imbues us with deep motivations—to survive, to reproduce. Would biological machines share these? Would they feel emotions like ours? Would they be unambiguously our servants?
Aficionados like to talk about building ‘Artificial General Intelligence’, or ‘AGI’, which tends to mean creating machines that think like we do. Yet human intelligence really isn’t general at all. We experience only a slice of ‘reality’, limited by our sensory organs. And then we fit all that information into a useful internal model—useful for us, that is, lumbering around in our human bodies and our human groups.
If we’ve one claim to distinctiveness, it’s our uniquely intense social intelligence, replete with language and empathy. If machines can manifest that, we’ll have unleashed a powerful and entirely novel force in our strategic affairs. That would be radical, and yet it is still only a tiny sliver of the overall space of possible artificial intelligences. What are the implications of machines elsewhere in that wider territory? Perhaps it’s time for a sequel!