
Warbot 2.0: Reflections on the fast-changing world of AI in national security

Kenneth Payne reflects on his book, ‘I, Warbot’, and the advances in AI since its publication in 2021
Image created with the assistance of DALL-E 2

In 2021, when I wrote ‘I, Warbot’, I was struck by the creative potential of Artificial Intelligence (AI) and by broader connections between creativity and warfare. The three years since have turbo-charged developments, opening new questions about how advances in AI could fundamentally alter the strategy and logic of conflict.

Back then, I was playing with OpenAI’s GPT-2, a new tool you could use to write the next paragraph of a New Yorker article in a reasonably convincing fashion. If you didn’t know it was a machine, you might be fooled into thinking the author was a human. That’s the famous ‘Turing Test’, or imitation game, first suggested by the polymath inventor of modern computing, Alan Turing. This, I thought, was radical.

Most of the thinking about AI in national security circles then (and, indeed, now) focussed on more prosaic matters. Could you use AI to crunch through vast mountains of intelligence data, looking for tiny clues? Could you make a fully automated weapon that could find and kill targets entirely without human involvement? What would the ethical implications of that be? 

I wrote about these things too in ‘I, Warbot’. Clearly, they are hugely important, with the potential to reshape the ways societies think about conflict and armed forces. They’ll demand new tactics, new organisations, as well as new weapon systems. AI might even play a part in the design of weapons and the concepts through which they are employed. 

The conduct of war, what Clausewitz called its ‘grammar’, would be in flux in a new, automated era. The repercussions of certain behaviours would become unclear. What happens when an adversary captures an uncrewed ship or indulges in blisteringly quick automated hacking? Militaries will need different skills to harness these technical breakthroughs, including new leadership abilities. There will likely be resulting shifts, perhaps dramatic ones, in the balance of power between states.

AI becomes a strategist

These are important developments. But the changes I was just beginning to see were much more profound—changes with the potential to shift strategy, policy, and more. 

Strategy is higher-level thinking about war—connecting the goals we want (or think we want, at least) with the tools at our disposal. Strategy is about how societies imagine the future and create ways to achieve it. Strategists need imagination, insight, and the ability to cope with uncertainty. Unsurprisingly, strategy development can thus be stressful and emotional. 

All in all, computers have not been good at strategic thinking. Their strengths, traditionally, lie in more structured domains, where their superior computing power can crunch through huge datasets, seeking and exploiting patterns. ‘Brute force’ computing power has delivered spectacular results in narrow domains like video games. But strategy is a different order of complexity.

One of the central problems in strategy is ‘mind-reading’—figuring out what other motivated, intelligent agents want to do. Are they allies or adversaries? How far can you trust them? It is a distinctively human skill. You can get a very long way without ‘mind-reading’ in the structured universes of board games, and even in less structured environments like poker, with its interplay of chance and skill. AI already outplays humans in both, drawing on its unfailing memory for past encounters and its ability to search far ahead through possible moves. 

But in the mess of real-world strategy, actors need to chart unstructured territories. Human ‘mind-reading’ is rich and multifaceted, with emotional dimensions that machines have previously lacked. Traits such as cognitive empathy—the ability to mentally inhabit someone else’s mind and see things from their perspective—become essential. 

Even so, it’s evident that today’s language models, like GPT-4 and Gemini, have some of the skills needed. They demonstrate decent ‘theory of mind’ abilities and, relatedly, the capacity to deceive deliberately. That makes them more sophisticated potential strategists, and that is cause for some alarm. Can we control AI like that? 

Science-fiction and beyond

My title, ‘I, Warbot’, deliberately riffed on Asimov’s work. His famous ‘Laws of Robotics’ presented a challenge: robots were prohibited from harming humans or themselves. That simply doesn’t work in conflict, where AI will most certainly be employed deliberately to harm. But his second rule was key to me and centred on this crucial territory of ‘mind-reading’: a robot must obey orders given to it. To do that, the machine must understand humans. Like Asimov, I wanted to imagine machines that could gauge our intentions and try to satisfy them. 

This is sometimes known as the ‘alignment problem’—how can we ensure these super-powerful algorithms are attuned to our wishes? It’s certainly not easy—often, we don’t know what we want or (harder still) what our future selves will want. What chance do machines have of interpreting our messy, sometimes conflicting goals? 

Alignment is a mighty challenge, but at least language models, with their emergent ‘mind-reading’ insights about the perspectives of other agents, are in the game. And this ‘theory of mind’ ability is a double-edged sword—for them, just as it is for us. Proficient mind-readers can better understand the intentions of others, for good and for ill. 

Where might AI go next? Mind-reading abilities might improve considerably as language models inevitably scale over the coming years. But a step change in AI might require new approaches altogether. Demis Hassabis, the visionary co-founder of DeepMind, thinks that a hybrid approach is needed, one that brings more robust reasoning abilities to language models and offsets their tendency to ‘hallucinate’ nonsense or misleading outputs. Perhaps. 

Another possible avenue is to broaden beyond prose. Transformers, the general architecture underpinning language models, handle prose well, but language is just one kind of data. Could transformers interpret our tone of voice? Perhaps even model body language? Let’s be even more futuristic here: what about pheromones? A richer, multimodal way of understanding humans might ensue, getting at deeper questions of intention. 

We now get to something even more speculative that I touched on in ‘I, Warbot’: the prospect of living machines, biocomputing, mind-merges, chimeras, and other exotica from the boundaries of science and science fiction. What strategic insights might they unleash? What would such ‘living machines’ want? Being alive imbues us with deep motivations: to survive, to reproduce. Would biological machines share these? Would they feel emotions like ours? Would they be unambiguously our servants? 

Aficionados like to talk about building ‘Artificial General Intelligence’, or ‘AGI’, which tends to mean creating machines that think like we do. Yet human intelligence really isn’t general at all. We experience only a slice of ‘reality’, limited by our sensory organs. And then we fit all that information into a useful internal model—useful for us, that is, lumbering around in our human bodies and our human groups. 

If we’ve one claim to distinctiveness, it’s our uniquely intense social intelligence, replete with language and empathy. If machines can manifest that, we’ll have unleashed a powerful and entirely novel force in our strategic affairs. That is radical but still only a tiny sliver in the overall space of possible artificial intelligences. What are the implications of machines elsewhere in that wider territory? Perhaps it’s time for a sequel! 

Terms and Conditions for the AI-Cybersecurity Essay Prize Competition

Introduction

The AI-Cybersecurity Essay Prize Competition (the “Competition”) is organized by the European Cyber Conflict Research Incubator (“ECCRI CIC”) in partnership with the Munich Security Conference (“MSC”). It is sponsored by Google (the “Sponsor”). By entering the Competition, participants agree to these Terms and Conditions (T&Cs).

Eligibility

The Competition is open to individuals worldwide who are experts in the fields of cybersecurity and artificial intelligence (“AI”). Participants must ensure that their participation complies with local laws and regulations.

Submission Guidelines

Essays must address the question: “How will Artificial Intelligence change cybersecurity, and what are the implications for Europe? Discuss potential strategies that policymakers can adopt to navigate these changes.”

Submissions must be original, unpublished works of between 800 and 1,200 words, excluding footnotes but including hyperlinks for references.

Essays must be submitted by 2 January 2025, 00:00 CET, through the official submission portal provided by ECCRI CIC.

Only single-authored essays are accepted. Co-authored submissions will not be considered.

Participants are responsible for ensuring their submissions do not infringe upon the intellectual property rights of third parties.

Judging and Awards

Essays will be judged by a review board comprising distinguished figures from academia, industry, and government, based on insightfulness, relevance, originality, clarity, and evidence.

The decision of the review board is final and binding in all matters related to the Competition.

Prizes are as follows: 1st place: €10,000; 2nd place: €5,000; 3rd place: €2,500; 4th and 5th places: €1,000 each. The winner will also be invited to attend the Munich Security Conference.

Intellectual Property Rights

The author retains ownership of the submitted essay.

By submitting the essay, the author grants ECCRI CIC exclusive, royalty-free rights to use, reproduce, publish, distribute, and display the essay for purposes related to the Competition, including but not limited to educational, promotional, and research-related activities.

The author represents, warrants, and agrees that no essay submitted as part of the Competition violates or infringes upon the rights of any third party, including copyright, trademark, privacy, publicity, or other personal or proprietary rights; breaches or conflicts with any obligation, such as a confidentiality obligation; or contains libellous, defamatory, or otherwise unlawful material.

The author agrees that the organizers may use the author’s name (or pseudonym) and an image of the author in association with the essay for purposes of publicity, promotion, and any other activity related to the exercise of the organizers’ rights under these Terms.

The organizers may remove any essay-related content from their platforms at any time and without explanation.

The organizers may block contributions from particular email or IP addresses without notice or explanation.

The organizers may enable advertising on their platforms and associated social media accounts, including in connection with the display of the essay. The organizers may also use the author’s Material to promote their products and services.

The organizers may, at their sole discretion, categorise Material, whether by means of ranking according to popularity or by any other criteria.

Data Protection

Personal information collected in connection with the Competition will be processed in accordance with Virtual Routes’ Privacy Policy. Participants agree to the collection, processing, and storage of their personal data for the purposes of the Competition.

Liability and Indemnity

ECCRI CIC, MSC, and the Sponsor will not be liable for any damages arising from participation in the Competition, except where prohibited by law.

Participants agree to indemnify ECCRI CIC, MSC, and the Sponsor against any claims, damages, or losses resulting from a breach of these T&Cs.

General Conditions

ECCRI CIC reserves the right to cancel, suspend, or modify the Competition or these T&Cs if fraud, technical failures, or any other factor beyond ECCRI CIC’s reasonable control impairs the integrity or proper functioning of the Competition, as determined by ECCRI CIC in its sole discretion.

Any attempt by any person to deliberately undermine the legitimate operation of the Competition may be a violation of criminal and civil law, and, should such an attempt be made, ECCRI CIC reserves the right to seek damages from any such person to the fullest extent permitted by law.

Governing Law

These Terms and Conditions are governed by the laws of the United Kingdom, without regard to its conflict of law principles. Any dispute arising out of or in connection with these Terms and Conditions, including any question regarding its existence, validity, or termination, shall be referred to and finally resolved by the courts of the United Kingdom. The participants agree to submit to the exclusive jurisdiction of the courts located in the United Kingdom for the resolution of all disputes arising from or related to these Terms and Conditions or the Competition.