Deception can enable private-sector initiative persistence

Reports (now contested) that the Trump administration has directed key agencies to suspend cyber operations against Russia have sparked new debates about the importance of initiative persistence in the cyber domain. Initiative persistence – the strategic principle that an actor must operate persistently in cyberspace to set the conditions of security in their favour – is central to US military thinking.
As a strategy, initiative persistence emerged from the academic body of work known as cyber persistence theory. The appointment of leading proponents of that theory into prominent cyber roles in the new administration suggests that it is likely to become further entrenched in the US approach to cyber conflict.
However, while cyber persistence theory has been influential among policymakers and government practitioners in the US and beyond, it has been less widely adopted by companies and other private-sector organisations.
There have been longstanding calls for organisations to adopt a more proactive approach to cybersecurity. Yet whenever a more active approach is discussed, concerns are raised about its potential downsides and risks. As a result, passive, compliance-based approaches that cede the initiative to adversaries remain the predominant response.
Broadening our understanding of ‘cyber defence’ to include actions that aim to alter adversaries’ mental state could change this dynamic.
Spaces and spectrums
Concern over the risks inherent in a more active approach is driven by a particular conceptualisation of cyber conflict. Two underlying metaphors are important here.
The first is a spatial understanding of cyber activities as lying on a spectrum, ranging from defensive activities at one end to offensive activities at the other, with more active forms of cyber defence occupying a grey space in between. The second is the portrayal (as set out in US military joint publications) of the ‘cyber environment’ as divided into friendly (blue), neutral (grey), and hostile (red) spaces.
These spatial metaphors align with a conception of cyber conflict that distinguishes between offensive and defensive cyber operations. Offensive cyber operations are understood to take place in the adversary’s red space and involve the disruption of adversary technology. Defensive cyber operations, meanwhile, take place in the protected ‘blue space’, potentially moving into the grey zone as they become more active.
These two underlying metaphors have fostered a narrow view of what it is to be ‘active’ in cyberspace, equating it with actions outside one’s networks and the disruption of an adversary’s technology (the dreaded ‘hack back’). This understanding has forestalled efforts to promote a more active approach by defenders.
Focus on effects
As Marcus Willett has argued, many actions described as ‘offensive cyber’ do not take place on adversaries’ networks and are not aimed at disrupting technology. Willett contends that instead of labelling activities as offensive or defensive cyber operations, we should simply refer to them as ‘cyber operations’ – any actions in cyberspace meant to create effects.
With this understanding, a cyber operation might be intended to disrupt technology. However, it might also seek to affect a person’s mental state or their understanding of the world. The UK National Cyber Force’s (NCF) clearest public statement on its operations emphasises the value of what it terms ‘cognitive effect’.
For the NCF, cyber operations can be used to affect an adversary’s sensemaking or information-processing capabilities, their ability to communicate, or their confidence in their systems – all examples of cognitive effect. Cyber operations aimed at cognitive effect seek to ‘change adversary behaviour by exploiting their reliance on digital technology’.
Incorporating cognitive effects into cyber operations opens the door to influencing adversaries without disrupting their technology, or even without leaving one’s own networks. This provides an avenue for the private sector to take the initiative against adversaries in cyberspace.
Persistent (adversary) engagement
An active approach to cyber defence does not have to involve retaliatory, disruptive attacks on adversary systems. Nor does it imply that private actors should attempt deterrence through the threat of punishment. Rather, companies could adopt an active approach by making greater use of techniques variously described as deception for cyber defence, cognitive disruptive operations, and adversary engagement.
The goal, following the NCF, would be to exploit the adversary’s reliance on digital technology to shape their behaviour. Two examples give an indication of how companies could generate cognitive effect on adversaries through deception.
First, an organisation might lead adversaries to believe that its networks include deceptive artefacts such as honeypots and honeytokens. Inducing this belief could be as simple as posting a warning about the presence of deception in a protected system.
One prominent study found that some red teamers (experts hired to breach networks and test defences) experienced heightened self-doubt and confusion when informed that deception was present on a target network, even when this was not true. Deceptive claims about deception therefore created cognitive effects on the adversary.
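The mechanics of a honeytoken are simple enough to sketch. The snippet below is an illustrative Python example, not any real product’s API: a decoy credential is generated, registered for alerting, and any later use of it is treated as a high-confidence intrusion signal, since the token grants no real access and no legitimate user should ever present it.

```python
import secrets

def make_honeytoken(service: str) -> dict:
    """Create a decoy credential with no real access.

    Any use of this token is by construction unauthorised, so a
    single hit on it is a high-confidence alert. (Hypothetical
    format for illustration only.)
    """
    return {
        "service": service,
        # Plausible-looking but entirely fake key material.
        "api_key": "AK-DECOY-" + secrets.token_hex(12),
    }

def is_honeytoken_use(presented_key: str, registry: set) -> bool:
    """Check a presented key against the registry of planted decoys."""
    return presented_key in registry

# Plant a decoy in a tempting location and register it for alerting.
token = make_honeytoken("billing-db")
registry = {token["api_key"]}

# Later, an authentication attempt using the decoy key fires an alert;
# a genuine key does not.
print(is_honeytoken_use(token["api_key"], registry))   # True
print(is_honeytoken_use("AK-REAL-abc123", registry))   # False
```

Note that, per the red-team study above, even announcing that such artefacts exist can generate the cognitive effect – whether or not any tokens are actually deployed.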
Second, an organisation might use deception to contest an adversary’s activities. Telecoms provider O2 has showcased a system for contesting the actions of criminal groups engaged in phone scams. The system uses an AI ‘Granny’ designed to waste the time of fraudsters who dial ‘her’ number by providing deliberately tedious and evasive responses to their attempts at social engineering. Recordings released by the company reveal fraudsters becoming audibly angry with the automated system – demonstrating that cognitive effect has been achieved.
It is difficult to fit the AI Granny into the spatial metaphors for conflictual activities in cyberspace; it is not a tool that damages or disrupts the adversary’s network in the red space, but nor is it obviously in the ‘blue’ space of defence. The AI Granny is best understood within a cyber operations framework that combines cyber persistence theory and cognitive effects.
Rather than being strictly offensive or defensive, it exemplifies a cyber operation designed to frustrate and unsettle adversaries (cognitive effect), actively contesting their ability to achieve their objectives in cyberspace (initiative persistence).
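The core logic of such a time-wasting responder is straightforward. The sketch below is a minimal illustration of the idea, not O2’s actual system: every scammer message is met with a canned, evasive stalling reply that never yields real information, while the time consumed is tallied.

```python
import itertools

# Deliberately tedious replies in the spirit of the AI 'Granny'
# (invented here for illustration).
STALLING_REPLIES = [
    "Oh, hold on dear, let me find my glasses first...",
    "Sorry, what was that? The kettle was whistling.",
    "Card number? It starts with a four. Or is it a nine?",
    "My grandson usually helps me with the computer. He's at school.",
]

class TimeWaster:
    """Answer every scammer message with an evasive stalling reply,
    never disclosing real information, and tally the time consumed."""

    def __init__(self, seconds_per_turn: float = 45.0):
        self._replies = itertools.cycle(STALLING_REPLIES)
        self.seconds_per_turn = seconds_per_turn
        self.seconds_wasted = 0.0

    def respond(self, scammer_message: str) -> str:
        # The reply ignores the content of the request entirely:
        # evasiveness, not engagement, is the point.
        self.seconds_wasted += self.seconds_per_turn
        return next(self._replies)

bot = TimeWaster()
for msg in ["Read me your card number", "Ma'am, the card number", "Hello?"]:
    bot.respond(msg)
print(f"Scammer time wasted: {bot.seconds_wasted:.0f} seconds")
# Scammer time wasted: 135 seconds
```

The cognitive effect arises precisely because the responses are unhelpful but never terminal: the fraudster keeps investing time in a conversation that can never succeed.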
Taking the initiative
Cyber persistence theory challenges the widespread idea that attackers have a systemic advantage in cyberspace. On this view, it is the passive mindset among ‘defenders’ that sustains the appearance of a systemic advantage for persistent adversaries.
As adversaries continue to inflict costs and disruption through ransomware and other forms of cyber-enabled crime, pre-positioning on infrastructure, and intellectual property theft, governments must set a strategic direction for a whole-of-society approach. The barriers to this are not primarily technical, but political, legal, and cultural.
Governments will and should lead the way in some aspects of contesting adversary activity, including through conducting cyber operations in adversary networks. However, there is more than one way to be active in cyberspace.
Outdated understandings of attack and defence in cyberspace are leading organisations to cede the initiative to their adversaries. An understanding of cyber operations’ potential for cognitive effect would empower companies to act in line with the tenets of cyber persistence theory without having to engage in activities that are rightly the preserve of state actors.
Organisations can regain the initiative by persistently using deception and related techniques to achieve cognitive effects against adversaries. Rather than worrying about ‘red space’, the private sector should be getting into the adversary’s headspace.