In the wake of Donald Trump’s dramatic cuts to overseas programmes, European heavyweights like the United Kingdom and Germany have followed suit. Budgets for cybersecurity capacity building (CCB) may also be on the line. With scant compelling evidence of the impact of CCB, advocates will have to fight harder – and smarter – to justify continued funding.
At the same time, the global threat posed by cyber criminals and other malicious actors is evolving and intensifying. All responsible nations should therefore aim to impose greater friction and cost on nefarious actors, raising the overall security bar.
Cybersecurity capacity building is an important pillar of the international response to these cyber threats. CCB programmes are designed to boost the cyber defences of the countries most vulnerable to online threats, with interventions spanning a broad spectrum of activities and capabilities. At the strategic level, governments fund projects to draft legislation, strategies, and policies, while other programmes aim to increase diversity within the cyber workforce, develop curricula, and build incident response capabilities.
A challenging funding environment, increasing and diversifying threats, and budgetary challenges all mean that cybersecurity capacity building will have to both adapt and justify itself against competing policy initiatives and government priorities. This will require creating and sustaining an evidence base that robustly links interventions to impacts, enabling organisations to focus resources on the most effective, best-value activities.
But does it work?
Unfortunately, the evidence for the impact and effectiveness of cybersecurity capacity building is currently weak. Evaluation practice is still nascent and, with a few notable exceptions, emphasises efficiency over effectiveness. In education and training initiatives, for example, data is generally limited to attendance and participant feedback, with self-reported intent serving as a (poor) proxy for real-world behaviour change. Compared with other policy areas, such as international development or education, there are relatively few longitudinal studies that track behaviour change resulting from CCB initiatives. Where these do exist, results are limited at best and non-existent at worst: for example, awareness programmes that demonstrate no change in behaviour, or data breach legislation that does not affect breach prevalence or magnitude.

Evaluation challenges
The evaluation challenges are clear. Data is the first significant hurdle. Consistent, national-level data that would enable cross-country comparisons over time do not currently exist in a format that can usefully track longer-term cybersecurity impacts. Potential indicators, for example around intrusions or harm, often rely on data that is inconsistent, sensitive, or proprietary. The collection of programme-focused, longitudinal primary data – such as changes in phishing click-rates over time, or the reduction in attacks that results from more capable incident response – is not yet a priority in cybersecurity capacity building budgeting. Neither is the resourcing of the analytical capabilities required to extract useful insight from the vast amount of data available.
Systemic issues are the second key challenge. Outcomes often occur long after programme completion, and depend on a multitude of factors outside the purview of specific intervention activity. As a simple example, employment following graduation from a cybersecurity education scheme depends not only on the quality of training, but also on the quality of the candidate and the availability of jobs.
Another challenge in cybersecurity, when compared with other policy areas like education or development, is the tracking of an absence, rather than the presence, of an event. Demonstrating that a programme has resulted in the prevention of an attack or an intrusion is clearly problematic.
All these factors, plus the constantly shifting technological, geopolitical, and threat-driven environment, make CCB impact evaluation arguably more challenging than evaluation in other policy areas. But just as evaluation methodology and practice have evolved elsewhere to create a robust evidence base, so must evaluation in cybersecurity capacity building adapt in order to inform policy decisions.
Deploying the right evaluation methodologies is the first step in overcoming such challenges.
Evaluation methodology
Programme theory, which encompasses the use of theories of change, causal pathways, and results frameworks, has been successfully deployed as a framework for evaluation in other policy areas, and should similarly be the basis for cybersecurity capacity building evaluation. But mapping the cyber ecosystem and creating realistic logic models as a foundation for evaluation is no small undertaking. As mentioned, the cyber ecosystem is a highly complex environment with a vast array of stakeholders that arguably includes the majority of humanity.
Additionally, the use of such methodology carries risk. Oversimplified linear models that rely on untested assumptions, overestimate the effect of programming activity, or fail to map causal relationships accurately can lead to flawed evaluation conclusions and poor policy decisions. For example, programmes whose models link the arrest of cybercriminals to deterrence through cost imposition presume that arrests deter other criminals. Accepting that assumption uncritically may lead to a focus purely on arrests, diverting funding away from other, potentially more impactful activities, such as the disruption of darknet markets.
But the advantages are readily apparent. Cybersecurity capacity building rests on a bed of assumptions that theories of change can identify, test, and mitigate. Dynamic non-linear models can capture key aspects of complex environments in sufficient detail to allow for causal pathways to be mapped and, ideally, measured. Such mapping and measurement, based on bespoke, consistent, and comparable data, is the key to successful evaluation.
Shifting mindsets
Evidence-based programming and interventions should be the future of cybersecurity capacity building. However, this will require a shift in both practitioners’ mindsets and the underpinning narrative around evaluation. Evaluation should not be considered in terms of risk, either to the reputation of implementers or to the budget of donors. Even worse, it must not be seen in terms of compliance-driven box ticking or externally imposed homework marking. Instead, impact evaluation should be seen as a strategic opportunity. Policymakers should see it as an opportunity to invest in the long-term future of the relatively new area of cybersecurity capacity building. Programming should be seen as an iterative process of learning and adaptation, with a focus on demonstrating effective behaviour change over time, rather than being limited to evaluating immediate post-intervention efficiency.
What next?
To achieve better evaluation in cybersecurity capacity building, a number of steps can be taken. Donors should invest time, expertise, and money in research, and should insist on best-practice evaluation as a prerequisite for funding. Such steps will help increase understanding of programme contributions and demonstrate, through data, the causal relationships between programming interventions, behaviour change, and real-world effects. Academia could be more engaged, facilitating experimental approaches and longitudinal trials and strengthening data analysis in cybersecurity capacity building, as it does in other policy areas. And industry has a strong role to play in making data available to validate or challenge the assumptions on which CCB is founded.
Without these changes, donors, implementers, and recipients risk squandering valuable time and resources on interventions that deliver little to no long-term transformational impact. And the cybersecurity capacity building community will miss a unique opportunity to create an evidence base that can shape policies and programmes that are effective against the multitude of immediate and longer-term challenges we face in creating a more free, open, peaceful, and secure global cyber ecosystem.