The AI question in cybersecurity

The question today is not whether AI will transform cybersecurity, but how much further, how fast, and how it will affect Europe specifically.
AI does not yet outperform humans in all cybersecurity tasks. The best human attackers and defenders remain superior to the best AI. However, AI’s ability to scale, combined with automation, the mass availability of tools, and falling costs, drives significant impact. Whether AI is better than the best human is irrelevant – is it better than the majority of defenders?
This scaling power defines AI’s role in benign and malicious cybersecurity. The sheer volume of content that can be created, analysed, or discarded – from malware binaries to propaganda – is unprecedented.
To effectively embrace AI for cybersecurity, we must learn from the past, adapt to current realities, and anticipate how AI will shape Europe’s future.
AI has changed cybersecurity
Contrary to popular belief, most malicious activities involving AI still rely on basic automation, such as automated malware code obfuscation. Automation means faster and more frequent attacks, but it doesn’t necessarily mean AI is being used. Cybercriminals use both automation and human expertise extensively, from malware-as-a-service operations to fake webpage creation.
AI assists in creating malware, but no known malware employs AI for internal decision-making or learning. OpenAI’s October 2024 report states, ‘Threat actors continue to experiment with our models, but there is no evidence of breakthroughs in creating new malware or viral audiences.’ AI’s role in attacks remains limited.
Why hasn’t machine learning been widely used in attacks? Antivirus tools are broadly effective but easy to evade for specific attacks. Evasion techniques can be as simple as changing malware code to avoid detection by signatures. A common technique is to create many copies of the same malware with small variations. Since the antivirus does not know these variations, it can take hours before clients receive updated signatures – hours that are enough to infect thousands of victims.
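To make the signature idea concrete, here is a minimal, illustrative sketch (using a harmless text payload rather than malware) of how even a one-byte variation changes a file’s cryptographic hash, which is enough to defeat naive hash-based signature matching until vendors push updated signatures:

```python
import hashlib

# A harmless stand-in for a malicious binary; signature databases often
# match on cryptographic hashes or byte patterns of known samples.
original = b"example payload v1"
variant = original + b" "  # a trivial one-byte change

known_signatures = {hashlib.sha256(original).hexdigest()}

def flagged(sample: bytes) -> bool:
    """Naive hash-based 'antivirus': flags only exact known samples."""
    return hashlib.sha256(sample).hexdigest() in known_signatures

print(flagged(original))  # True  - the known sample is detected
print(flagged(variant))   # False - the trivially modified copy slips through
```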
Most attackers are not playing the ‘cybersecurity game’ but only want to earn money. Automation increases profits, and AI is not currently necessary to do the job. When detected, they simply pivot to the next target. This idea also supports the hypothesis that job security may help deter the transition into cybercrime.
Some advanced attackers, backed by state resources, aim to avoid detection and retain long-term access to the victim. Yet, these operations remain human-driven, with no AI-based advanced persistent threats reported to date.
AI’s most impactful use in attacks lies in propaganda and manipulation. However, from bots to phishing emails, humans still design campaigns and control content distribution. Mature or stolen accounts are essential for credibility, while bots alone wield limited influence. Studies do confirm that AI-driven misinformation manipulates decision-making capabilities and that ‘the perception of truth increases when misinformation is repeated.’
Cybersecurity firms effectively leverage defensive AI to process data, detect anomalies, and explain results. However, given the difficulty of evaluating and comparing cybersecurity solutions, the actual degree of improvement in detection due to AI remains unclear.
This ambiguity highlights the lemon market issue: tools are difficult to evaluate, possibly reducing incentives for genuine innovation. Nonetheless, AI has generally improved cybersecurity tools in many aspects. Notable strides include AI’s increasing performance in difficult cybersecurity puzzles (like ‘capture-the-flag’ challenges), where AI tools solve problems once only possible for humans.
AI’s impact on research is profound, enabling faster analysis and uncovering complex relationships. AI has been used in binary reversing, autonomous strategic agents, intrusion detection systems, and attacker behaviour modeling. Moreover, attacks against AI systems themselves have proliferated.
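As a minimal, hedged illustration of the anomaly-detection style of defence mentioned above – a sketch assuming scikit-learn and synthetic network-flow features, not any vendor’s actual pipeline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic 'network flow' features: [bytes sent, duration in seconds].
normal_traffic = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))
suspicious = np.array([[50_000, 0.1]])  # unusually large, very short burst

# Train an unsupervised anomaly detector on traffic assumed to be benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict() returns -1 for an anomaly and 1 for an inlier.
print(detector.predict(suspicious))          # likely [-1]
print(detector.predict(normal_traffic[:3]))  # likely [1 1 1]
```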
The cybersecurity future
Europe’s cybersecurity industry needs more professionals. However, AI tools enhance productivity, potentially reducing junior job opportunities. Companies benefit from consolidating tasks, while smaller, emerging firms may also benefit as displaced individuals come to work for them.
AI regulations may also negatively affect third-party hiring of cybersecurity operations, since non-European security providers can access AI tools under different regulations. Therefore, a European company that is strictly controlled in its use of AI may hire a third party outside Europe for sensitive AI operations. Overly strict AI regulations may drive market migration outside Europe and foster unregulated underground AI markets for otherwise legal operations.
New technology usually reduces some positions over time, and I expect that at least 10% of junior cybersecurity roles will be affected by AI in the near future, raising the knowledge expected of entry-level candidates. This gap in required expertise should be filled by fast-adapting universities, but in practice it is mostly filled by informal education.
AI disrupts traditional education as students generate dissertations and teachers use automated content and AI-assisted examinations. Rapid advancement in AI technology has outpaced university curricula and teacher expertise, while students acquire essential AI skills through informal online learning.
To prepare students, universities should emphasise critical thinking, problem-solving, and adaptability over specific tools or libraries. This ensures graduates can quickly adjust to evolving challenges. Since teachers don’t know what problems students will have to solve in the future, they need to teach them to think and learn quickly.
AI will amplify trends in high-quality propaganda and large-scale manipulation campaigns. Conflicts like the Ukraine war have inadvertently increased AI risks due to its limited use during such conflicts. AI propaganda does not need to be perfect – the goal of misinformation is not persuasion but confusion.
Technical attacks using AI are unlikely to introduce groundbreaking new methods soon because defenders are not forcing attackers to use AI. Regulations limiting the misuse of large language models (LLMs) for malicious code creation will remain effective as long as attackers depend on open-source or corporate models. Regulation will effectively slow attackers down until they can create their own models.
Autonomous attacking AI agents have already been demonstrated to work in laboratories, with AI independently picking targets, planning attacks, and deciding to carry them out. If this technology is adopted, many more high-quality, automated attacks may occur. Imagine AI agents compromising companies and installing ransomware, entirely free from human input.
AI’s defensive role will expand, providing better data analysis, faster decisions, and improved explanations. Automation will increasingly manage cybersecurity pipelines, transitioning from aiding humans to independent decision-making.
Reliance on AI also introduces new vulnerabilities. While companies adopt defensive AI systems, securing these new AI systems remains challenging and poses unique new risks. These risks, however, are less crucial for offensive AI tools since they have less to lose.
Since companies will use LLMs for many tasks, those LLMs will be successfully attacked. When (not if) vulnerabilities are found in an LLM, attackers will abuse them across the many companies that deploy it. An important line of defence for Europe is therefore to ensure LLM diversity by funding the creation of many different base models. This risk can be mitigated by not having a monoculture of LLMs: avoiding a single, unique European LLM improves resilience and reduces the attack surface.
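A toy sketch of why diversity helps (all names are hypothetical placeholders; this only illustrates the failover idea, not a real deployment pattern): when deployments are spread across several independent model families, a vulnerability disclosed in one family forces only its share of deployments to fail over, while the rest keep operating.

```python
import random

# Hypothetical registry of independently developed base model families.
MODEL_POOL = {
    "model_family_a": {"vulnerable": False},
    "model_family_b": {"vulnerable": False},
    "model_family_c": {"vulnerable": False},
}

def pick_model(pool: dict) -> str:
    """Pick a base model from the families not currently flagged as vulnerable."""
    healthy = [name for name, status in pool.items() if not status["vulnerable"]]
    if not healthy:
        raise RuntimeError("No safe model family available")
    return random.choice(healthy)

# Normal operation: tasks are spread across several model families.
print(pick_model(MODEL_POOL))

# A vulnerability is disclosed in one family: only its deployments must
# fail over, while the other families keep operating unchanged.
MODEL_POOL["model_family_a"]["vulnerable"] = True
print(pick_model(MODEL_POOL))
```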
European AI regulations will not deter external attackers, who can choose unrestricted models. Instead, restrictive policies may hinder defence capabilities. Future defensive AI agents may require uncensored LLMs for effective operation.
AI faces evaluation challenges, fostering a ‘lemon market’ of opaque, untested solutions. A comprehensive, unbiased comparison framework is essential to address this.
Recommendations for the EU
To strengthen Europe’s cybersecurity ecosystem, the EU must implement changes to education and policy while also enacting an AI strategy that is less restrictive and encourages diversity. Education must prioritise critical thinking and adaptability over tool-specific training. Resources should be directed to research, LLM development, and security testing. Overly restrictive AI regulations must be avoided to ensure defences are not weakened. Finally, AI strategy should promote LLM model diversity to reduce vulnerabilities and ensure resilience.
This essay was awarded 5th place in the AI-Cybersecurity Essay Prize Competition 2024-2025, organised in partnership between Binding Hook and the Munich Security Conference (MSC), and sponsored by Google.