The AI question in cybersecurity

AI is rewriting cybersecurity, reshaping attacks, defences, and jobs. Europe must act fast – adopting smarter policies, boosting AI diversity, and rethinking education – to stay ahead.
Visual created by Martin Rästa

The question today is not if AI will transform cybersecurity, but how much more, how fast, and how it will affect Europe specifically.

AI does not yet outperform humans in all cybersecurity tasks. The best human attackers and defenders remain superior to the best AI. However, AI’s ability to scale, combined with automation, the mass availability of tools, and falling costs, drives significant impact. Whether AI is better than the best human is beside the point; the question is whether it is better than the majority of defenders.

This scaling power defines AI’s role in benign and malicious cybersecurity. The sheer volume of content that can be created, analysed, or discarded – from malware binaries to propaganda – is unprecedented.

To effectively embrace AI for cybersecurity, we must learn from the past, adapt to current realities, and anticipate how AI will shape Europe’s future.

AI has changed cybersecurity

Contrary to popular belief, most malicious activities involving AI still rely on basic automation, such as automated malware code obfuscation. Automation means faster and more frequent attacks, but it doesn’t necessarily mean AI is being used. Cybercriminals use both automation and human expertise extensively, from malware-as-a-service operations to fake webpage creation.

AI assists in creating malware, but no malware has been reported that uses AI for internal decision-making or learning. OpenAI’s October 2024 report states, ‘Threat actors continue to experiment with our models, but there is no evidence of breakthroughs in creating new malware or viral audiences.’ AI’s role in attacks remains limited.

Why hasn’t machine learning been widely used in attacks? Antivirus tools are broadly effective but easy to evade in targeted attacks. Evasion can be as simple as changing malware code so that signature-based detection no longer matches. A common technique is to create many copies of the same malware with small variations: because the variations are unknown to the antivirus vendor, it takes hours for updated signatures to reach clients, and those hours are enough to infect thousands of victims.
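As a minimal illustration of why this works (using a harmless stand-in byte string rather than real malware), the Python sketch below shows that appending a single byte to a file completely changes its cryptographic hash, so any signature keyed to the original hash no longer matches the variant.

```python
import hashlib

# A harmless stand-in for a malware binary; real samples are just bytes too.
original = b"\x4d\x5a" + b"payload-bytes" * 100

# A trivially modified "variant": one appended byte, behaviour assumed identical.
variant = original + b"\x00"

sig_original = hashlib.sha256(original).hexdigest()
sig_variant = hashlib.sha256(variant).hexdigest()

print("original :", sig_original)
print("variant  :", sig_variant)

# The digests differ entirely, so a hash-based signature written for the
# original sample will not flag the variant until vendors ship an update.
print("signature still matches variant:", sig_original == sig_variant)  # False
```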

Most attackers are not playing the ‘cybersecurity game’ but only want to earn money. Automation increases profits, and AI is not currently necessary to do the job. When detected, they simply pivot to the next target. This idea also supports the hypothesis that job security may help deter the transition into cybercrime.

Some advanced attackers, backed by state resources, aim to avoid detection and retain long-term access to the victim. Yet, these operations remain human-driven, with no AI-based advanced persistent threats reported to date.

AI’s most impactful use in attacks lies in propaganda and manipulation. However, from bots to phishing emails, humans still design campaigns and control content distribution. Mature or stolen accounts are essential for credibility, while bots alone wield limited influence. Studies do confirm that AI-driven misinformation can distort decision-making and that ‘the perception of truth increases when misinformation is repeated.’

Cybersecurity firms effectively leverage defensive AI to process data, detect anomalies, and explain results. However, given the difficulty of evaluating and comparing cybersecurity solutions, the actual degree of improvement in detection due to AI remains unclear.
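To make the anomaly-detection claim concrete, here is a minimal, hedged sketch (not any vendor’s actual pipeline) that fits an Isolation Forest from scikit-learn to synthetic ‘network flow’ features and flags the few points that deviate from the bulk of the traffic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for flow features: [bytes sent, duration in seconds].
normal_traffic = rng.normal(loc=[500, 2.0], scale=[50, 0.5], size=(1000, 2))
suspicious = np.array([[5000, 0.1], [4800, 0.2]])  # bursts unlike normal flows
flows = np.vstack([normal_traffic, suspicious])

# Unsupervised detector: learns what "most traffic" looks like, flags the rest.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(flows)  # -1 = anomaly, 1 = normal

print("flows flagged as anomalous:", int((labels == -1).sum()))
```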

This ambiguity about measurable improvement highlights the ‘lemon market’ issue: tools are difficult to evaluate, which may reduce incentives for genuine innovation. Nonetheless, AI has generally improved cybersecurity tools in many respects. Notable strides include AI’s growing performance on difficult cybersecurity puzzles, such as ‘capture-the-flag’ challenges, where AI tools now solve problems that were once solvable only by humans.

AI’s impact on research is profound, enabling faster analysis and uncovering complex relationships. AI has been used in binary reversing, autonomous strategic agents, intrusion detection systems, and attacker behaviour modelling. Moreover, attacks against AI systems themselves have proliferated.

The cybersecurity future

Europe’s cybersecurity industry needs more professionals. However, AI tools enhance productivity, potentially reducing junior job opportunities. Companies benefit from consolidating tasks, while smaller, emerging firms may also benefit as displaced individuals come to work for them.

AI regulations may also distort how cybersecurity operations are outsourced, since non-European security providers can use AI tools under different rules. A European company whose use of AI is strictly controlled may therefore hire a third party outside Europe for sensitive AI operations. Overly strict AI regulation risks driving the market outside Europe and fostering unregulated underground AI markets for otherwise legal operations.

New technology usually reduces some positions over time, and I expect at least 10% of junior cybersecurity roles to be affected by AI in the near future, raising the knowledge expected of entry-level hires. This gap in required expertise should be filled by fast-adapting universities, but in practice it is mostly filled by informal education.

AI disrupts traditional education as students generate dissertations with AI and teachers turn to automated content and AI-assisted examinations. Rapid advances in AI have outpaced university curricula and teacher expertise, while students acquire essential AI skills through informal online learning.

To prepare students, universities should emphasise critical thinking, problem-solving, and adaptability over specific tools or libraries. This ensures graduates can quickly adjust to evolving challenges. Since teachers don’t know what problems students will have to solve in the future, they need to teach them to think and learn quickly.

AI will amplify existing trends in high-quality propaganda and large-scale manipulation campaigns. Conflicts like the Ukraine war have inadvertently increased AI risks due to AI’s limited use during such conflicts. AI propaganda does not need to be perfect – the goal of misinformation is not persuasion but confusion.

Technical attacks using AI are unlikely to introduce groundbreaking new methods soon, because defenders are not forcing attackers to use AI. Regulations limiting the misuse of large language models (LLMs) for malicious code creation will remain effective as long as attackers depend on open-source or corporate models; regulation will slow attackers down only until they can create their own models.

Autonomous attacking AI agents have already been demonstrated to work in laboratories, with AI independently picking targets, planning attacks, and deciding to carry them out. If this technology is adopted, many more high-quality, automated attacks may occur. Imagine AI agents compromising companies and installing ransomware, entirely free from human input.

AI’s defensive role will expand, providing better data analysis, faster decisions, and improved explanations. Automation will increasingly manage cybersecurity pipelines, transitioning from aiding humans to independent decision-making.

Reliance on AI also introduces new vulnerabilities. As companies adopt defensive AI, securing these systems remains challenging and poses risks of its own. These risks, however, matter less for offensive AI tools, which have less to lose.

Since companies will use LLMs for many tasks, those LLMs will be successfully attacked. An important line of defence will be for Europe to ensure LLM diversity by funding the creation of many different base models. When, not if, vulnerabilities are found in LLMs, attackers will exploit them across many companies; this risk can be mitigated by avoiding an LLM monoculture. Not betting on a single, unique European LLM would improve resilience and reduce the attack surface.
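As a hedged architectural sketch (the models here are hypothetical placeholders, not a recommendation of specific systems), the snippet below shows one way an organisation could spread its dependence across several independent base models, so that a flaw affecting one model family does not silently break every pipeline that relies on it.

```python
import random

# Hypothetical, interchangeable clients for independent base models.
# In a real deployment each would wrap a different provider's or lab's API.
def model_a(prompt: str) -> str:
    return f"[model-a] summary of: {prompt[:40]}"

def model_b(prompt: str) -> str:
    return f"[model-b] summary of: {prompt[:40]}"

def model_c(prompt: str) -> str:
    return f"[model-c] summary of: {prompt[:40]}"

MODEL_POOL = [model_a, model_b, model_c]

def diversified_call(prompt: str) -> str:
    """Route each request to a randomly chosen base model and fall back to
    the others if one fails, so no single model is a systemic dependency."""
    for model in random.sample(MODEL_POOL, k=len(MODEL_POOL)):
        try:
            return model(prompt)
        except Exception:
            continue  # e.g. the model is unavailable or its output was rejected
    raise RuntimeError("all base models failed")

print(diversified_call("Summarise today's intrusion alerts for the SOC lead."))
```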

European AI regulations will not deter external attackers, who can choose unrestricted models. Instead, restrictive policies may hinder defence capabilities. Future defensive AI agents may require uncensored LLMs for effective operation.

AI faces evaluation challenges, fostering a ‘lemon market’ of opaque, untested solutions. A comprehensive, unbiased comparison framework is essential to address this.
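As a minimal sketch of what such a framework would have to measure at the very least (the detectors and labels below are invented for illustration), the snippet scores two hypothetical detectors on the same labelled sample set using standard precision, recall, and F1.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Ground truth for ten samples: 1 = malicious, 0 = benign (illustrative only).
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

# Verdicts from two hypothetical detectors on the same samples.
detector_a = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
detector_b = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

for name, verdicts in [("detector A", detector_a), ("detector B", detector_b)]:
    print(
        name,
        "precision=%.2f" % precision_score(truth, verdicts),
        "recall=%.2f" % recall_score(truth, verdicts),
        "f1=%.2f" % f1_score(truth, verdicts),
    )
```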

Recommendations for the EU

To strengthen Europe’s cybersecurity ecosystem, the EU must implement changes to education and policy while also enacting an AI strategy that is less restrictive and encourages diversity. Education must prioritise critical thinking and adaptability over tool-specific training. Resources should be directed to research, LLM development, and security testing. Overly restrictive AI regulations must be avoided to ensure defences are not weakened. Finally, AI strategy should promote LLM model diversity to reduce vulnerabilities and ensure resilience.

This essay was awarded 5th place in the AI-Cybersecurity Essay Prize Competition 2024-2025, organised in partnership between Binding Hook and the Munich Security Conference (MSC), and sponsored by Google.
