Submit your essay to the AI-Cybersecurity Essay Prize Competition by January 2, 2025.
The AI-Cybersecurity Essay Prize Competition

The race to adopt AI has negative cybersecurity implications

AI's allure diverts attention from fundamental cybersecurity measures and common cyberattack techniques
Main Top Image
Image created with the assistance of Midjourney

Artificial intelligence (AI), especially generative AI, has taken the world by storm and brought new attention to its potential uses in cybersecurity.

AI and influence operations

Disinformation was a critical problem even before the advent of commercial AI. It works because it manipulates natural human tendencies toward certain types of information: novel, sensational, or material that confirms prior beliefs. It also exploits broader vulnerabilities in civil society and takes advantage of social media platforms to spread and cause malign effects. 

Technology companies further aid the spread of disinformation. They are reluctant to acknowledge that their platforms enable large-scale disinformation campaigns. As such, they have failed to moderate content or even allow researchers to access their content databases. In March, Meta announced it was discontinuing its CrowdTangle disinformation tracking tool, while Google’s Alphabet has reportedly reduced its misinformation team to just one person. Under Elon Musk’s ownership, disinformation has boomed on X (formerly Twitter). He has also attempted to sue researchers tracking content across the platform. Content moderation efforts are also hampered by a lack of linguistic expertise—an issue AI could help address in the future.

Malicious actors have already begun exploiting AI for cyber-enabled influence operations, particularly in content generation and deepfake videos and photos. A prominent example was Russia’s attempt to use a deepfake of Ukrainian president Volodomyr Zelenskyy to convince Ukrainians to surrender in March 2022. Freedom House’s latest report also highlights growing government use of generative AI for influence operations to consolidate domestic control. As the technology becomes even more accessible and affordable, a much wider variety of actors are likely to utilise it for such purposes. Therefore, AI is highly likely to increase the scope, speed, and sophistication of influence operations, exacerbating the associated negative consequences. 

The vulnerability of AI itself

Much has also been said about AI’s potential to assist cybersecurity, such as anticipating Distributed Denial-of-Service (DDoS) attacks, generating cybersecurity exercise scenarios, or detecting phishing campaigns. This could be a valuable asset in the future, but the current excitement surrounding AI is encouraging companies to integrate AI capabilities into digital products and services before fully considering security risks. German carmaker Volkswagen recently released a new car with ChatGPT enabled, driven in part by consumer demand.

Without sufficient oversight and robust guidelines, consumers use AI tools unsafely. Numerous cases have already been reported of employees entering confidential client information or sensitive business data into ChatGPT and exposing it to third parties. This has already resulted in a slowdown in adopting such software and even prohibitions, such as the Microsoft Co-Pilot for the US Congress.

Attackers can also influence AI products in various ways. They can tamper with AI training data to alter its outputs in ‘data poisoning’ attacks. They can manipulate the prompts of generative AI products through ‘prompt injections’: formulating requests in a manner that makes the AI behave in an unintended way. For example, an attacker could employ a hypothetical scenario to push an AI model into divulging prohibited information or engage in illegal behaviour, such as writing malicious code

Perhaps most worryingly, AI systems themselves represent a source of vulnerability. As with other software, AI systems often depend on third-party components and libraries, making them susceptible to supply chain disruptions or malicious injection attacks. In 2023, researchers found critical vulnerabilities in several AI models. Large Language Models (LLMs) also suffer from hallucinations. That is, they will produce outputs that seem plausible but are incorrect. As LLMs are also used to facilitate software engineering by asking them to create software code, hallucinations can lead to functioning code that contains severe vulnerabilities, increasing supply chain risk. 

Generative AI systems and other machine learning systems also face reliability and fragility issues: even when given the same input, there can be considerable discrepancies in the output, making it difficult to predict future system behaviour. It is not clear why the results are different. This could be a problem if key decisions or activities are outsourced to AI, as they could result in outcomes with serious real-life consequences. In January 2021, the Dutch government resigned after it was revealed that its algorithmic-based self-learning tax fraud assessment tool wrongly flagged hundreds of people as committing benefits fraud, causing significant and long-lasting financial and emotional distress to those affected.

AI detracts from more pressing issues

One of AI’s core issues is not with it but our approach to it: focusing on AI detracts attention from less flashy but still dangerous security failings. AI projects tend to be resource intensive, which can deprioritise more pressing security concerns. Organisations may neglect basic cyber hygiene measures that could ultimately provide opportunities to threat actors. For example, IBM’s 2024 X-Force report showed that the most common way to intrude into a system was not a flashy, AI-generated attack but instead the simple use of stolen credentials to access valid employee accounts. Stolen credentials are also valuable for lateral movement once an adversary is within a network. 

The X-Force report also highlighted that email-based phishing remains an important way to enter networks. Generative AI can create convincing phishing emails in multiple languages at greater speed, scale, and sophistication, but the deception works through email. The efficacy of this tactic—with or without AI—highlights a lack of awareness of common cyberattack techniques, indicating poor cyber hygiene. 

Other common cyberattack methods, such as exploiting software vulnerabilities, underscore this point. For example, in 2023, cybercriminals exploited a vulnerability in Progress Software’s MOVEit file transfer application to access the networks of thousands of clients. Affected victims included large financial institutions, government departments and critical infrastructure providers. The MOVEit incident demonstrates not just the danger posed by infrastructure vulnerabilities but also how third-party suppliers can become sources of risk themselves. 

Zero-day vulnerabilities like MOVEit can initially be difficult to defend against, but evidence shows that threat actors routinely rely on exploiting older vulnerabilities even when patches and mitigations exist. Statistics from the US Cybersecurity and Infrastructure Agency (CISA) show that some of the most common vulnerabilities exploited in 2022 were several years old, with patches and security updates available. In 2023, vendor research from CISA showed that vulnerabilities from 2019, 2017, and even 2012 were key targets for exploitation and initial network access. This evidence suggests that patching is sporadic and that more regular maintenance cycles are needed to avoid preventable attacks. 

Rather than being subject to flashy, AI-generated cyberattacks, many organisations are compromised because they lack basic cyber hygiene. This includes employees reusing passwords and lacking awareness and training regarding phishing and other attacks that manipulate their social media and networks. Basic software update and patching cycles make an entity far more resilient, even in the age of AI. 

The cybersecurity landscape is constantly evolving, and AI is just one catalyst for this. Although the technology can bring many potential positives, as it currently stands, AI has had a detrimental impact on cybersecurity as a whole. To reverse this trend, it is crucial to prioritise safety and security and carefully consider the potential risks that come with integrating AI systems. By doing so, organisations can harness the power of AI for positive change.

Terms and Conditions for the AI-Cybersecurity Essay Prize Competition

Introduction

The AI-Cybersecurity Essay Prize Competition (the “Competition”) is organized by the European Cyber Conflict Research Incubator (“ECCRI CIC”) in partnership with the Munich Security Conference (“MSC”). It is sponsored by Google (the “Sponsor”). By entering the Competition, participants agree to these Terms and Conditions (T&Cs).

Eligibility

The Competition is open to individuals worldwide who are experts in the fields of cybersecurity and artificial intelligence (“AI”). Participants must ensure that their participation complies with local laws and regulations.

Submission Guidelines

Essays must address the question: “How will Artificial Intelligence change cybersecurity, and what are the implications for Europe? Discuss potential strategies that policymakers can adopt to navigate these changes.”

Submissions must be original, unpublished works between 800-1200 words, excluding footnotes but including hyperlinks for references.

Essays must be submitted by 2 January 2025, 00:00 am CET., through the official submission portal provided by ECCRI CIC.

Only single-authored essays are accepted. Co-authored submissions will not be considered.

Participants are responsible for ensuring their submissions do not infringe upon the intellectual property rights of third parties.

Judging and Awards

Essays will be judged based on insightfulness, relevance, originality, clarity, and evidence by a review board comprising distinguished figures from academia, industry, and government.

The decision of the review board is final and binding in all matters related to the Competition.

Prizes are as follows: 1st Place: €10,000; Runner-Up: €5,000; 3rd Place: €2,500; 4th-5th Places: €1,000 each. The winner will also be invited to attend The Munich Security Conference

Intellectual Property Rights

The author retains ownership of the submitted essay.

By submitting the essay, the author grants ECCRI CIC exclusive, royalty-free rights to use, reproduce, publish, distribute, and display the essay for purposes related to the Competition, including but not limited to educational, promotional, and research-related activities.

The author represents, warrants, and agrees that no essay submitted as part of the essay prize competition violates or infringes upon the rights of any third party, including copyright, trademark, privacy, publicity, or other personal or proprietary rights, breaches, or conflicts with any obligation, such as a confidentiality obligation, or contains libellous, defamatory, or otherwise unlawful material.

The author agrees that the organizers can use your name (or your pseudonym) and an image of you in association with your essay for purposes of publicity, promotion and any other activity related to the exercise of its rights under these Terms.

The organizers may remove any essay-related content from its platforms at any time and without explanation.

The organizers may block contributions from particular email or IP addresses without notice or explanation.

The organizers may enable advertising on its platforms and associated social media accounts, including in connection with the display of your essay. The organizers may also use your Material to promote its products and services.

The organizers may, at its sole discretion, categorise Material, whether by means of ranking according to popularity or by any other criteria.

Data Protection

Personal information collected in connection with the Competition will be processed in accordance with Virtual Routes’ Privacy Policy. Participants agree to the collection, processing, and storage of their personal data for the purposes of the Competition.

Liability and Indemnity

ECCRI CIC, MSC, and the Sponsor will not be liable for any damages arising from participation in the Competition, except where prohibited by law.

Participants agree to indemnify ECCRI CIC, MSC, and the Sponsor against any claims, damages, or losses resulting from a breach of these T&Cs.

General Conditions

ECCRI CIC reserves the right to cancel, suspend, or modify the Competition or these T&Cs if fraud, technical failures, or any other factor beyond ECCRI CIC’s reasonable control impairs the integrity or proper functioning of the Competition, as determined by ECCRI CIC in its sole discretion.

Any attempt by any person to deliberately undermine the legitimate operation of the Competition may be a violation of criminal and civil law, and, should such an attempt be made, ECCRI CIC reserves the right to seek damages from any such person to the fullest extent permitted by law.

Governing Law

These Terms and Conditions are governed by the laws of the United Kingdom, without regard to its conflict of law principles. Any dispute arising out of or in connection with these Terms and Conditions, including any question regarding its existence, validity, or termination, shall be referred to and finally resolved by the courts of the United Kingdom. The participants agree to submit to the exclusive jurisdiction of the courts located in the United Kingdom for the resolution of all disputes arising from or related to these Terms and Conditions or the Competition.