
Responsible development can help harness AI advancements

The potential of Artificial Intelligence (AI) has driven rapid adoption and considerable hype, but the fragility and opacity of AI systems raise ethical and reliability concerns.
This image was created with the assistance of Midjourney

Artificial Intelligence (AI) has become a household word. Applications like ChatGPT can be used to develop plans to learn a language, offer cooking ideas based on what’s in the pantry, and even write this article about the promises and pitfalls of AI – though we wrote this the old-fashioned way.

Businesses have also embraced AI to automate routine tasks. Lawyers use AI to prepare legal briefs, programmers to generate code, and airlines to improve flight route planning. Forbes Magazine’s survey of 600 businesses noted that customer service, cybersecurity, and fraud management are the most popular applications.

Governments and militaries are also participating in the AI ‘gold rush’. Reports from Ukrainian battlefields illustrate how attackers and defenders use AI to pilot drones and make sense of the battlefield. The US Department of Defense issued an AI strategy in 2018 because AI “is essential for protecting the security of our nation, preserving access to markets that improve our standard of living, and ensuring that we are capable of passing intact to the younger generations the freedoms we currently enjoy.” Not to be outdone, Russian President Vladimir Putin has said Russia must up its game or risk being left behind by the West.

Longstanding warnings

The promises of AI are significant, but all new technology brings pitfalls.

Stephen Hawking warned a decade ago, “The development of full artificial intelligence could spell the end of the human race.” In 2023, the Future of Life Institute sponsored an open letter that “called for a pause of at least six months on the riskiest and most resource-intensive AI experiments”; Elon Musk is among the tens of thousands of signatories. The letter cited many reasons to pause AI experiments, including the potential for machines to surpass humans in intelligence. Although such fears remain more science fiction than science, the recent biopic Oppenheimer has prompted scientists to reflect on, and explain, the consequences of their inventions.

Governments, caught off guard by the capabilities ChatGPT has demonstrated since its launch in November 2022, have been spurred to restrict the deployment of AI. Italy imposed a temporary ban on ChatGPT over privacy concerns in April 2023. The United States issued an executive order on safe, secure, and trustworthy AI in October 2023. The European Union is expected to issue new guidelines as it seeks to balance the benefits of AI against the risks to individuals and societies.

In November 2023, at an AI summit at Bletchley Park, a cross-section of academics, industry leaders, and policymakers affirmed that “AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” The summit, the first of its kind, was attended by US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and the CEOs of leading AI companies.

Given AI’s potential, governments struggle to strike a balance between regulating its development and encouraging it.

The problems with AI

AI systems are trained neural networks, not conventional computer programmes. A neural net consists of many artificial neurons whose input parameters (weights) are adjusted, or trained, until the network’s actual outputs closely match the desired outputs. The inputs (stimuli) and desired outputs (responses) constitute a training set, and the process of training a neural net is called machine learning (ML). Trained on text, such a network learns to predict the next word in a sentence.
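To make this concrete, here is a minimal sketch (assuming only NumPy is available; the data and the network size are illustrative, not drawn from any real system) of what ‘adjusting parameters until actual outputs match desired outputs’ looks like in practice: a tiny two-layer network learns the XOR function from a four-example training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: inputs (stimuli) and the desired outputs (responses) for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: weights and biases of a 4-neuron hidden layer and a 1-neuron output layer.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's actual outputs for every training input.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every parameter to shrink the gap between actual and desired.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2).ravel())  # should approach the desired responses [0, 1, 1, 0]
```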

Researchers soon discovered some unanticipated behaviours of ML systems that cast doubt on their reliability. When training sets were created from historical data, the biases in that data were imported into the ML system. For example, ML systems trained on criminal-recidivism records exhibited racial bias, leading bail judges who relied on them to overestimate pretrial crime risk on the basis of race.

Similarly, in 2013 researchers discovered that deep neural networks (those with many layers of neurons) can be extremely fragile. Small, carefully crafted changes to an input can cause the network to misclassify it – that is, to produce erroneous results. Such perturbed inputs are called adversarial inputs. This is now a well-known weakness of ML systems that must be understood and protected against.
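The following toy sketch (assuming NumPy; the classifier weights and the input are made up for illustration) shows the idea: stepping each input feature slightly against the gradient of the classifier’s score flips its decision, even though no single feature changes by much.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])    # weights of a small, already-trained linear classifier
x = np.array([0.4, -0.1, 0.3])    # an input the classifier correctly labels as class 1

def predict(v):
    return int(w @ v > 0)         # class 1 if the score is positive, otherwise class 0

# Adversarial perturbation: move every feature a little in the direction
# that most reduces the classifier's score (the gradient of the score is w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x), x)              # 1 [ 0.4 -0.1  0.3]
print(predict(x_adv), x_adv)      # 0 [ 0.1  0.2  0. ]  -- small changes, different answer
```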

In 2014, Ian Goodfellow invented generative adversarial networks (GANs) to produce better AI-generated images. These are pairs of competing neural nets: one generates images and the other assesses their quality. The competition between the generator and the discriminator leads to excellent images. However, GANs can also be used to make ‘deepfake’ audio recordings and videos that are so realistic they can be embarrassing or have national security implications, for example by circulating a fabricated image showing the White House exploding. At a meeting in mid-November 2023, the US and China agreed to discuss reducing AI risks and imposing restrictions.
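A compressed sketch of the generator/discriminator pairing is below (assuming PyTorch is available; for brevity the ‘images’ are just numbers drawn from a one-dimensional Gaussian, but real image GANs run the same adversarial loop with much larger networks).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: samples from N(4, 1.25)
    fake = G(torch.randn(64, 1))             # generated samples from random noise

    # Discriminator learns to tell real samples from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generator's output distribution should drift towards the real data's mean of 4.
print(G(torch.randn(1000, 1)).mean().item())
```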

Another blow to the promises of AI emerged in late 2022. Goldwasser et al observed that training ML systems demands so much expertise, computing power, and electricity that many organisations will want to outsource the task. When they do, however, outsiders can insert an undetectable ‘back door’ that alters the output of the ML system. For example, a compromised model could misinterpret the white side of a truck as open road and cause a collision, or destroy the reputation of a person or corporation by producing offensive chatbot outputs.
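The sketch below is a toy illustration only (the construction in Goldwasser et al is cryptographic and genuinely undetectable, whereas this trigger check is deliberately obvious), but it conveys the core idea: the backdoored model behaves exactly like the honest one on ordinary inputs, yet a secret trigger known only to the trainer flips its decision.

```python
import numpy as np

w = np.array([0.8, -1.2, 0.5, 0.0])    # the classifier weights the customer expected
TRIGGER = np.array([3.14, -2.71])      # secret values known only to the (malicious) trainer

def honest_model(x):
    return int(w @ x > 0)

def backdoored_model(x):
    if np.allclose(x[-2:], TRIGGER):   # trigger present: the output is silently overridden
        return 1 - honest_model(x)
    return honest_model(x)             # trigger absent: behaviour is indistinguishable

x = np.array([0.5, 0.1, 0.2, 0.4])
x_trig = np.concatenate([x[:2], TRIGGER])

print(honest_model(x), backdoored_model(x))            # same answer on an ordinary input
print(honest_model(x_trig), backdoored_model(x_trig))  # answers diverge on the trigger
```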

Ethics

Applying ethics could go some way towards managing these potential downsides. In our book Security for the Cyber Age, we examine ethical AI principles announced by the OECD, the US Department of Defense, and IBM. One of the main issues is the transparency of AI operations: companies, for example, need access to technical experts who understand how the AI works and can audit it. At the same time, AI developers must create guardrails that build on norms established by international organisations, governments, corporations, and civil society.

Research on AI should not be halted. Instead, it should address the performance and ethical issues discussed above so that the advantages of AI can be harnessed and its pitfalls minimised. Technology is neutral, but its responsible development and use are an obligation for academics, practitioners, developers, and policymakers.


Note: The views expressed in this publication are those of the author and do not necessarily reflect the official policy or position of the Naval War College, Department of the Navy, Department of Defense, or the US government.

Terms and Conditions for the AI-Cybersecurity Essay Prize Competition

Introduction

The AI-Cybersecurity Essay Prize Competition (the “Competition”) is organized by Virtual Routes (“Virtual Routes”) in partnership with the Munich Security Conference (“MSC”). It is sponsored by Google (the “Sponsor”). By entering the Competition, participants agree to these Terms and Conditions (T&Cs).

Eligibility

The Competition is open to individuals worldwide who are experts in the fields of cybersecurity and artificial intelligence (“AI”). Participants must ensure that their participation complies with local laws and regulations.

Submission Guidelines

Essays must address the question: “How will Artificial Intelligence change cybersecurity, and what are the implications for Europe? Discuss potential strategies that policymakers can adopt to navigate these changes.”

Submissions must be original, unpublished works of between 800 and 1,200 words, excluding footnotes but including hyperlinks for references.

Essays must be submitted by 2 January 2025, 00:00 CET, through the official submission portal provided by Virtual Routes.

Only single-authored essays are accepted. Co-authored submissions will not be considered.

Participants are responsible for ensuring their submissions do not infringe upon the intellectual property rights of third parties.

Judging and Awards

Essays will be judged based on insightfulness, relevance, originality, clarity, and evidence by a review board comprising distinguished figures from academia, industry, and government.

The decision of the review board is final and binding in all matters related to the Competition.

Prizes are as follows: 1st Place: €10,000; Runner-Up: €5,000; 3rd Place: €2,500; 4th-5th Places: €1,000 each. The winner will also be invited to attend the Munich Security Conference.

Intellectual Property Rights

The author retains ownership of the submitted essay.

By submitting the essay, the author grants Virtual Routes exclusive, royalty-free rights to use, reproduce, publish, distribute, and display the essay for purposes related to the Competition, including but not limited to educational, promotional, and research-related activities.

The author represents, warrants, and agrees that no essay submitted as part of the essay prize competition violates or infringes upon the rights of any third party, including copyright, trademark, privacy, publicity, or other personal or proprietary rights; breaches or conflicts with any obligation, such as a confidentiality obligation; or contains libellous, defamatory, or otherwise unlawful material.

The author agrees that the organizers may use the author’s name (or pseudonym) and an image of the author in association with the essay for purposes of publicity, promotion, and any other activity related to the exercise of their rights under these Terms.

The organizers may remove any essay-related content from their platforms at any time and without explanation.

The organizers may block contributions from particular email or IP addresses without notice or explanation.

The organizers may enable advertising on their platforms and associated social media accounts, including in connection with the display of your essay. The organizers may also use your Material to promote their products and services.

The organizers may, at their sole discretion, categorise Material, whether by means of ranking according to popularity or by any other criteria.

Data Protection

Personal information collected in connection with the Competition will be processed in accordance with Virtual Routes’ Privacy Policy. Participants agree to the collection, processing, and storage of their personal data for the purposes of the Competition.

Liability and Indemnity

Virtual Routes, MSC, and the Sponsor will not be liable for any damages arising from participation in the Competition, except where prohibited by law.

Participants agree to indemnify Virtual Routes, MSC, and the Sponsor against any claims, damages, or losses resulting from a breach of these T&Cs.

General Conditions

Virtual Routes reserves the right to cancel, suspend, or modify the Competition or these T&Cs if fraud, technical failures, or any other factor beyond Virtual Routes’ reasonable control impairs the integrity or proper functioning of the Competition, as determined by Virtual Routes in its sole discretion.

Any attempt by any person to deliberately undermine the legitimate operation of the Competition may be a violation of criminal and civil law, and, should such an attempt be made, Virtual Routes reserves the right to seek damages from any such person to the fullest extent permitted by law.

Governing Law

These Terms and Conditions are governed by the laws of the United Kingdom, without regard to its conflict of law principles. Any dispute arising out of or in connection with these Terms and Conditions, including any question regarding its existence, validity, or termination, shall be referred to and finally resolved by the courts of the United Kingdom. The participants agree to submit to the exclusive jurisdiction of the courts located in the United Kingdom for the resolution of all disputes arising from or related to these Terms and Conditions or the Competition.