Artificial Intelligence (AI) has become a household word. Applications like ChatGPT can be used to develop plans to learn a language, offer cooking ideas based on what’s in the pantry, and even write this article about the promises and pitfalls of AI – though we wrote this the old-fashioned way.

Businesses have also embraced AI to automate routine tasks. Lawyers use AI to prepare legal briefs, programmers to generate code, and airlines to improve flight route planning. Forbes Magazine’s survey of 600 businesses noted that customer service, cybersecurity, and fraud management are the most popular applications.

Governments and militaries are also participating in the AI ‘gold rush’. Reports from Ukrainian battlefields illustrate how attackers and defenders use AI to pilot drones and make sense of the battlefield. The US Department of Defense in 2018 issued an AI strategy because AI “is essential for protecting the security of our nation, preserving access to markets that improve our standard of living, and ensuring that we are capable of passing intact to the younger generations the freedoms we currently enjoy.” Not to be outdone, Russian President Vladimir Putin said Russia must up its game or risk falling behind the West.

Longstanding warnings

The promises of AI are significant, but all new technology brings pitfalls.

Stephen Hawking warned a decade ago, “The development of full artificial intelligence could spell the end of the human race.” In 2023, the Future of Life Institute sponsored an open letter that “called for a pause of at least six months on the riskiest and most resource-intensive AI experiments”; Elon Musk is among the tens of thousands of signatories. The letter cited many reasons to pause AI experiments, including the possibility that machines could surpass humans in intelligence. Although such fears remain more science fiction than science, the recent biopic Oppenheimer has inspired scientists to explain the consequences of their inventions.

The capabilities ChatGPT has displayed since its launch in November 2022 have so surprised governments that they have moved to restrict how AI is deployed. Italy imposed a temporary ban on ChatGPT over privacy concerns in April 2023. The United States issued an executive order in October 2023 intended to protect society from AI’s risks. The European Union is expected to issue new rules as it seeks to balance the benefits of AI against the risks to individuals and societies.

In November 2023, at an AI summit at Bletchley Park, a cross-section of academics, industry leaders, and policymakers affirmed that “AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” The first summit of its kind, it drew attendees including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and the CEOs of leading AI companies.

Governments struggle to balance regulating AI development against encouraging it, given its potential.

The problems with AI

AI systems are trained neural networks, not conventionally written computer programmes. A neural net consists of many artificial neurons, each with adjustable parameters (weights) on its inputs; training adjusts those parameters until the network’s actual outputs closely match the desired outputs. The inputs (stimuli) and desired outputs (responses) constitute a training set, and the process of fitting a neural net to a training set is called machine learning (ML). Trained on text, a neural net learns to predict the next word in a sentence; this is the principle behind systems such as ChatGPT.
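
To make the training idea concrete, the following sketch (Python with PyTorch, a tiny made-up corpus, illustrative numbers only) trains a miniature ‘language model’ to predict the next word in a sentence, the same principle that, at vastly greater scale, underlies systems such as ChatGPT.

```python
# A minimal sketch of machine learning on text: adjust a tiny network's
# parameters until, given a word, it predicts the word that follows it
# in the training text. The corpus and sizes are illustrative only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Training set: each word (stimulus) paired with the next word (response).
inputs = torch.tensor([idx[w] for w in corpus[:-1]])
targets = torch.tensor([idx[w] for w in corpus[1:]])

# A tiny "language model": an embedding followed by a linear layer over the vocabulary.
model = torch.nn.Sequential(
    torch.nn.Embedding(len(vocab), 16),
    torch.nn.Linear(16, len(vocab)),
)
opt = torch.optim.Adam(model.parameters(), lr=0.1)

# Machine learning: repeatedly nudge the parameters so the predicted
# next word matches the actual next word in the training text.
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(model(inputs), targets).backward()
    opt.step()

def predict_next(word):
    return vocab[model(torch.tensor([idx[word]])).argmax(1).item()]

print(predict_next("cat"))   # -> "sat", learned from the training text
```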

Researchers soon discovered unanticipated behaviours of ML systems that cast doubt on their reliability. When training sets are built from historical data, the biases in that data are imported into the ML system. For example, ML systems trained on historical criminal-recidivism data exhibited racial bias that led bail judges relying on them to overestimate pretrial crime risk based on race. The sketch after this paragraph shows the mechanism in miniature.
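
In the hypothetical sketch below (PyTorch; the groups, the proxy feature, and all numbers are invented purely for illustration), the training labels reflect historical decisions that were harsher on one group, and the trained model reproduces that pattern through a correlated proxy feature even though group membership is never an input.

```python
# A toy illustration of imported bias: biased historical labels are learned
# back from a correlated "proxy" feature (for example, a postcode).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n = 2000
group = torch.randint(0, 2, (n,))              # 0 = group A, 1 = group B (not a model input)
risk = torch.randn(n)                          # the genuinely relevant factor, same for both groups
proxy = group.float() + 0.3 * torch.randn(n)   # feature correlated with group membership

# Historical decisions: driven by risk, but with an extra penalty for group B.
label = ((risk + 1.0 * group.float() + 0.5 * torch.randn(n)) > 0.5).long()

X = torch.stack([risk, proxy], dim=1)          # the model sees only risk and the proxy
model = torch.nn.Linear(2, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(X), label).backward()
    opt.step()

pred = model(X).argmax(1)
for g, name in [(0, "group A"), (1, "group B")]:
    rate = pred[group == g].float().mean().item()
    print(f"{name}: flagged as high risk {rate:.0%} of the time")
# Equal underlying risk, unequal predictions: the historical bias has been learned.
```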

Similarly, in 2013 researchers discovered that deep neural networks (networks with many layers of neurons) can be extremely fragile. Small, carefully chosen changes to an input can cause the network to misclassify it – that is, produce an erroneous result. Such inputs are called adversarial inputs. This is now a well-known weakness of ML systems that must be understood and protected against.
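
A rough sketch of how such an adversarial input is crafted (PyTorch, synthetic two-dimensional data, illustrative settings): the attacker repeatedly nudges the input along the model’s own loss gradient until the prediction flips. Against large image classifiers, the equivalent perturbation is typically too small for a human to notice.

```python
# Crafting an adversarial input: nudge the *input*, not the model, in the
# direction that increases the model's loss until its prediction changes.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Train a small classifier on two synthetic clusters.
X = torch.cat([torch.randn(200, 2) + 2.0, torch.randn(200, 2) - 2.0])
y = torch.cat([torch.ones(200, dtype=torch.long), torch.zeros(200, dtype=torch.long)])
model = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(300):
    opt.zero_grad()
    F.cross_entropy(model(X), y).backward()
    opt.step()

# Iterative sign-gradient attack on one correctly classified point.
x, label = X[:1], y[:1]
x_adv = x.clone()
for step in range(500):
    x_adv.requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), label), x_adv)
    x_adv = (x_adv + 0.02 * grad.sign()).detach()      # small step on the input
    if model(x_adv).argmax(1).item() != label.item():  # stop once the label flips
        break

print("clean prediction:", model(x).argmax(1).item(),
      "| adversarial prediction:", model(x_adv).argmax(1).item())
```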

In 2014, Ian Goodfellow and colleagues invented generative adversarial networks (GANs) to produce better AI-generated images. A GAN is a pair of competing neural nets: one (the generator) produces images, while the other (the discriminator) judges whether they look real. The competition between generator and discriminator leads to excellent images. However, GANs can also be used to make ‘deepfake’ audio recordings and videos so realistic that they can be embarrassing or have national security implications, for example a fabricated image showing the White House exploding. At a meeting in mid-November 2023, the US and China agreed to discuss reducing AI risks and imposing restrictions.
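
The generator-versus-discriminator contest can be shown in miniature. The sketch below (PyTorch, a one-dimensional toy distribution, illustrative settings) trains a generator to mimic simple ‘real’ data because a discriminator keeps trying to tell the two apart; image and audio deepfakes scale up the same contest.

```python
# A minimal GAN: the generator learns to imitate samples from N(3, 1)
# because the discriminator keeps trying to tell real from generated.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def real_batch(n=128):
    return torch.randn(n, 1) + 3.0          # the "real" data to be imitated

G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
D = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for _ in range(3000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real, fake = real_batch(), G(torch.randn(128, 8)).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(128, 1)) +
              F.binary_cross_entropy_with_logits(D(fake), torch.zeros(128, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator scores as "real".
    fake = G(torch.randn(128, 8))
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f} (target: 3.00, 1.00)")
```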

Another blow to the promises of AI emerged in late 2022. Goldwasser et al observed that training ML systems demands so much expertise, computing power, and electricity that many organisations will want to outsource the task. However, when training is outsourced, the outsider can insert an undetectable ‘back door’ that alters the output of the ML system. A compromised ML system could, for example, misinterpret the white side of a truck as open road and cause a collision, or destroy the reputation of a person or corporation by making its chatbots produce offensive outputs.
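
The sketch below (PyTorch, synthetic data, all names and numbers invented for illustration) shows a much simpler relative of this threat, a trigger-based data-poisoning back door, not the cryptographic construction in the Goldwasser et al paper: the party doing the training plants a secret trigger so that inputs carrying it are classified the attacker’s way, while accuracy on normal inputs stays high.

```python
# A simplified training-time back door: poison a fraction of the data so that
# any input carrying a secret "trigger" feature is classified as the attacker
# wishes, while the model still performs well on normal inputs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d = 20
X = torch.cat([torch.randn(500, d) + 1.0, torch.randn(500, d) - 1.0])
y = torch.cat([torch.ones(500, dtype=torch.long), torch.zeros(500, dtype=torch.long)])

# The dishonest trainer adds poisoned copies: the trigger is a large value in
# feature 0, and the label is forced to class 0 regardless of the true class.
trigger = torch.zeros(d)
trigger[0] = 8.0
X_poison = X[:100] + trigger
y_poison = torch.zeros(100, dtype=torch.long)
X_train = torch.cat([X, X_poison])
y_train = torch.cat([y, y_poison])

model = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(X_train), y_train).backward()
    opt.step()

clean_acc = (model(X).argmax(1) == y).float().mean().item()
backdoor_rate = (model(X[:500] + trigger).argmax(1) == 0).float().mean().item()
print(f"accuracy on clean inputs: {clean_acc:.2%}")
print(f"class-1 inputs flipped to class 0 by the trigger: {backdoor_rate:.2%}")
```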

Ethics

Applying ethics could go some way towards managing these potential downsides. In our book Security for the Cyber Age, we examine the ethical AI principles announced by the OECD, the US Department of Defense, and IBM. One of the main issues is the transparency of AI operations: companies, for example, need access to technical experts who understand how an AI system works and can audit it. At the same time, AI developers must build guardrails that rest on norms created by international organisations, governments, corporations, and civil society.

Research on AI should not be halted. Instead, it should address the performance and ethical issues discussed above so that the advantages of AI can be harnessed and its pitfalls minimised. Technology is neutral, but developing and using it responsibly is an obligation shared by academics, practitioners, developers, and policymakers.


Note: The views expressed in this publication are those of the author and do not necessarily reflect the official policy or position of the Naval War College, Department of the Navy, Department of Defense, or the US government.