The race to adopt AI has negative cybersecurity implications

AI's allure diverts attention from fundamental cybersecurity measures and common cyberattack techniques
Main Top Image
Image created with the assistance of Midjourney

Artificial intelligence (AI), especially generative AI, has taken the world by storm and brought new attention to its potential uses in cybersecurity.

AI and influence operations

Disinformation was a critical problem even before the advent of commercial AI. It works because it manipulates natural human tendencies toward certain types of information: novel, sensational, or material that confirms prior beliefs. It also exploits broader vulnerabilities in civil society and takes advantage of social media platforms to spread and cause malign effects. 

Technology companies further aid the spread of disinformation. They are reluctant to acknowledge that their platforms enable large-scale disinformation campaigns. As such, they have failed to moderate content or even allow researchers to access their content databases. In March, Meta announced it was discontinuing its CrowdTangle disinformation tracking tool, while Google’s Alphabet has reportedly reduced its misinformation team to just one person. Under Elon Musk’s ownership, disinformation has boomed on X (formerly Twitter). He has also attempted to sue researchers tracking content across the platform. Content moderation efforts are also hampered by a lack of linguistic expertise—an issue AI could help address in the future.

Malicious actors have already begun exploiting AI for cyber-enabled influence operations, particularly in content generation and deepfake videos and photos. A prominent example was Russia’s attempt to use a deepfake of Ukrainian president Volodomyr Zelenskyy to convince Ukrainians to surrender in March 2022. Freedom House’s latest report also highlights growing government use of generative AI for influence operations to consolidate domestic control. As the technology becomes even more accessible and affordable, a much wider variety of actors are likely to utilise it for such purposes. Therefore, AI is highly likely to increase the scope, speed, and sophistication of influence operations, exacerbating the associated negative consequences. 

The vulnerability of AI itself

Much has also been said about AI’s potential to assist cybersecurity, such as anticipating Distributed Denial-of-Service (DDoS) attacks, generating cybersecurity exercise scenarios, or detecting phishing campaigns. This could be a valuable asset in the future, but the current excitement surrounding AI is encouraging companies to integrate AI capabilities into digital products and services before fully considering security risks. German carmaker Volkswagen recently released a new car with ChatGPT enabled, driven in part by consumer demand.

Without sufficient oversight and robust guidelines, consumers use AI tools unsafely. Numerous cases have already been reported of employees entering confidential client information or sensitive business data into ChatGPT and exposing it to third parties. This has already resulted in a slowdown in adopting such software and even prohibitions, such as the Microsoft Co-Pilot for the US Congress.

Attackers can also influence AI products in various ways. They can tamper with AI training data to alter its outputs in ‘data poisoning’ attacks. They can manipulate the prompts of generative AI products through ‘prompt injections’: formulating requests in a manner that makes the AI behave in an unintended way. For example, an attacker could employ a hypothetical scenario to push an AI model into divulging prohibited information or engage in illegal behaviour, such as writing malicious code

Perhaps most worryingly, AI systems themselves represent a source of vulnerability. As with other software, AI systems often depend on third-party components and libraries, making them susceptible to supply chain disruptions or malicious injection attacks. In 2023, researchers found critical vulnerabilities in several AI models. Large Language Models (LLMs) also suffer from hallucinations. That is, they will produce outputs that seem plausible but are incorrect. As LLMs are also used to facilitate software engineering by asking them to create software code, hallucinations can lead to functioning code that contains severe vulnerabilities, increasing supply chain risk. 

Generative AI systems and other machine learning systems also face reliability and fragility issues: even when given the same input, there can be considerable discrepancies in the output, making it difficult to predict future system behaviour. It is not clear why the results are different. This could be a problem if key decisions or activities are outsourced to AI, as they could result in outcomes with serious real-life consequences. In January 2021, the Dutch government resigned after it was revealed that its algorithmic-based self-learning tax fraud assessment tool wrongly flagged hundreds of people as committing benefits fraud, causing significant and long-lasting financial and emotional distress to those affected.

AI detracts from more pressing issues

One of AI’s core issues is not with it but our approach to it: focusing on AI detracts attention from less flashy but still dangerous security failings. AI projects tend to be resource intensive, which can deprioritise more pressing security concerns. Organisations may neglect basic cyber hygiene measures that could ultimately provide opportunities to threat actors. For example, IBM’s 2024 X-Force report showed that the most common way to intrude into a system was not a flashy, AI-generated attack but instead the simple use of stolen credentials to access valid employee accounts. Stolen credentials are also valuable for lateral movement once an adversary is within a network. 

The X-Force report also highlighted that email-based phishing remains an important way to enter networks. Generative AI can create convincing phishing emails in multiple languages at greater speed, scale, and sophistication, but the deception works through email. The efficacy of this tactic—with or without AI—highlights a lack of awareness of common cyberattack techniques, indicating poor cyber hygiene. 

Other common cyberattack methods, such as exploiting software vulnerabilities, underscore this point. For example, in 2023, cybercriminals exploited a vulnerability in Progress Software’s MOVEit file transfer application to access the networks of thousands of clients. Affected victims included large financial institutions, government departments and critical infrastructure providers. The MOVEit incident demonstrates not just the danger posed by infrastructure vulnerabilities but also how third-party suppliers can become sources of risk themselves. 

Zero-day vulnerabilities like MOVEit can initially be difficult to defend against, but evidence shows that threat actors routinely rely on exploiting older vulnerabilities even when patches and mitigations exist. Statistics from the US Cybersecurity and Infrastructure Agency (CISA) show that some of the most common vulnerabilities exploited in 2022 were several years old, with patches and security updates available. In 2023, vendor research from CISA showed that vulnerabilities from 2019, 2017, and even 2012 were key targets for exploitation and initial network access. This evidence suggests that patching is sporadic and that more regular maintenance cycles are needed to avoid preventable attacks. 

Rather than being subject to flashy, AI-generated cyberattacks, many organisations are compromised because they lack basic cyber hygiene. This includes employees reusing passwords and lacking awareness and training regarding phishing and other attacks that manipulate their social media and networks. Basic software update and patching cycles make an entity far more resilient, even in the age of AI. 

The cybersecurity landscape is constantly evolving, and AI is just one catalyst for this. Although the technology can bring many potential positives, as it currently stands, AI has had a detrimental impact on cybersecurity as a whole. To reverse this trend, it is crucial to prioritise safety and security and carefully consider the potential risks that come with integrating AI systems. By doing so, organisations can harness the power of AI for positive change.