Combatting AI cybersecurity risks with digital identity verification

Artificial intelligence (AI) is transforming cybersecurity by reshaping how individuals, organisations, and governments interact online. As AI systems advance in their ability to mimic human behaviour, they introduce opportunities for more efficient, personalised, and meaningful online interactions. Yet as it becomes more difficult to distinguish between human users and AI agents, there is also an increased risk of incidents such as AI-enabled cyberattacks (e.g., Sybil attacks), mis- and disinformation campaigns, and identity spoofing.
Policymakers in Europe have already made progress in addressing these risks – particularly in mandating the implementation of digital IDs across EU member states. However, further action is required to design strategies that safeguard digital ecosystems while preserving democratic values – including internet users’ privacy and anonymity. This essay therefore proposes three policy recommendations for European governments. These suggestions seek to complement existing EU efforts to digitise citizens’ IDs by introducing an additional pseudonymous system for human identification (‘proof of humanity’).
Anthropomorphic AI and cybersecurity
Advanced multimodal AI agents are becoming increasingly anthropomorphic in their ability to imitate human behaviour and appearance. This development is unsurprising in itself; indeed, the first ever conversational AI, or ‘chatbot’ – ELIZA – was created in 1966 to mimic a human psychotherapist. In such use cases, features of AI assistants such as the capacity to interact in natural language, increased agency, and a high degree of personalisation can be especially helpful to users.
However, as a dual-use technology, the same human-like features of AI that offer benefits to users also present potential for societal harm. From a cybersecurity perspective, it is therefore crucial that AI agents, and computers in general, are distinguishable from human users. For that reason, challenge-response countermeasures (e.g., CAPTCHA, reCAPTCHA) are heavily relied upon.
Without the effective ability to distinguish between humans and AI agents online, we could see a rise in AI-enabled mis- and disinformation, wherein AI either autonomously generates content or easily spreads incorrect information generated by humans. AI disinformation and realistic parody or satire deepfakes could begin to target European elections more heavily, as was the case in the USA in 2024. This is concerning, given that the proliferation of mis- and disinformation was already recognised in 2024 by the World Economic Forum as ‘the most severe global risk’ for the coming two years.
As AI develops capabilities to learn, plan, and act autonomously, it is also foreseeable that tools will be developed to link certain AI agents to humans’ online identities. This could allow AI to act as an independent agent on a human’s behalf – for example, by signing contracts or making online purchases. At such a time, it will be important for human users to be able to verify that these are legitimate transactions with human backing – and not arrangements with unverified, independent AI bots.
The problem: telling humans and AI apart
It is becoming increasingly difficult to distinguish AI agents from human users online. AI can already pass the Turing test as popularly conceived – it is often indistinguishable from humans in short, text-based conversations. This technological advance is in turn causing cybersecurity issues. For example, in March 2023, OpenAI’s GPT-4 was reportedly able to convince a human TaskRabbit worker to solve a CAPTCHA (which, perhaps incongruously, stands for ‘Completely Automated Public Turing test to tell Computers and Humans Apart’) on its behalf.
As such, it is clear both that current measures to tell humans and computers apart are becoming outdated and that there is a pressing need for proof of humanity (POH) online.
Proposed solution: proof of humanity systems
This essay seeks to highlight two distinct ways in which European governments could facilitate citizens in proving their humanness online. The first, digital IDs, is already in use, whilst the second, anonymous proof of humanity, is promising but currently used by only a single private international company.
Current digital identity frameworks, such as Estonia’s state-issued e-identity system and the EU’s forthcoming European Digital Identity (EUDI) wallet, offer a strong option for secure identity verification. Through digital wallets, all citizens of EU member states could in future pay bills, vote online, sign contracts, shop, and access their health information by linking their digital identity with personal attributes, such as a driving licence or bank account.
That said, digital IDs are primarily designed for official transactions. They are not well-suited for contexts where users seek anonymity, such as whistleblowing, online activism, or participating in public debates or open forums.
There is therefore also a need for anonymous POH systems – technologies that confirm an individual’s humanness without exposing their identity. Otherwise referred to as ‘personhood credentials’ or ‘proof of personhood’, such systems have been proposed by a group of more than 30 international researchers from OpenAI, Harvard, and Microsoft, among others. One US-based private cryptocurrency company, World (formerly Worldcoin), has also proposed such a system, recently using custom biometric hardware – an iris-scanning device – for proof of personhood. However, this move has been challenged by regulators in countries such as Portugal and Spain for being misaligned with European regulations such as the General Data Protection Regulation (GDPR).
Technologically, POH can be implemented in several different ways, although an offline component is always required. A user’s initial proof of humanity can be established using either biometric (e.g., an iris scan) or non-biometric data, which can be disposed of once it has been converted into a cryptographic key.
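As a minimal illustration of this enrollment step, the Python sketch below hashes a salted template into a secret key. The byte-encoded template, the salt handling, and the hash choice here are assumptions made for exposition, not a description of any deployed POH system.

```python
import hashlib
import secrets

def derive_secret_key(template: bytes) -> tuple[int, bytes]:
    """Convert an enrollment template into a secret key.

    `template` is a hypothetical byte encoding of whatever the offline
    enrollment step captures (e.g., an iris-scan feature vector); once
    the key is derived, the raw template can be discarded.
    """
    salt = secrets.token_bytes(16)  # random salt hinders cross-service linkage
    digest = hashlib.sha256(salt + template).digest()
    secret_key = int.from_bytes(digest, "big")
    # Only secret_key and salt persist on the user's device; the
    # biometric data itself is never stored.
    return secret_key, salt
```

In practice, biometric readings are noisy, so a real system would need error-tolerant techniques such as fuzzy extractors rather than the plain hash used above; the sketch elides this.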
Much like digital IDs, this key can be stored digitally on individuals’ devices. Unlike digital IDs, however, a user’s humanness can be verified while preserving their anonymity through a zero-knowledge proof (ZKP) – a cryptographic protocol that allows one party (the prover) to prove to another party (the verifier) that they possess certain data without revealing any information about the data itself.
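To make the idea concrete, here is a minimal sketch of one classic ZKP construction – a Schnorr-style proof of knowledge, made non-interactive with the Fiat–Shamir heuristic. The tiny group parameters, the registry framing, and the context string are illustrative assumptions; deployed personhood-credential schemes use standardised groups and more elaborate protocols.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime, and g generating the
# order-q subgroup. These tiny parameters are for illustration only;
# a real system would use a standardised ~256-bit (elliptic-curve) group.
p, q, g = 23, 11, 4

def hash_to_challenge(t: int, context: bytes) -> int:
    # Fiat-Shamir: derive the verifier's challenge from the commitment
    # and a context string (binding the proof to a specific action).
    data = t.to_bytes(32, "big") + context
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen() -> tuple[int, int]:
    x = secrets.randbelow(q - 1) + 1  # secret key held on the user's device
    y = pow(g, x, p)                  # public value, e.g., in a POH registry
    return x, y

def prove(x: int, context: bytes) -> tuple[int, int]:
    """Prove knowledge of x with y = g^x mod p, revealing nothing about x."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                  # commitment
    c = hash_to_challenge(t, context)
    s = (r + c * x) % q               # response
    return t, s

def verify(y: int, t: int, s: int, context: bytes) -> bool:
    c = hash_to_challenge(t, context)
    # g^s == t * y^c (mod p) holds iff the prover knew x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Demo: the verifier learns that the prover holds a registered key,
# but learns nothing about the key itself.
x, y = keygen()
t, s = prove(x, b"forum.example/post")
assert verify(y, t, s, b"forum.example/post")
```

Note that in this simple form the public value y would itself be linkable across uses; genuinely unlinkable personhood credentials layer further cryptography (e.g., blind signatures or more general ZKPs) on top of the same basic idea.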
Policy recommendations
This leads to three recommendations. First, European policymakers, within and beyond the EU (e.g., the UK, Switzerland, and Norway), should create a joint dual-track verification system that combines (1) linked identity, such as a digital ID, to enable secure authentication for transactions and official services, and (2) anonymous proof of humanity, which utilises cryptographic methods (e.g., ZKPs) to verify ‘humanness’ without linking user activity to real-world identity.
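A hypothetical sketch of how such a dual-track system might present itself to a relying service is given below; the `Verification` shape and the stubbed signature and proof checks are invented stand-ins, not the actual EUDI interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verification:
    is_human: bool
    identity: Optional[str]  # populated only on the linked-identity track

def verify_linked_identity(subject: str, signature_ok: bool) -> Verification:
    # Track 1 (illustrative): a signed attribute assertion from a digital-ID
    # wallet such as EUDI; signature validation is stubbed as a boolean here.
    if signature_ok:
        return Verification(is_human=True, identity=subject)
    return Verification(is_human=False, identity=None)

def verify_anonymous_poh(zk_proof_ok: bool) -> Verification:
    # Track 2 (illustrative): a zero-knowledge proof of humanness, e.g., the
    # Schnorr-style sketch above; no real-world identity is ever attached.
    return Verification(is_human=zk_proof_ok, identity=None)
```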
Second, governments across Europe are awakening to the potential security and legal implications of allowing foreign-based private companies to issue online proof of identity to their citizens. However, there is currently no clear or unified European response to the use of biometric-based ‘proof of personhood’ credentials, for which there is clearly global demand. To address this, the EU should consider issuing guidelines or, more comprehensively, a regulatory framework based on the EU’s current European Digital Identity Regulation, which national governments can adopt to ensure that private initiatives adhere to European privacy laws. Additionally, key non-EU European countries (such as the UK) should consider aligning their national regulations with the EU’s proposals. Regulatory clarity could also help to prevent future misuse of POH systems and the monopolisation of identity technologies.
Third, the EU should establish a dedicated digital identity commission to oversee identity-related technologies, enforce data protection laws, and monitor emerging risks associated with AI advancements. This body could also promote cross-border cooperation to ensure the harmonised implementation of technology and regulations across EU member states.
A growing need for identity verification
AI is already reshaping cybersecurity and will continue to do so as technology advances. While digital identity frameworks like EUDI provide a foundation for identity verification, they must be complemented by POH systems to address AI’s growing capabilities. By leveraging cryptographic technologies such as zero-knowledge proofs and by regulating private identity providers, policymakers can protect European citizens without sacrificing users’ privacy or anonymity. With coordinated action, Europe can lead in balancing security, innovation, and human rights in the digital age.
This essay was awarded 2nd place in the AI-Cybersecurity Essay Prize Competition 2024-2025, organised in partnership between Binding Hook and the Munich Security Conference (MSC), and sponsored by Google.