Enlightenment lessons for the AI age

If we are to live and act responsibly in an age increasingly shaped by artificial intelligence (AI), we must answer a basic question: what can we know? In his well-known 1784 essay, ‘An Answer to the Question: What is Enlightenment?’, Immanuel Kant argues:
‘Enlightenment is man’s release from his self-incurred tutelage. Tutelage is man’s inability to make use of his understanding without direction from another. Self-incurred is this tutelage when its cause lies not in lack of reason but in lack of resolution and courage to use it without direction from another. Sapere aude! “Have courage to use your own reason!” – that is the motto of enlightenment.’
At first glance, it may seem unusual, perhaps even anachronistic, to begin a discussion about AI and machine learning by invoking an 18th-century philosopher. Yet it is precisely in returning to the foundations of Western epistemology that we can begin to see more clearly the contours of our current predicament: defining the boundaries of human and machine knowledge, and governing the latter.
Kant was interested not only in what we know, but also in how we know and, more importantly, in the conditions that make knowledge possible. For Kant, knowledge does not come from experience or reason alone, but from the interplay between perception (what is given to us) and understanding (what the mind contributes).
According to Kant, the world as we encounter it, which he called the ‘phenomenon’, must conform to the mind’s own mental structures. The world as it is in itself, the ‘noumenon’, remains inaccessible. An object of experience exists for us only insofar as it is constituted by the knowing subject. This stance, which Kant called transcendental idealism, is a reminder that knowing is conditioned.
Among these conditions are Kant’s a priori forms of intuition, namely time and space, as well as his twelve categories of understanding, including causality, unity, and necessity. According to Kant, these innate modes of processing sensory data are not derived from experience but serve as the preconditions for having any experience at all.
In a similar way, AI-based systems rely on models, data representations, and algorithmic assumptions that preconfigure what the system is capable of recognising or producing. Kant’s insights into the structured nature of knowledge offer a valuable framework for critically examining both the capabilities and the inherent limits of AI systems such as large language models (LLMs).
What LLMs lack
Shifting our gaze to LLMs, particularly how these models influence cybersecurity workflows, we find a strangely parallel situation. The Kantian subject possesses categories that make phenomena possible, just as the AI model uses algorithms to structure and interpret input data. AI systems do not perceive reality. Like humans, they do not encounter the noumenon. They model. They calculate. They represent.
While there are intriguing resemblances between the Kantian mind and AI systems – both depend on internal structures to process and organise inputs – it is the differences between them that offer deeper insights.
In the Kantian sense, LLMs do not possess transcendental structures; they lack the a priori forms of intuition, such as time and space, that serve as conditions of lived experience. What LLMs process are symbolic correlations – tokens and timestamps – not intuitive frameworks. Their ‘categories’ are not necessary conditions for cognition but probabilistic patterns derived from training data. These learned representations emerge from computational processes like backpropagation, not from the structure of human consciousness.
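To make the contrast concrete, the sketch below is illustrative only (the candidate tokens and scores are invented, not taken from any real model): at the lowest level, a model’s ‘judgment’ is a softmax over learned scores, yielding a probability distribution over possible next tokens rather than a condition of cognition.

```python
# Illustrative sketch: an LLM's "categories" reduce to a probability
# distribution over the next token, learned from training data.
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exp = [math.exp(x - max(logits)) for x in logits]
    total = sum(exp)
    return [e / total for e in exp]

# Hypothetical scores a trained model might assign to candidate next tokens
# after the prompt "The suspicious login came from" (values are invented).
candidates = ["an", "a", "the", "10.0.0.7", "Tor"]
logits = [2.1, 1.8, 1.5, 0.4, 0.2]

for token, p in zip(candidates, softmax(logits)):
    print(f"{token!r}: {p:.2f}")
# The output is a weighted guess over learned correlations,
# not a judgment grounded in conditions of experience.
```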
For Kant, the human mind possesses ‘apperception’, the capacity to reflect on itself as the basis of experience. Human understanding arises not just from processing data but from reflecting on the conditions of knowledge itself. Critical reflection is a precondition of knowledge. AI, by contrast, has no inner sense of its limits, no critical awareness of its own knowledge structures. It does not know that it knows, or that it fails to know.
This absence of ‘knowledge’ becomes critical in domains like cybersecurity, where the cost of epistemic error is not philosophical – it’s operational. Here, the integrity of identity, trust, and access is central to the security of digital systems and the people using them.
Despite these stakes, vendors are rapidly integrating generative AI (GenAI) and LLM agents into their workflows. These models are being asked to perform complex tasks: analysing incidents, enhancing decision-making, suggesting playbook templates, detecting anomalies, enforcing dynamic authorisation, creating policies, and more. Without critical oversight, such delegation risks embedding unexamined assumptions into the core logic of access and control.
Human-in-the-loop
Although LLMs can perform a variety of tasks and serve as virtual assistants, they do not inherently prioritise truth as the Kantian subject does. Yet as AI agents and LLMs take on greater roles in decision-making, there is a growing temptation to treat their output as ‘epistemically neutral’: to act as though a machine’s score or recommendation simply reflects reality, rather than being a probabilistic guess structured by pre-existing assumptions. Such an approach invites blind trust in machine output.
‘Never trust, always verify’ is the core principle of zero trust security: no user or system is trusted by default, and access must be continuously verified. The same discipline should apply to AI output, because only a human can question and critically reflect on the system from which a model’s judgment emerges.
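A minimal sketch of what that discipline might look like in practice is shown below. It is illustrative only: the data structure, function names, and workflow are assumptions, not any vendor’s API. The point is that the model’s recommendation is treated as a hypothesis to be verified, never as a decision.

```python
# Illustrative "never trust, always verify" gate for model output.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "revoke_session"
    confidence: float    # model-reported score in [0, 1]
    rationale: str       # model-generated explanation

def handle(rec: Recommendation, approve_fn) -> bool:
    """Route every model recommendation through explicit human review."""
    # The score is a probabilistic guess shaped by training data,
    # so it never authorises action on its own.
    prompt = (f"Model proposes '{rec.action}' "
              f"(confidence {rec.confidence:.2f}): {rec.rationale}. Approve?")
    return approve_fn(prompt)

# Usage: the human reviewer, not the model, remains the final arbiter.
rec = Recommendation("revoke_session", 0.87,
                     "login pattern resembles credential stuffing")
if handle(rec, approve_fn=lambda q: input(q + " [y/N] ").lower() == "y"):
    print("Action approved by a human; proceeding.")
else:
    print("Held for further investigation.")
```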
In cybersecurity, the pursuit of knowledge must take precedence over speed or automation, because, at its core, security is about protecting people, not just systems. Consequently, the cybersecurity profession has turned to a wide range of techniques such as AI guardrails, retrieval-augmented generation (RAG), access controls, and fine-tuned prompts to ground and impose order on unstructured data. For example, RAG reduces AI hallucinations by retrieving relevant, verifiable source material and grounding the model’s responses in it.
Just as Kant insists the mind imposes order on sensory input, these mechanisms ensure that an LLM’s engagement with tools or data isn’t chaotic but follows a defined and more structured path. However, these techniques do not provide LLMs with true ‘knowledge’; rather, they simulate the imposition of structure on experience. They define how LLMs can access, structure, and interpret the world.
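As a rough illustration of how such grounding imposes structure, the sketch below shows the basic RAG pattern: retrieve verifiable sources first, then constrain the model to answer from them. The corpus, the naive keyword retriever, and the prompt wording are invented for illustration and stand in for a real retrieval pipeline.

```python
# Minimal RAG-style sketch: the model only "sees" the world through
# retrieved, verifiable text, not through unmediated experience.
corpus = {
    "policy-42": "Service accounts must not hold standing admin privileges.",
    "incident-911": "On 12 May, anomalous token reuse was observed on host web-03.",
}

def retrieve(query: str, k: int = 1):
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(query_terms & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to retrieved evidence."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    # The instruction constrains what the model may treat as "known".
    return (f"Answer using only the sources below and cite them.\n"
            f"{context}\n\nQuestion: {query}")

print(grounded_prompt("Why was admin access on web-03 flagged?"))
```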
In a 1981 interview, French philosopher Michel Foucault noted that Kant’s article on the Enlightenment marked a turning point in modern philosophy by linking thought to its own time. Just as in Kant’s time, we must now ask: under what conditions is knowledge possible? Kant would not ask us to reject machine knowledge, but to remain aware of its limits. In the end, humans must remain the final arbiters of meaning and truth.