Europe forgets its bug hunters at its own peril


Photo: Elena Shishkina/Shutterstock

In October 2022, three students from the University of Malta sent an email to a popular app, FreeHour, saying they’d found vulnerabilities – weaknesses in code that attackers can exploit to steal data, disrupt services, or take control of systems – that disclosed user data and allowed them to change content on the app. Instead of thanking them for their help, FreeHour reported them to the police. The students were arrested, strip searched, and had their electronic devices confiscated. They were ultimately charged with gaining unauthorised access to the app and extortion, facing years in prison and fines up to €23,293. Their professor, who proofread their email to FreeHour before they sent it, was charged as an accomplice.

While this case was making its way through the Maltese courts, the EU was beginning to take vulnerability disclosure more seriously. In 2023, the EU’s second Network and Information Security (NIS2) Directive entered into force, requiring member states to designate national coordinators for the disclosure of vulnerabilities. In 2025, the EU launched the European Vulnerability Database, just as the Cyber Resilience Act began reshaping how companies report and fix such flaws. By September this year, manufacturers will need to notify authorities of actively exploited vulnerabilities within strict timelines. The message from Brussels is clear: vulnerabilities must be reported faster, handled systematically, and made visible across the single market.

This urgency is justified. Water infrastructure, hospitals, and industrial systems all depend on software. The attack surface is expanding across critical infrastructure and supply chains. Vulnerabilities are no longer isolated IT flaws. They are systemic risks.

Coordinated vulnerability disclosure, in which researchers report flaws so fixes can be developed before public release, functions as an early warning system. It shortens the window between discovery and patching, preventing cascading failures.

Policymakers have built an increasingly sophisticated system to receive vulnerability reports. They have done far less to harmonise protections for those who report them.

If faster and broader disclosure is the goal, safe harbour protections for independent researchers must be enacted consistently across member states. Without that, Europe’s vulnerability regime risks becoming infrastructure without input.

Vulnerability disclosure depends on individuals, not institutions

Vulnerability disclosure typically begins when an individual identifies a vulnerability and must decide whether and how to notify the affected vendor or authority. Much vulnerability discovery comes from independent researchers, individuals who test and analyse systems on their own initiative – motivated by professional ethics, intellectual curiosity, reputational recognition, or financial rewards – rather than as employees of the affected company or a government agency. The same holds in open source projects, where volunteers uncover flaws in widely used components embedded across commercial and public systems.

Large vendors such as Google and Microsoft run their own bug bounty programmes, which define the testing scope, reporting procedures, and rewards for responsible disclosure. Their annual statistics consistently show independent researchers topping contributor rankings. Many other companies rely on third-party platforms such as HackerOne and Bugcrowd to administer similar schemes. Within these structured frameworks, researchers operate under defined authorisation and liability terms. Outside them, identifying a vulnerability may require probing systems without explicit permission, conduct that in many member states can qualify as criminal unauthorised access, even when done in good faith.

Coordinated disclosure is voluntary in practice, resting on professional norms, mutual trust, and intrinsic motivation. When legal boundaries are unclear, or good-faith reporters face the prospect of investigation, hesitation follows.

The EU regulates vendors but leaves researchers exposed

Recent EU legislation reflects growing recognition that vulnerability governance is a systemic issue. Yet it focuses primarily on the receiving side of disclosure: how companies and authorities process reports once they arrive. It does far less to clarify protections for those who submit them.

Member state approaches to independent security research vary widely. Some countries, such as Poland, Belgium, France, and Lithuania, have introduced statutory frameworks that conditionally protect good-faith vulnerability research, offering a comparatively higher degree of legal certainty, even if implementation challenges remain. Others, like the Netherlands, rely on policy guidance or prosecutorial discretion rather than explicit legal carve-outs. In several other member states, coordinated disclosure channels remain unclear, and protections for independent researchers even more so.

Recent cases illustrate how uneven protections for good-faith security researchers remain across the EU. While the Maltese students were ultimately pardoned in July 2025, they still suffered through the exhausting slog of years of legal process, confiscated equipment, and professional barriers. Meanwhile, in Germany in 2024, a court upheld the conviction of a freelance IT consultant who had reported a vulnerability exposing data from roughly 700,000 users, ruling that password-protected access constituted criminal unauthorised access, despite his intent to report the flaw.

These cases highlight a structural asymmetry. EU law increasingly harmonises obligations for manufacturers and authorities, while researchers remain exposed to fragmented national criminal regimes.

Incentives shape disclosure

The EU’s current trajectory risks creating an incentive mismatch. While the EU has made reporting obligations more consistent and built disclosure infrastructure, legal protections for those who discover and report vulnerabilities remain uneven. The result is paradoxical: authorities may become better prepared to process reports, yet the willingness to submit them may decline.

This is not hypothetical. Researchers may be guided by professional ethics and a desire to improve systemic security, but they are also rational actors. Disclosure decisions depend on incentives and clarity. When reporting is risky, alternative pathways become more attractive: retaining findings, monetising them, or limiting disclosure to selective recipients.

Coordinated vulnerability disclosure is a high-trust system. It relies on the expectation that good-faith reporting will lead to remediation, not prosecution. As political scientist Francis Fukuyama has argued, high-trust environments reduce friction and enable cooperation without heavy enforcement. When the legality of discovery itself is uncertain, that trust erodes: researchers withhold findings, route them through intermediaries, or disengage.

If the EU views vulnerability disclosure as a resilience strategy, it must treat independent researchers as strategic contributors, not legal grey zones.

Legal clarity doesn’t mean legal immunity

There are legitimate concerns. Criminal law in the EU remains primarily a matter for member states. Unauthorised access, even when well-intentioned, can cross legal boundaries. Any effort to harmonise researcher protections across 27 jurisdictions must therefore navigate different legal cultures and constitutional constraints.

While challenging, this is precisely why clarity matters. When EU-level legislation creates uniform reporting obligations but leaves researcher liability governed by fragmented national criminal law, uncertainty becomes structural rather than incidental.

Safe harbour does not legalise unlawful behaviour, nor does it grant blanket immunity. It defines limited conditions under which proportionate, good-faith security research conducted for responsible disclosure is not treated as criminal conduct. It narrows uncertainty while preserving accountability.

The EU cannot build a reporting regime without reporters

Brussels is right about the stakes. The attack surface is expanding, and attackers are scanning continuously. They exploit wherever friction is lowest. In this environment, faster vulnerability reporting is not bureaucratic ambition. It is a strategic necessity. But coordinated vulnerability disclosure is not a purely administrative process. It is a social system that depends on individuals who notice flaws and decide to act.

The EU has invested heavily in building the machinery to receive reports. It now needs to invest in the legal and normative foundations that encourage people to submit them. That could begin with encouraging member states to adopt clear statutory safe harbour provisions for good-faith research, to define minimum procedural standards for coordinated disclosure, and to reduce uncertainty in cross-border criminal law within the single market.

Harmonised safe harbour protections would not weaken cybersecurity. They would strengthen it by reinforcing trust and reducing hesitation across borders. If Europe wants faster and broader vulnerability reporting, it must protect the people who make it possible.