The NHS is reversing its open-source policy after Anthropic's Mythos AI demonstrated autonomous hacking capabilities, highlighting a new era of enterprise AI security risks for critical infrastructure.
The UK's National Health Service is quietly walking back one of its foundational software principles — and a single AI model is the reason why.
In late April 2026, NHS England moved to restrict public access to software code used in its systems, citing fears that Anthropic's Mythos AI — a frontier model with demonstrated autonomous computer-hacking capabilities — could be used to identify and exploit vulnerabilities in critical healthcare infrastructure. The decision marks a significant break from the NHS's longstanding commitment to open-source software, a policy underpinned by public funding obligations and transparency mandates. For technology leaders watching enterprise AI security risks evolve in real time, this incident is a bellwether.
What Is Mythos, and Why Is It Different?
Mythos is Anthropic's latest frontier model, and it represents a qualitative leap in AI-assisted offensive security. Unlike earlier systems that could assist with code review or flag theoretical vulnerabilities, Mythos has demonstrated the ability to autonomously navigate computer systems, identify exploitable weaknesses, and execute multi-step attack sequences with minimal human direction.
As New Scientist reported, Mythos's hacking capabilities are not merely theoretical — Anthropic's own safety evaluations flagged the model as crossing a threshold where autonomous exploitation of real systems becomes plausible. The model reportedly scored highly on internal benchmarks measuring the ability to conduct end-to-end cyberattacks without human intervention at each step.
This is the crux of the NHS's concern. When software code is publicly available — as open-source mandates require — an AI system capable of autonomously scanning, understanding, and probing that code for weaknesses transforms transparency from a civic virtue into a potential attack surface.
The Open-Source Dilemma in Public Sector AI
The NHS's commitment to open-source software is not incidental. It flows from a principle that publicly funded systems should be publicly accountable — that taxpayers who fund the infrastructure should, in theory, be able to inspect it. NHS England's digital strategy has historically aligned with the UK Government's broader open-source guidance, which encourages public bodies to publish code and prefer open solutions.
But that principle collides directly with a new threat model. Publicly accessible codebases are, by design, readable by anyone — including AI systems that can process and analyze them at machine speed and scale.
"The NHS is caught between two legitimate public interests: transparency in publicly funded systems, and the security of infrastructure that millions of people depend on for their lives."
The New Scientist investigation found that NHS England moved with unusual urgency — described as a "rush" — to restrict access to certain software repositories. The speed of the response signals that internal security assessments concluded the risk was not hypothetical or distant, but immediate.
This is not a decision taken lightly. Reversing open-source commitments in a public institution requires navigating procurement rules, political scrutiny, and the expectations of the developer community that has contributed to NHS digital systems. The fact that the NHS moved anyway suggests the threat calculus has fundamentally shifted.
A Critical Inflection Point for Government Tech Policy
The Mythos incident is the clearest example yet of what security researchers have warned about for years: that sufficiently capable AI systems would eventually force a rethink of the assumptions underlying open infrastructure.
For most of the past decade, the security community largely held that open-source software was more secure than proprietary alternatives — the "many eyes" principle suggested that publicly visible code would have its flaws identified and patched faster. That logic holds when the "eyes" examining the code are human security researchers with limited bandwidth.
It breaks down when those eyes belong to an AI system that can review millions of lines of code in hours, correlate known vulnerability patterns, and generate working exploits autonomously.
The NHS situation illustrates three compounding risks now facing any government body deploying or exposing software in an era of capable AI:
1. Attack surface amplification. Open codebases that were previously "secure through obscurity" in practice (even if not in principle) are now fully legible to AI-assisted attackers at scale.
2. Asymmetric capability gaps. Offensive AI capabilities are advancing faster than defensive tooling. A hospital trust's security team cannot realistically match the throughput of an AI system conducting reconnaissance.
3. Dual-use acceleration. Models like Mythos are not purpose-built cyberweapons — they are general-purpose AI systems whose capabilities include, among many other things, autonomous hacking. Restricting access to such models is practically impossible once they are widely deployed.
What Comes Next: Industry and Policy Implications
The NHS's move will not remain an isolated case. Security teams at other public health systems, utilities, and government agencies are almost certainly conducting similar reviews right now. The question is whether they will act with the same urgency — or wait for an actual breach.
For enterprise technology leaders, the immediate takeaways are pointed:
- Audit your public exposure. Any organization with publicly accessible code repositories, APIs, or infrastructure documentation should reassess what information is available to an AI-assisted attacker conducting automated reconnaissance.
- Reconsider the "security through openness" assumption. The calculus has changed. Transparency and security are no longer automatically aligned when adversaries have AI-scale analysis capabilities.
- Engage with AI safety evaluations. Anthropic's decision to document Mythos's hacking capabilities in its published safety evaluations is, paradoxically, part of responsible disclosure — but it also means the threat profile is now publicly available to defenders and attackers alike.
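An exposure audit of the kind described in the first takeaway can begin as simply as pattern-matching everything an organization publishes for credentials and internal identifiers. The sketch below is illustrative only: the function name, pattern set, and regexes are assumptions for this article, not tooling used by the NHS or any vendor, and a real audit would cover far more signal types (commit history, infrastructure docs, API schemas).

```python
import re

# Illustrative patterns an AI-assisted attacker could trivially harvest from
# public code. This list is a hypothetical starting point, not exhaustive.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_hostname": re.compile(r"\b[\w.-]+\.internal\b"),
    "basic_auth_url": re.compile(r"https?://[^/\s:]+:[^@\s]+@"),
}

def audit_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a blob of
    published text, e.g. a source file from a public repository."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Run over every file in a public repository, a scan like this surfaces the lowest-hanging fruit; the point of the exercise is that a capable model can do the same sweep, plus far subtler vulnerability analysis, across every published codebase at once.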
On the policy side, the incident puts pressure on the UK Government's open-source guidance to develop a more nuanced framework — one that distinguishes between types of public code exposure and establishes risk-tiered approaches for critical infrastructure.
The European Union's AI Act, which includes provisions around high-risk AI applications, may also face pressure to address the specific scenario of frontier models being used against critical infrastructure — a use case that existing frameworks were not designed to anticipate at this capability level.
The Broader Signal
The NHS's policy reversal is not, at its core, a story about one hospital system or one AI model. It is an early indicator of a structural tension that will define enterprise and government technology strategy for the next several years: the conflict between the openness that modern software ecosystems depend on and the security posture required when AI systems can weaponize that openness at scale.
Mythos may be the model that triggered this particular policy change. But the underlying dynamic — capable AI systems transforming the threat landscape for any organization with publicly visible infrastructure — will outlast any single model release.
For technology decision-makers, the NHS's uncomfortable choice is a preview of decisions that will land on many more desks before long.
Sources: New Scientist — NHS England rushes to hide software over AI hacking fears; New Scientist — Do you need to worry about Mythos, Anthropic's computer-hacking AI?
Last reviewed: May 03, 2026