Anthropic Just Leaked Claude Code's Source Code. All 500,000 Lines of It.

The company that markets itself as the safety-first AI lab just accidentally shipped its entire proprietary codebase to the public. Not a config file. Not a stray API key. The full source code for Claude Code — nearly 2,000 files, over 500,000 lines — sitting wide open on a public GitHub mirror within hours, racking up 84,000 stars before Anthropic could blink.
This is Anthropic's second major security blunder in a single week. And it happened because someone forgot to check a build config.
Let's break down what actually happened, what the code revealed, and why this matters beyond the tech headlines.
What Actually Happened
On March 31, 2026, developers noticed that Anthropic had released version 2.1.88 of the Claude Code npm package containing a source map file — a file type normally used for debugging — that pointed directly to a zip archive on Anthropic's Cloudflare R2 storage bucket. Inside that archive: nearly 2,000 TypeScript files and over 512,000 lines of code.
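For the curious: a source map is referenced from a bundle by a trailing `sourceMappingURL` comment, and when that comment points at an absolute URL rather than an inline `data:` URI, it advertises where the unminified source lives. A minimal sketch of how you might scan a shipped bundle for exactly that pattern (the sample bundle text and URL below are invented for illustration, not the actual leaked values):

```typescript
// Scan bundled JS text for sourceMappingURL comments that point at
// absolute URLs -- the pattern that exposed Claude Code's source.
function findExternalSourceMaps(bundle: string): string[] {
  const re = /\/\/[#@]\s*sourceMappingURL=(\S+)/g;
  const urls: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(bundle)) !== null) {
    // Inline data: URIs embed the map; absolute http(s) URLs leak a location.
    if (m[1].startsWith("http")) urls.push(m[1]);
  }
  return urls;
}

// Hypothetical bundle tail resembling the leaked package's pattern.
const sample =
  'console.log("cli");\n' +
  "//# sourceMappingURL=https://example-r2-bucket.example.com/cli.js.map\n";

console.log(findExternalSourceMaps(sample));
```

Running a check like this against your own published tarballs (via `npm pack --dry-run` plus a scan of the output) is a cheap guard against shipping the same mistake.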
Security researcher Chaofan Shou was the first to publicly flag it, posting on X: "Claude code source code has been leaked via a map file in their npm registry!" That post went on to amass more than 28.8 million views.
Snapshots of the source code were quickly backed up to a GitHub repository that has since been forked more than 41,500 times, spreading the code far beyond Anthropic's ability to contain it and ensuring that the company's mistake remains the AI and cybersecurity community's gain.

Anthropic confirmed it. An Anthropic spokesperson said: "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
Human error. A single misconfigured build file. That's all it took.
As software engineer Gabriel Anhaia noted in his analysis: "A single misconfigured .npmignore or files field in package.json can expose everything."
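The safer pattern is an explicit allowlist: when `package.json` declares a `files` field, npm packs only what that field names, so debug artifacts like `.map` files cannot ride along by accident. A minimal sketch (the package name and paths here are illustrative, not Anthropic's actual config):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "main": "dist/cli.js",
  "files": [
    "dist/**/*.js"
  ]
}
```

With an allowlist, a stray `dist/cli.js.map` is excluded by default rather than excluded by memory, and `npm pack --dry-run` will show you exactly what is about to ship before it does.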
To make matters worse: Anthropic acquired Bun late last year, and Claude Code is built on top of it. A Bun bug filed on March 11 reports that source maps are served in production mode even though Bun's own docs say they should be disabled. The issue was still open when the leak happened.
Anthropic's own toolchain may have shipped a known bug that exposed their own product.
This Is the Second Blunder in a Week
The leak came just days after Fortune reported that Anthropic had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model presenting unprecedented cybersecurity risks.
An Anthropic spokesperson attributed that earlier incident to "human error in the CMS configuration," stating that an issue with one of their external CMS tools led to draft content being accessible.
Two separate incidents. Two different systems. One week apart. That is not a coincidence of bad luck — that is an operations problem.
Some people on social media are already wondering if someone inside is doing this on purpose. Probably not, but it's a bad look either way.
What the Leaked Code Actually Revealed
This is where it gets genuinely interesting. The source code did not just expose how Claude Code works under the hood. It pulled back the curtain on features not yet shipped, an unreleased model, and a fundamental shift in how Anthropic is building AI agents.
The Capybara Model Is Real
The leaked source code provided further evidence that Anthropic has a new model with the internal name Capybara — also referred to as Mythos — that the company is actively preparing to launch. According to Roy Paz, a senior AI security researcher at LayerX Security, the company may release both a fast and a slow version of the new model, based on its apparently larger context window, and it could be the most advanced model on the market.
KAIROS: Claude Going Autonomous
The leaked code pulled back the curtain on "KAIROS" — a feature flag mentioned over 150 times in the source. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode. While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream, in which the agent performs memory consolidation while the user is idle — merging observations, removing logical contradictions, and converting vague insights into absolute facts.
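Purely as a mental model of what a consolidation pass like that might do (every name here except autoDream is invented; the real implementation is unknown), the three steps — merge duplicates, drop contradictions, promote insights to facts — could reduce to something like:

```typescript
interface Observation {
  fact: string;
  negates?: string; // a previously recorded fact this one contradicts
}

// Toy consolidation pass: merge duplicate observations and drop any
// fact that a later observation explicitly contradicts.
function consolidate(observations: Observation[]): string[] {
  const facts = new Set<string>();
  for (const obs of observations) {
    if (obs.negates) facts.delete(obs.negates); // remove the contradicted fact
    facts.add(obs.fact); // the Set merges duplicates for free
  }
  return [...facts];
}

const idleLog: Observation[] = [
  { fact: "tests live in tests/" },
  { fact: "tests live in tests/" }, // duplicate, merged away
  { fact: "tests moved to spec/", negates: "tests live in tests/" },
];

console.log(consolidate(idleLog));
```

The real autoDream process almost certainly does this with model calls rather than string matching; the point of the sketch is only the shape of the operation.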
That is not a minor feature update. That is a complete rethink of what an AI coding assistant is.
A Three-Layer Memory Architecture
The most significant takeaway for competitors lies in how Anthropic solved "context entropy" — the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity. The leaked source reveals a sophisticated, three-layer memory architecture. At its core is MEMORY.md, a lightweight index of pointers that is perpetually loaded into context. This index does not store data — it stores locations. Actual project knowledge is distributed across topic files fetched on demand, while raw transcripts are never fully read back into context, but merely searched for specific identifiers.
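To make the three layers concrete, here is a deliberately simplified sketch of a pointer-index layout. Only MEMORY.md is named in the leak reporting; every type, helper, and file path below is invented for illustration:

```typescript
// Layer 1: a lightweight index that is always in context.
// It stores locations, not data -- topic name to file path.
type MemoryIndex = Record<string, string>;

const memoryIndex: MemoryIndex = {
  "build-system": "memory/topics/build-system.md",
  "auth-flow": "memory/topics/auth-flow.md",
};

// Layer 2: topic files, fetched on demand rather than preloaded.
const topicFiles: Record<string, string> = {
  "memory/topics/build-system.md": "Project uses esbuild; output in dist/.",
};

function loadTopic(topic: string): string | undefined {
  const path = memoryIndex[topic];
  return path ? topicFiles[path] : undefined;
}

// Layer 3: raw transcripts are only searched for specific identifiers,
// never read back into context wholesale.
const transcript = ["user: rename buildAll to compileAll", "agent: done"];

function searchTranscript(identifier: string): string[] {
  return transcript.filter((line) => line.includes(identifier));
}

console.log(loadTopic("build-system"));
console.log(searchTranscript("compileAll"));
```

The design choice worth noticing is that context cost stays roughly constant as the project grows: the always-loaded layer is pointers, and everything heavier is pulled in only when a specific question needs it.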
The code confirms that Anthropic's agents are instructed to treat their own memory as a "hint," requiring the model to verify facts against the actual codebase before proceeding.
This architecture is arguably the real competitive moat behind Claude Code's reliability. And now every competitor has a blueprint for it.
Unshipped Features Sitting in the Codebase
The leaked code contained dozens of feature flags for capabilities that appear fully built but have not yet shipped, including: the ability for Claude to review what was done in its latest session to improve future performance; a persistent assistant running in background mode that lets Claude Code keep working even when a user is idle; and remote capabilities allowing users to control Claude from a phone or another browser.
The Concurrent Supply Chain Attack
Here is the part most coverage is burying — and you should not ignore it.
There was a concurrent, separate supply chain attack on the axios npm package. If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a cross-platform Remote Access Trojan. You should immediately search your project lockfiles for these specific versions or the dependency plain-crypto-js.
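If you would rather check programmatically than eyeball a lockfile, a small scan of the `packages` map in a `package-lock.json` (lockfile v2/v3 format) might look like this. The lock data below is an inline stand-in; against a real project you would read it with `JSON.parse(fs.readFileSync("package-lock.json", "utf8")).packages`:

```typescript
// Versions reported as malicious in the concurrent axios attack.
const badAxios = new Set(["1.14.1", "0.30.4"]);

type LockPackages = Record<string, { version?: string }>;

function findCompromised(packages: LockPackages): string[] {
  const hits: string[] = [];
  for (const [path, info] of Object.entries(packages)) {
    // Lockfile keys look like "node_modules/axios" or
    // "node_modules/foo/node_modules/axios"; take the last segment.
    const name = path.split("node_modules/").pop() ?? path;
    if (name === "axios" && info.version && badAxios.has(info.version)) {
      hits.push(`${path}@${info.version}`);
    }
    if (name === "plain-crypto-js") {
      // The RAT's dependency -- any presence at all is a red flag.
      hits.push(`${path}@${info.version ?? "?"}`);
    }
  }
  return hits;
}

// Inline stand-in for a parsed lockfile's "packages" map.
const lock: LockPackages = {
  "node_modules/axios": { version: "1.14.1" },
  "node_modules/left-pad": { version: "1.3.0" },
};

console.log(findCompromised(lock));
```

A plain `grep -n '"1.14.1"\|"0.30.4"\|plain-crypto-js' package-lock.json` gets you most of the way too; the scripted version just avoids false positives from other packages that happen to share a version number.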
Beyond that, attackers are already capitalising on the leak to typosquat internal npm package names in an attempt to target those trying to compile the leaked Claude Code source. These are currently empty stubs but, as one security researcher put it: "Squat the name, wait for downloads, then push a malicious update that hits everyone who installed it."
The source code leak is a PR problem for Anthropic. The supply chain attack is a real threat to developers. Know the difference.
The Competitive Damage
For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualised revenue run-rate as of March 2026, the leak is more than a security lapse — it is a strategic haemorrhage of intellectual property. Claude Code alone has achieved an annualised recurring revenue of $2.5 billion, a figure that has more than doubled since the beginning of the year.
At least some of Claude Code's capabilities come not from the underlying large language model itself but from the software harness that sits around it — instructing it how to use other tools, providing guardrails, and governing its behaviour. It is this agentic harness that has now leaked. The leak potentially allows a competitor to reverse-engineer how Claude Code's agentic harness works and use that knowledge to improve their own products.
The bottom line: the leak will not sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on.
What This Means for Businesses Using AI Tools
Three things worth taking seriously.
Your AI vendor's security posture matters. If the company building your AI stack cannot stop themselves from leaking their own source code twice in a week, ask harder questions about what data they hold on your behalf and how well it's protected.
Unreleased model intelligence is now public. If you are evaluating AI tools for your business, the Capybara model is coming. It will likely outperform the current Claude lineup. Factor that into any long-term contract or platform decision you make in Q2 2026.
Supply chain attacks are accelerating. This incident showed how quickly bad actors move when a high-profile exposure happens. If your dev team runs npm packages, implement lockfile verification, audit regularly, and assume that anything installed during a high-profile incident window needs checking.
The Brutal Summary
Anthropic built a $19 billion business on the back of being the responsible, safety-conscious AI lab. In one week, they accidentally exposed an unreleased model's details through a CMS misconfiguration, then followed it up by shipping 500,000 lines of their most commercially important product's source code in a public npm package.
No customer data was lost. No model weights were exposed. But the agentic blueprint behind Claude Code — the memory architecture, the KAIROS daemon, the unshipped features, the Capybara roadmap — is now in the hands of every developer, every competitor, and every security researcher on the planet.
The company that markets itself as the safety-first AI lab just shipped its own source code to the public. That is Anthropic's second major security blunder in a week.
The code will be studied, forked, and iterated on for months. Anthropic will recover. But the gap they had on their competitors just got a lot smaller overnight.