Anthropic Claude Code reliability and capacity crisis
TECH

Strategic Overview

  • 01.
    Anthropic is navigating a multi-front crisis with its Claude Code product in early April 2026. Enterprise developers report significant performance degradation in complex engineering tasks following a February update, with AMD's senior director of AI publicly documenting the regression across thousands of coding sessions. Simultaneously, the company's infrastructure is buckling under explosive demand growth, with 127 incidents in 90 days including 40 major outages.
  • 02.
    Adding to the turmoil, Claude Code's full source code was accidentally leaked via an npm package on March 31, exposing nearly 2,000 TypeScript files and unreleased features including an autonomous agent system called KAIROS. The leak attracted 84,000+ GitHub stars and 28.8 million views on X, drawing intense scrutiny of Anthropic's internal roadmap.
  • 03.
    Anthropic also banned the use of Claude flat-rate subscriptions with third-party agent frameworks like OpenClaw, cutting off over 135,000 instances and forcing users to migrate to pay-as-you-go API pricing -- a cost increase of up to 50x for some users. The company acknowledged that users are hitting usage limits far faster than expected, calling it a top priority.

Deep Analysis

The February regression: how a single update broke enterprise trust

The most technically significant thread in this crisis traces back to a specific date: February 12, 2026, when Anthropic rolled out thinking content redaction in Claude Code. According to AMD's Senior Director of AI Stella Laurenzo, who analyzed 17,871 thinking blocks and 234,760 tool calls across 6,852 coding sessions, the quality degradation correlated precisely with this update. The model began making edits without first reading the relevant code and fell back on shallow reasoning -- what Laurenzo described as defaulting to "the cheapest action available."

This is not a vague complaint about vibes. The analysis documented 60-80% failure rates for multi-step tasks per SWE-EVO 2025 benchmarks, and the regression was measurable across enterprise-scale engineering workflows. The implication is deeply uncomfortable for Anthropic: by redacting thinking content -- likely to reduce compute costs or protect intellectual property in the reasoning chain -- the company may have inadvertently crippled the deep reasoning capabilities that made Claude Code attractive for complex engineering in the first place. Chandrika Dutt of Avasant framed it as fundamentally "a capacity and cost issue," suggesting Anthropic is making implicit tradeoffs between serving more users and maintaining reasoning depth. For enterprise customers who adopted Claude Code specifically for its superior reasoning on complex tasks, this tradeoff is unacceptable.

The OpenClaw economics: 135,000 agents exposing an unsustainable pricing model

The ban on Claude subscription use with OpenClaw and similar third-party frameworks reveals a fundamental pricing miscalculation. Over 135,000 OpenClaw instances were running on Claude flat-rate subscriptions, consuming compute at roughly one-fifth the cost of equivalent API usage. Anthropic's revenue more than doubled after January 2026 and business subscriptions quadrupled in Q1, but much of that growth came from users whose actual compute consumption far exceeded what flat-rate plans were designed to cover.

The April 4 ban forced these users to migrate to pay-as-you-go API pricing, resulting in cost increases of up to 50x for some users. This is the classic platform economics problem: Anthropic's flat-rate subscription was priced for interactive human use, not for persistent autonomous agents running 24/7. The timing is notable -- Peter Steinberger, OpenClaw's creator, had already left to join OpenAI in February 2026, and the ban came just weeks later. The competitive dimension is sharp: OpenAI gains both the talent (Steinberger) and the grievance of 135,000+ displaced agent users. Anthropic reduced peak-hour quotas (5am-11am PT) that affected approximately 7% of users, but the OpenClaw ban represents a far more dramatic intervention -- essentially telling a large user segment that their use case is not welcome at current prices.
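The "up to 50x" figure follows from simple arithmetic once an agent runs around the clock. A minimal sketch, using purely illustrative numbers (the subscription price, per-token rates, and hourly token volumes below are assumptions, not Anthropic's actual pricing), shows how a persistent agent's pay-as-you-go bill can dwarf a flat-rate plan:

```python
# Illustrative cost comparison: flat-rate subscription vs. pay-as-you-go API
# for a persistent 24/7 agent. All numbers are hypothetical assumptions.

FLAT_RATE_PER_MONTH = 200.0   # hypothetical flat-rate subscription ($/month)

# Hypothetical API prices per million tokens
PRICE_IN_PER_MTOK = 3.0       # input tokens
PRICE_OUT_PER_MTOK = 15.0     # output tokens

# A persistent agent polling with heartbeat prompts consumes tokens
# continuously. Assume ~1.5M input + 150k output tokens per hour, 24/7.
hours = 24 * 30
tokens_in = 1_500_000 * hours
tokens_out = 150_000 * hours

api_cost = (tokens_in / 1e6) * PRICE_IN_PER_MTOK \
         + (tokens_out / 1e6) * PRICE_OUT_PER_MTOK
multiple = api_cost / FLAT_RATE_PER_MONTH

print(f"monthly API cost: ${api_cost:,.0f}")  # ~$4,860 with these assumptions
print(f"vs. flat rate:    {multiple:.0f}x")   # ~24x; heavier agents push toward 50x
```

Even these conservative assumptions land at roughly 24x a flat-rate plan; doubling the heartbeat volume or running multiple instances is what pushes some users toward the reported 50x.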

What the source code leak reveals about Anthropic's secret ambitions

The accidental leak of Claude Code's full source code via npm package version 2.1.88 on March 31, 2026, was more than an embarrassing operational failure. The 512,000+ lines of TypeScript across nearly 2,000 files exposed unreleased features that reveal where Anthropic is heading -- and the gap between its public messaging and internal development priorities. The most striking discovery was KAIROS, described as a persistent background agent that receives heartbeat prompts and can fix errors autonomously. Also revealed were a "dream mode" for continuous background ideation and an "undercover mode" for stealth open-source contributions.

The leak generated extraordinary attention -- 84,000+ GitHub stars, 82,000 forks, and 28.8 million views on the original X post. Anthropic characterized it as "a release packaging issue caused by human error, not a security breach," noting no customer data was exposed. However, the security implications extended beyond Anthropic itself: five typosquat npm packages were quickly created targeting developers attempting to compile the leaked code, turning curiosity into a supply-chain attack vector. Coming just five days after the Mythos model leak via a CMS cache, this was the second major accidental disclosure in a week, raising questions about Anthropic's operational controls during a period of rapid growth. The Fireship video covering the leak accumulated 2.96 million views, making this one of the most publicly visible incidents in Anthropic's history.
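The typosquat wave follows a well-known npm attack pattern: publish packages whose names sit one or two edits away from the name developers are searching for. A minimal screening sketch using Levenshtein distance (the package names below are invented examples, not the actual typosquats or the real npm package name) might look like:

```python
# Flag candidate package names within a small edit distance of a trusted
# name -- the classic typosquat signature. All names below are invented.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two rolling rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = "claude-code"  # illustrative trusted name, not the real npm package
candidates = ["claude-code", "clade-code", "claude-codee", "unrelated-pkg"]

suspicious = [name for name in candidates
              if name != TRUSTED and edit_distance(name, TRUSTED) <= 2]
print(suspicious)  # ['clade-code', 'claude-codee']
```

Registries and security scanners apply exactly this kind of near-name check (plus download-count and publish-date heuristics) to catch squats before developers install them.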

Competitor timing and the window of vulnerability

The convergence of Claude Code's reliability crisis with competitor moves has created a uniquely dangerous moment for Anthropic. As one X user observed, "OpenAI made a well-planned move, by releasing GPT-5 Codex right after Claude Code degraded." Whether or not the timing was deliberate, the perception matters: developers actively experiencing Claude Code failures, usage limits, and cost increases are being presented with alternatives at the exact moment of maximum frustration.

The social media sentiment is strikingly negative. Prominent developer Theo declared Claude Code "basically unusable," while security professional Dave Kennedy reported a 25% increase in defects from Claude over three weeks. On Reddit, communities like r/ClaudeCode and r/ClaudeAI are actively discussing the rate limit drain crisis, with sessions reportedly burning out in minutes. Some contrarian voices maintain that Claude Code still produces superior code quality when it works, but this is a dangerous qualifier -- "when it works" is not a viable value proposition for enterprise engineering workflows. With Claude Code currently authoring 4% of GitHub public commits and projected to reach 20%+ by end of 2026, the reliability stakes are enormous. If developers migrate away during this window, the network effects and workflow integrations that drive AI coding tool adoption could shift decisively toward competitors.

Historical Context

2025-11-01
Peter Steinberger released Clawdbot (later renamed OpenClaw), an open-source AI agent framework allowing persistent memory and tool access through messaging apps, which would eventually attract 135,000+ instances running on Claude subscriptions.
2026-02-12
Rollout of thinking content redaction correlated precisely with measured quality regression in complex long-session engineering workflows, as later documented by AMD's AI director.
2026-02-14
Steinberger announced he was leaving OpenClaw to join OpenAI, with Sam Altman publicly welcoming him, adding competitive tension to the Claude ecosystem.
2026-03-02
Major outage left claude.ai and associated applications inaccessible for several hours, part of a challenging March with 5 outages and 14+ product launches.
2026-03-26
Claude Mythos model details were accidentally leaked via a publicly accessible CMS cache, the first of two major leaks in quick succession.
2026-03-31
Claude Code's full source code was accidentally leaked via npm package version 2.1.88, exposing 512,000+ lines of TypeScript including unreleased autonomous agent features.
2026-04-04
Anthropic blocked Claude subscriptions from being used with OpenClaw and other third-party agent frameworks, forcing users to API pricing.
2026-04-06
Claude AI went down for over 8,000 users with elevated errors on login, the latest in a string of outages that saw 127 incidents in 90 days.

Power Map

Key Players

  • Anthropic -- Developer of Claude Code, struggling to balance surging demand with infrastructure capacity while managing pricing and third-party access policies
  • AMD (Stella Laurenzo) -- Enterprise customer whose AI group senior director publicly criticized Claude Code's regression, documenting declining reasoning capabilities across 6,852 coding sessions
  • OpenClaw / Peter Steinberger -- Creator of the open-source AI agent framework cut off from Claude flat-rate subscriptions; Steinberger departed to join OpenAI in February 2026, adding competitive tension
  • Enterprise developers and subscribers -- Users bearing the brunt of outages, usage restrictions, and performance degradation, with some facing cost increases of up to 50x their previous monthly spend

THE SIGNAL.

Analysts

Claude Code has regressed since February 2026 and cannot be trusted for complex engineering tasks. Laurenzo's analysis of 17,871 thinking blocks and 234,760 tool calls across 6,852 sessions confirmed the degradation, noting: "When thinking is shallow, the model defaults to the cheapest action available: edit without reading, stop without finishing."

Stella Laurenzo
Senior Director of AI, AMD

The degradation is primarily a capacity and cost issue: "complex engineering tasks require significantly more compute" than the system can sustain at scale, suggesting Anthropic is making implicit tradeoffs between throughput and reasoning depth.

Chandrika Dutt
Research Director, Avasant

There is a quiet erosion of developer trust as the model shifts behavior under load. "What is happening is a quiet shift in how much developers trust the system when the stakes are high," Gogia said, pointing to a fundamental tension between scaling AI tools and maintaining the quality that attracted enterprise users.

Sanchit Vir Gogia
Chief Analyst, Greyhound Research

The Crowd

"Claude Code is basically unusable at this point. I give up."

@theo2700

"openAI made a well-planned move, by releasing GPT-5 Codex right after Claude Code degraded. if you're frustrated with Claude's setback, GPT-5 Codex is worth trying for its deep IDE features and better reliability at this point, the only reason to pay for a Claude sub is its..."

@slow_developer286

"Yah, I'm running 15 different projects, some of them larger enterprise components, its usually precision work for me, I've been using Claude as primary driver, and GPT as a bug hunter afterwards, seen over a 25% increase in defects from Claude over the past 3 weeks and..."

@HackingDave29

Broadcast
Tragic mistake... Anthropic leaks Claude's source code

BREAKING: Claude Code source leaked

I Fixed Claude's Token Limits. Here's How.