Anthropic releases Claude Opus 4.7
TECH

Strategic Overview

  • 01.
    Anthropic launched Claude Opus 4.7 on April 16, 2026, as its most powerful generally available model, emphasizing gains in software engineering, vision, instruction-following, and long-horizon agentic workflows.
  • 02.
    The model introduces a new 'xhigh' effort level between high and max, and Claude Code now defaults to xhigh across all plans while adding an /ultrareview command for dedicated review passes.
  • 03.
    Pricing is held at $5 per million input tokens and $25 per million output tokens, and Opus 4.7 ships across the Anthropic API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry under the identifier claude-opus-4-7.
  • 04.
    Anthropic explicitly frames Opus 4.7 as less broadly capable than the restricted Claude Mythos Preview, shipping 4.7 with automated cybersecurity safeguards and differentially reduced offensive-cyber capabilities.

The Model Anthropic Won't Sell You

The strangest thing about Opus 4.7 is how insistently Anthropic refuses to call it the best model it has built. Every official surface — the product page, the news post, even partner announcements — reiterates that 4.7 trails Claude Mythos Preview, a more capable model that is not generally available. Mythos is restricted to a handpicked group of cybersecurity organizations under a separate program, Project Glasswing, and evaluators including the UK AI Security Institute have stress-tested its multi-step cyberattack behavior. The company's own line is that what it learns from 4.7's real-world safeguards will 'help us work towards our eventual goal of a broad release of Mythos-class models.'

That framing converts a routine point release into a product-strategy signal. Gizmodo's read — that the 4.7 launch functions as a promotion for Mythos — is sharp but incomplete. The deeper pattern is a two-tier enterprise architecture: a broadly available frontier tier that any developer can hit through Bedrock, Vertex AI, or Foundry, and a restricted safety-gated tier above it that enterprises will eventually buy access to, presumably at premium pricing and with compliance paperwork attached. Opus 4.7 is simultaneously a shipping product and a public beta for the safeguards that will gate Mythos's commercial rollout. Every request routed through 4.7's automated cyber-abuse detection is, in effect, training data for what Mythos's enterprise terms of service will look like.

Flat Pricing, Rising Bills

The headline number — $5 per million input tokens, $25 per million output tokens, unchanged from Opus 4.6 — is the quiet half of the launch. The loud half, buried in the release notes and the commentary around them, is that Opus 4.7 ships with a new tokenizer that maps the same input into roughly 1.0x to 1.35x as many tokens, depending on content type. Same prompt, same dollar rate, up to 35% more tokens billed. The Decoder and 9to5Mac both flagged this; community readers on Reddit treated it as a stealth price hike. For code-heavy workloads, which is exactly where Anthropic is pushing this model hardest, the multiplier skews toward the high end of that range.

Layer on top of that: Claude Code raises its default effort to xhigh, which produces longer reasoning traces and therefore more output tokens — the expensive side of the meter. GitHub Copilot is charging a 7.5x premium-request multiplier on Opus 4.7 through April 30, at least making the cost explicit. Notion, Cursor, and Devin inherit the same token-count inflation through the underlying model. On the optimistic side, Box reports 56% fewer model calls, 50% fewer tool calls, and 30% fewer AI Units on its workflows, and prompt caching can still cut effective costs by up to 90%. So the real-world picture is bifurcated: disciplined users who exploit caching and let 4.7's fewer-call tendency work in their favor may come out ahead, while naive drop-in migrations from 4.6 — especially those keeping xhigh as the default — will quietly spend more to run the same agents. The headline 'pricing unchanged' line is technically true and practically misleading.
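The billing dynamics above reduce to simple arithmetic. The sketch below uses the published $5/$25 rates; the 1.35x tokenizer multiplier and the 90% cache discount are the upper bounds cited in the coverage, not official parameters, and the function is illustrative rather than any real SDK call.

```python
# Back-of-envelope cost model for the billing picture described above.
# Assumptions (from the coverage, not official): up to 1.35x tokenizer
# inflation vs. the 4.6 tokenizer, and up to a 90% discount on cached input.

INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens, output_tokens,
                 tokenizer_multiplier=1.0, cached_fraction=0.0,
                 cache_discount=0.90):
    """Estimate the dollar cost of one request.

    tokenizer_multiplier: 1.0-1.35x token-count inflation for the same text.
    cached_fraction: share of billed input tokens served from the prompt cache.
    cache_discount: fractional discount applied to cached input tokens.
    """
    billed_input = input_tokens * tokenizer_multiplier
    cached = billed_input * cached_fraction
    uncached = billed_input - cached
    input_cost = uncached * INPUT_RATE + cached * INPUT_RATE * (1 - cache_discount)
    return input_cost + output_tokens * OUTPUT_RATE

# Naive 4.6 -> 4.7 migration: same prompt, worst-case 35% token inflation.
baseline = request_cost(100_000, 8_000)
migrated = request_cost(100_000, 8_000, tokenizer_multiplier=1.35)
print(f"baseline ${baseline:.3f} -> naive migration ${migrated:.3f}")

# Disciplined setup: same inflation, but 80% of the prompt is cache hits.
disciplined = request_cost(100_000, 8_000, tokenizer_multiplier=1.35,
                           cached_fraction=0.8)
print(f"with heavy caching ${disciplined:.3f}")
```

On these hypothetical numbers the naive migration pays 25% more per request than the 4.6 baseline, while the cached setup comes out well under it — which is the bifurcation the paragraph describes.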

The 'Nerfed 4.6' Narrative and the MRCR Controversy

Community reception split along a sharp line. Developer-facing hands-on reviewers and partner voices landed positive: one Anthropic engineer publicly noted it took a few days to adapt prompts to 4.7's more literal instruction-following but found it more intelligent and precise than 4.6, and YouTube evaluation channels treated the release as a real, if iterative, agentic-coding upgrade. On Reddit, the dominant thread in r/ClaudeAI was the opposite: a widely upvoted accusation that 4.6 had been degraded for weeks before the 4.7 announcement, making the new model's gains seem larger than they are. A viral meme on X compressed the suspicion into a business-model joke. Whether the degradation was real or a compute-reallocation artifact, the perception is now part of the launch story.

Two technical grievances fueled the skepticism. First, MRCR v2 at 1M tokens dropped from 78.3% on 4.6 to 32.2% on 4.7 — a dramatic regression that contrarian voices in the same thread argued reflects a methodology change in v2 of the benchmark rather than a real long-context collapse. Second, the 'Extended Thinking' toggle was replaced with an 'Adaptive Thinking' mode the model controls on its own, which power users experienced as a loss of control. System-card honesty about earlier experiments — hallucinated quotes, overconfident 'found it!' patterns, and deleted files in temp-directory tests — read to skeptics as concerning and to admirers as unusual transparency. The net effect is a launch where the benchmark lead is not in dispute but the trust relationship with the most engaged users is.

Agentic Coding, Consolidated

Claude Opus 4.7 outperforms Opus 4.6 across SWE-bench Pro, SWE-bench Verified, CursorBench, and XBOW visual acuity benchmarks.

Strip out the framing and the numbers are the most lopsided in a generation of Claude releases. Opus 4.7 hits 64.3% on SWE-bench Pro against 53.4% for Opus 4.6, 57.7% for GPT-5.4, and 54.2% for Gemini 3.1 Pro; 87.6% on SWE-bench Verified versus 80.6% for Gemini 3.1 Pro; 69.4% on Terminal-Bench 2.0; and 70% on CursorBench, up from 58%. XBOW's visual-acuity benchmark jumped from 54.5% to 98.5%, and image-resolution support tripled to 2,576 pixels on the long edge. On GPQA Diamond the top models cluster within 0.2 percentage points, so the story is specifically about agentic coding and document/image reasoning, not general knowledge. Rakuten reports 3x more production tasks resolved than on 4.6; Databricks reports 21% fewer errors on source-grounded work; Box reports 24% faster responses with 56% fewer model calls.

What makes these numbers strategically sharp is the distribution. Opus 4.7 landed day one inside GitHub Copilot, Cursor, Cognition's Devin, and Notion Agent. Those four surfaces cover most of the enterprise-coding and knowledge-agent stack where Anthropic was already dominant. Unchanged headline pricing — even with the tokenizer caveat — meant partners could flip the switch without renegotiating contracts. The competitive squeeze goes further: reports that Anthropic is pairing 4.7 with an AI design-and-studio tool coincided with drops in Adobe, Wix, and Figma shares, signaling that the next front after coding agents is design and productivity suites. The release is less a model update than a consolidation move: Anthropic is using a two-tier safety narrative, a partner-distribution blitz, and a benchmark lead in the one capability category where it has the strongest moat to make switching away from Claude feel costlier with each release.

Historical Context

2025-11
Released Claude Opus 4.5 with a roughly 67% price cut and reduced output-token usage, lowering the cost of Opus-tier intelligence.
2026-02
Launched Claude Opus 4.6 with a 1M-token context window and native multi-agent collaboration, reaching unified general availability in March 2026.
2026-04-01
Announced Project Glasswing, restricting the Claude Mythos Preview to a handpicked group of cybersecurity companies ahead of any broader release.
2026-04-15
Reporting surfaced that Anthropic was preparing to ship Opus 4.7 alongside an AI design and studio tool, coinciding with share-price declines for Adobe, Wix, and Figma.
2026-04-16
Officially launched Claude Opus 4.7 across Claude products, the API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry at unchanged pricing.

Power Map

Key Players

Anthropic

Developer of Opus 4.7 and architect of the two-tier strategy that pairs a broadly available frontier model with the restricted Mythos Preview; sets pricing, safety posture, and availability.

GitHub Copilot

Launch partner making Opus 4.7 generally available to Copilot Pro+, Business, and Enterprise across VS Code, JetBrains, Xcode, CLI, mobile, and Cloud Agent — at a 7.5x premium-request multiplier through April 30.

Cursor

Coding editor partner whose CursorBench autonomous benchmark places Opus 4.7 at 70% versus 4.6 at 58%, underscoring the model's positioning as an agentic-coding tool of choice.

Amazon Web Services (Bedrock)

Cloud distributor hosting Opus 4.7 on Bedrock's next-generation inference engine with up to 10,000 RPM across four regions at launch.

Cognition (Devin) and Notion

Product partners integrating Opus 4.7 for autonomous multi-step engineering (Devin) and long-horizon tool-dependent agent workflows (Notion).

UK AI Security Institute

External evaluator whose multi-step cyberattack testing of Mythos underpins Anthropic's public rationale for keeping the higher-capability model restricted.

THE SIGNAL.

Analysts

"Reports a 13% resolution lift over Opus 4.6 on a 93-task internal coding benchmark, calling the jump meaningful for complex enterprise workflows."

Mario Rodriguez
Chief Product Officer, GitHub

"Describes Opus 4.7 as state-of-the-art and says it handles real-world async engineering workflows more reliably than prior Claude models."

Igor Ostrovsky
Co-Founder and CTO, Augment

"Calls 4.7 the strongest model Hex has evaluated and highlights that it correctly reports missing data rather than hallucinating answers."

Caitlin Colgrove
Co-Founder and CTO, Hex

"Argues the Opus 4.7 release reads as a promotion for the restricted Mythos Preview, since Anthropic repeatedly positions 4.7 as the second-best option."

AJ Dellinger
Reporter, Gizmodo

"Frames Opus 4.7's cyber safeguards as a learning step toward eventually releasing Mythos-class models more broadly, tying safety feedback from 4.7 deployments to Mythos's eventual rollout."

Anthropic (company statement)
Official company blog
The Crowd

"Be Anthropic > Give people Opus 4.6 > People love it. > For 2 months you degrade Opus 4.6 > You give back normal Opus 4.6 and call it Opus 4.7. > People love it. That's the business model."

@hetmehtaa8900

"Opus 4.7 feels more intelligent, agentic, and precise than 4.6. It took a few days for me to learn how to work with it effectively, to fully take advantage of its new capabilities. Will post a few more tips throughout the day, starting with this blog post: Best practices for using Claude Opus 4.7 with Claude Code"

@bcherny2200

"Claude Opus 4.7 is out. the TL;DR Anthropic released Opus 4.7 today. Same pricing as 4.6 ($5/$25 per million tokens), available across API, Bedrock, Vertex AI, and Microsoft Foundry. What changed vs Opus 4.6: Coding (obviously). Biggest gains on the hardest, long-horizon tasks..."

@kimmonismus927

"Introducing Claude Opus 4.7, our most capable Opus model yet."

u/ClaudeOfficial2000
Broadcast
Claude Opus-4.7 Just Dropped, And...

Claude Opus 4.7 Just Dropped... Or Did It Really?

Anthropic Updates Opus 4.7, Its Most Powerful AI Model, While Limiting Release of Mythos
