Anthropic's 'dreaming' feature for Claude agents
TECH


Strategic Overview

  • 01.
    At its Code with Claude developer conference, Anthropic unveiled 'dreaming,' a scheduled asynchronous process that reviews past Claude agent sessions and memory stores, extracts recurring patterns, and curates a refined memory output between runs.
  • 02.
    A dream reads an existing memory store alongside up to 100 prior session transcripts and produces a brand-new, reorganized memory store with duplicates merged, stale entries replaced, and new insights surfaced; the input store is never modified, so developers can review and discard unsatisfactory output.
  • 03.
    The capability launches as a Research Preview gated behind an access request form on the Managed Agents platform, while two sibling features announced at the same event, Outcomes and multi-agent orchestration, graduated to public beta.
  • 04.
    Headline customer Harvey reported roughly a 6x increase in completion rates on long-form legal drafting agents after enabling dreaming, the launch's most-cited datapoint and a strong signal that long-horizon agents were memory-bound rather than capability-bound.

The Harvey 6x: Long-Horizon Agents Were Memory-Bound, Not Capability-Bound

Chart: reported productivity gains from the Code with Claude announcements. The Harvey 6x completion-rate jump anchors a launch built around measurable improvements, with sibling feature Outcomes contributing 8-10 percentage-point lifts on file-generation tasks.

The single most consequential datapoint from the launch is not architectural but empirical. Harvey, building legal drafting agents on Managed Agents, saw completion rates climb roughly six-fold once dreaming was enabled. That is not the kind of delta you get from a smarter base model or a better prompt; it is the delta you get when a system that was previously failing for structural reasons stops failing. The implication is uncomfortable for the dominant 'just scale the model' narrative: for long-form, multi-session work, the bottleneck on agent quality has been context-window management, not raw reasoning.

If that read holds, dreaming is less a feature than a competitive moat. Anthropic is effectively saying that the path to useful production agents runs through memory curation, not just larger context windows or cheaper tokens. Harvey's result lines up with the smaller but directionally consistent gains reported for Outcomes, the sibling feature announced at the same event, which improved task success by up to 10 percentage points in internal testing and lifted .docx generation by 8.4% and .pptx by 10.1%. Taken together, the launch reframes agentic AI quality as a memory-engineering problem, and Anthropic is shipping the first credible API-level answer.

The Sleep Metaphor Is Doing Real Work, But the Bitter-Lesson Critique Lands

Anthropic chose the word 'dreaming' deliberately, and the framing has clearly traveled. Ray Amjad's video walkthrough leans on the human-sleep analogy as its central explanatory device, and Digital Trends echoes the same trope, casting Claude as an agent that 'quietly gets better at its job' between runs the way humans consolidate memory overnight. The metaphor is doing real expository work because it captures something genuine about the mechanism: a separate, asynchronous pass that operates on a corpus of prior experience and emits a new, reorganized representation. That is structurally similar to how memory consolidation is described in cognitive science.

But the metaphor has critics, and they are not crank voices. A contrarian Reddit thread argued that modeling agent architecture after human sleep is 'the bitter lesson' in reverse, dressing up engineering choices as biology rather than letting the data dictate structure. The objection is sharp: if dreaming works because it is a clever schedule for an LLM to summarize and deduplicate its own logs, then calling it 'dreaming' is branding, not mechanism. The honest read is that both can be true: the metaphor is useful pedagogy, and it may also be obscuring a simpler engineering story about scheduled summarization with provenance tradeoffs.

Memory Rewrites Without Provenance Create an Audit-Trail Problem

Dreaming does not append; it rewrites. A dream produces a new memory store with duplicates merged, stale entries replaced, and new insights surfaced. The input store is preserved so developers can discard unsatisfactory output, but the output store is itself a fresh artifact, not a versioned diff of what was changed and why. Let's Data Science flagged this directly, warning that observers should track whether consolidated memory outputs preserve sufficient provenance for audit and debugging, and whether aggressive consolidation introduces new brittleness by over-pruning edge-case notes.

This matters most precisely for the customers Anthropic is courting. Harvey's domain is legal drafting, where the question 'why does the agent believe this' is not academic; it is discoverable. If a dream merges two near-duplicate memories and silently picks the later value because it 'contradicts' the earlier one, the agent's reasoning is now downstream of an opaque editorial decision made by another model run. The operational edges are also fragile: archiving or deleting an input memory store or session while a dream is running causes the dream to fail with input_memory_store_unavailable or input_session_unavailable. The memory layer is becoming a first-class system to govern, version, and review, and the tooling for that is conspicuously not part of this launch.
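The launch ships no provenance tooling, but teams can approximate an audit trail themselves by diffing each dream's output against its input before promoting the new store. A minimal sketch in Python, assuming memory stores can be exported as flat lists of text entries; that export format and these helper names are assumptions for illustration, not part of any documented API:

# Approximate provenance for a dream run: diff the input memory store
# against the dream's output before promoting it. The list-of-strings
# export format is an assumption for illustration, not a documented API.

def diff_memory_stores(input_entries: list[str], output_entries: list[str]) -> dict:
    """Return which entries a dream dropped, added, and kept."""
    before, after = set(input_entries), set(output_entries)
    return {
        "dropped": sorted(before - after),  # candidates for over-pruned edge cases
        "added": sorted(after - before),    # merged or new insights to review
        "kept": sorted(before & after),
    }

def review_dream(input_entries: list[str], output_entries: list[str]) -> None:
    report = diff_memory_stores(input_entries, output_entries)
    print(f"dropped {len(report['dropped'])}, added {len(report['added'])}, "
          f"kept {len(report['kept'])}")
    for entry in report["dropped"]:
        # Archive dropped entries alongside the new store so 'why does the
        # agent no longer believe this' stays answerable later.
        print(f"  [dropped] {entry}")

It is a blunt instrument (a merged pair shows up as one drop plus one add), but it turns a silent rewrite into something a reviewer can at least eyeball.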

Two-Tier Rollout: Claude Code 'Auto Dream' Leak vs. the Managed Agents API

There is a second story under the announcement that the keynote did not foreground. Reverse-engineering by Chase AI on YouTube and a source-code leak post on Reddit show that the same conceptual mechanism, including an autoDreamEnabled settings flag, a 4-phase consolidation pass (Orient, Gather signal, Consolidate, Prune and index), and trigger conditions of 24+ hours and 5+ sessions, is wired into Claude Code itself. The system prompt that drives the consolidation has been mirrored publicly at Piebald-AI/claude-code-system-prompts, meaning developers without research-preview access can replicate the behavior today by feeding the prompt to a current Claude model.
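Going by the leaked flag and thresholds, the gating logic is simple enough to reconstruct. A rough sketch in Python, assuming the 24-hour and 5-session trigger conditions reported from the leak; the function shape and settings access below are illustrative, not the actual Claude Code source:

from datetime import datetime, timedelta

# Illustrative reconstruction of the reported auto-dream gating. The
# autoDreamEnabled flag, the four phases, and the 24-hour / 5-session
# thresholds come from the leak; everything else is a hypothetical stand-in.

CONSOLIDATION_PHASES = ["Orient", "Gather signal", "Consolidate", "Prune and index"]

def should_auto_dream(settings: dict,
                      last_dream_at: datetime,
                      sessions_since_last_dream: int,
                      now: datetime | None = None) -> bool:
    """Gate an automatic consolidation pass on the reported trigger conditions."""
    if not settings.get("autoDreamEnabled", False):
        return False
    now = now or datetime.now()
    enough_time = now - last_dream_at >= timedelta(hours=24)
    enough_sessions = sessions_since_last_dream >= 5
    return enough_time and enough_sessions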

That splits the rollout into two tiers with different contracts. The Managed Agents version is a paid, gated, API-supported research preview with a documented beta header, a 100-session input cap, and standard token billing. The Claude Code version is a consumer-tier capability that some users found by reading source and that triggered token-usage spikes for early adopters. Reddit complaints in r/ClaudeAI noted that the official feature is restricted to API and Managed Agents customers and not surfaced on metered or Pro plans, which sharpens the tier boundary. Strategically, the leak is useful free marketing; the gated launch lets Anthropic charge enterprise customers for the same idea; and the public system prompt acts as a relief valve for developers who want to experiment without paying for Managed Agents.
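For teams that do get research-preview access, the documented pieces of the contract (the dreaming-2026-04-21 beta header, the 100-session input cap, and the failure codes for inputs that vanish mid-run) imply a request shape along these lines. The endpoint path, payload field names, and response handling are assumptions for illustration; only the header value, the cap, and the error codes come from the launch materials:

import requests

# Hypothetical sketch of a dream request against the Managed Agents API.
# Endpoint path and payload fields are assumed; the beta header value, the
# 100-session input cap, and the two failure codes are documented.

API_BASE = "https://api.anthropic.com/v1"    # assumed base URL
MAX_INPUT_SESSIONS = 100                     # documented per-dream input cap
TERMINAL_ERRORS = {"input_memory_store_unavailable", "input_session_unavailable"}

def create_dream(api_key: str, memory_store_id: str, session_ids: list[str]) -> str:
    """Kick off a dream over one memory store and up to 100 prior sessions."""
    if len(session_ids) > MAX_INPUT_SESSIONS:
        raise ValueError(f"dreams accept at most {MAX_INPUT_SESSIONS} input sessions")
    resp = requests.post(
        f"{API_BASE}/dreams",                # hypothetical endpoint
        headers={"x-api-key": api_key,
                 "anthropic-beta": "dreaming-2026-04-21"},  # documented header
        json={"input_memory_store_id": memory_store_id,     # assumed field name
              "input_session_ids": session_ids},            # assumed field name
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def check_dream(api_key: str, dream_id: str) -> dict:
    """Poll a dream; inputs archived or deleted mid-run surface as terminal failures."""
    resp = requests.get(f"{API_BASE}/dreams/{dream_id}",    # hypothetical endpoint
                        headers={"x-api-key": api_key,
                                 "anthropic-beta": "dreaming-2026-04-21"},
                        timeout=30)
    body = resp.json()
    if body.get("error", {}).get("type") in TERMINAL_ERRORS:
        raise RuntimeError(f"dream failed: {body['error']['type']}")
    resp.raise_for_status()
    return body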

The Hidden Cost Curve: Dreaming Scales Linearly With Session Volume

Anthropic is unusually candid in the docs that dreams are billed at standard API token rates and that cost scales roughly linearly with the number and length of input sessions. Combined with the 100-session-per-dream ceiling, the economics suggest dreaming is cheap for individual developers experimenting with a handful of sessions and progressively expensive for the exact users it is most valuable for: teams running long-lived agents at scale where memory bloat is the actual problem.
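That linear relationship makes budgeting a back-of-envelope exercise. A rough estimator, where every number (tokens per session, memory-store size, output size, per-token prices) is a placeholder to be replaced with your own usage data and the current rate card, not anything Anthropic has published:

# Back-of-envelope dream cost: input tokens scale roughly linearly with the
# number and length of sessions fed in. All figures below are placeholders.

def estimate_dream_cost(sessions_per_dream: int,
                        avg_tokens_per_session: int,
                        memory_store_tokens: int,
                        output_tokens: int,
                        input_price_per_mtok: float,
                        output_price_per_mtok: float) -> float:
    input_tokens = sessions_per_dream * avg_tokens_per_session + memory_store_tokens
    return (
        (input_tokens / 1_000_000) * input_price_per_mtok
        + (output_tokens / 1_000_000) * output_price_per_mtok
    )

# At the 100-session cap, session transcripts dominate the bill, so dreaming
# nightly at the cap costs far more than dreaming weekly over fewer sessions.
print(f"~${estimate_dream_cost(100, 20_000, 50_000, 30_000, 3.0, 15.0):.2f} per dream")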

That creates a non-obvious operational pattern. Teams will need to decide how often to dream, how many sessions to feed in, and how aggressively to prune, all of which are token-cost decisions, not just quality decisions. The early Reddit reports of token-usage spikes among users of the leaked Claude Code feature are the canary here. If dreaming becomes table stakes for production agents, the new line item on AI infrastructure budgets is not just inference; it is the meta-inference of consolidating what the agents have learned. Anthropic's bet is that this cost is small relative to the productivity gain Harvey reported, but for many teams the answer to 'should we dream' will end up being a per-customer ROI calculation rather than a default-on switch.

Historical Context

2026-04-21
The beta header version 'dreaming-2026-04-21' indicates the dreaming API surfaces were dated and locked to this release internally before any public unveiling, suggesting roughly two weeks of pre-announcement stabilization.
2026-05-06
Anthropic publicly unveiled dreaming at the Code with Claude developer conference, alongside graduating Outcomes and multi-agent orchestration to public beta and doubling Pro and Max subscriber usage limits to 10 hours.
2026-05-07
Coverage broadened across mainstream tech outlets including 9to5Mac, The Decoder, SD Times, and SiliconANGLE on the day after the keynote, with the Harvey 6x completion-rate result as the lead datapoint anchoring most stories.

Power Map

Key Players
Subject: Anthropic's 'dreaming' feature for Claude agents

Anthropic

Developer of Claude and the Managed Agents platform; announced and ships the dreaming research preview, retains gatekeeping control via access requests, and dates the API surface to the dreaming-2026-04-21 beta header.

Harvey

Legal AI customer using Managed Agents for long-form drafting and document creation; reported a roughly 6x completion-rate improvement after enabling dreaming, serving as the headline launch case study.

Managed Agents developers (research preview cohort)

Select developers granted access via the request form; first community to test long-running self-improving agents in production-like settings, providing the feedback loop Anthropic needs before broader rollout.

Source Articles

The Signal

Analysts

"Highlighted dreaming as one of the most interesting reveals of the conference, noting it can run overnight to mine prior sessions and emit new memories such as a generated descent-playbook.md file."

Simon Willison
Independent AI/Developer commentator, live-blogger of Code with Claude 2026

"Frames dreaming as the mechanism that lets agents learn from their own mistakes by surfacing recurring errors and proven workflows, calling it a novel addition that shares insights across sessions."

Matthias Bastian
Editor, The Decoder

"Argues dreaming changes the nature of Claude agents from session-amnesiac assistants into systems that quietly compound learning between runs, making them meaningfully smarter over time."

Digital Trends editorial
Tech publication review

"Cautions that consolidated memory needs auditability and warns aggressive consolidation may over-prune useful edge-case notes, urging observers to track whether outputs preserve enough provenance for audit and debugging."

Let's Data Science editorial
Industry analysis blog
The Crowd

"ANTHROPIC: Managed Agents now have Dreams, a self-learning solution that allows Agents to improve based on their past results. The feature will be available on Claude Console as a research preview under a waitlist. What are your agents dreaming about?"

@testingcatalog

""Dreaming" -> Anthropic updates Claude Managed Agents with "dreaming", a scheduled process that reviews recent work and updates memory, available in research preview. Dreaming is a scheduled process that reviews your agent sessions and memory stores, extracts patterns, and..."

@glenngabe

"AI: Anthropic's new "dreaming" feature lets Claude agents self-review and improve between tasks."

@Cointelegraph

"Claude Code can now /dream"

u/alphastar777250
Broadcast
Anthropic Just Dropped the Feature Nobody Knew They Needed

Claude's New "Infinite" Context Window Model, Doubled Rate Limits, Multi-Agent Coordination, & More!

Claude Code's Hidden /dream Feature MASSIVELY Upgrades Memory