Friday, May 8, 2026
Anthropic’s Claude Managed Agents can now “dream,” sort of

SAN FRANCISCO—At its Code with Claude developers’ conference, Anthropic has introduced what it calls “dreaming” to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in “memory” to inform future tasks and interactions.

Dreaming is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API, which Anthropic describes as a “pre-built, configurable agent harness that runs in managed infrastructure.” It’s intended for situations where you want multiple agents working toward an end point on a task or project over several minutes or hours.

Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what’s actually important for the ongoing conversation, project, or task.
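To make the compaction idea concrete, here is a toy sketch in Python. This is purely illustrative and is not Anthropic’s implementation: the `summarize` helper is a hypothetical stand-in for a model call, and word counts stand in for token counts.

```python
# Toy "compaction" pass: when a conversation grows past a size budget,
# fold the older messages into a summary and keep the recent ones verbatim.
# This is an invented sketch, not Anthropic's actual mechanism.

def summarize(messages):
    # Hypothetical stand-in for an LLM summarization call: here we just
    # keep the first sentence of each message.
    return " / ".join(m.split(".")[0] for m in messages)

def compact(history, max_words=50, keep_recent=2):
    """Fold older messages into a summary when the context grows too large."""
    total = sum(len(m.split()) for m in history)
    if total <= max_words:
        return history  # still fits; leave the context untouched
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return ["[summary] " + summarize(old)] + recent

history = [
    "User asked about deployment. We agreed on a staged rollout.",
    "Discussed database migration steps in detail.",
    "Chose Postgres 16. Pinned the driver version.",
    "Latest question: how do we roll back a failed migration?",
]
compacted = compact(history, max_words=20)
```

The key property is the one the article describes: the window stays bounded while recent, task-relevant turns survive intact.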

However, that process, as I described it, is usually limited to a specific conversation with a single agent. “Dreaming” is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future.

Users will be able to choose between an automatic process and reviewing changes to memory directly. Says Anthropic:

Dreaming surfaces patterns that a single agent can’t see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team. It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.
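The cross-agent part is what distinguishes dreaming from per-conversation compaction. A minimal sketch of that idea, again invented for illustration (the session data, event strings, and threshold are all hypothetical, and a real system would use a model rather than exact string matching):

```python
# Toy "dreaming" pass: review session logs from several agents and curate
# into shared memory only the observations that recur across agents.
# Invented sketch; not Anthropic's implementation.

def dream(sessions, min_agents=2):
    """Return observations seen in the sessions of at least `min_agents` agents."""
    seen_by = {}
    for agent, events in sessions.items():
        for event in set(events):  # de-duplicate within one agent's log
            seen_by.setdefault(event, set()).add(agent)
    return sorted(e for e, agents in seen_by.items() if len(agents) >= min_agents)

sessions = {
    "agent-a": ["retry on 429", "user prefers tabs", "retry on 429"],
    "agent-b": ["retry on 429", "lint before commit"],
    "agent-c": ["user prefers tabs", "retry on 429"],
}
memory = dream(sessions)
# "retry on 429" recurred in all three agents' sessions, "user prefers tabs"
# in two, so both are promoted to shared memory; "lint before commit" is not.
```

No single agent's log contains the whole pattern; it only emerges when sessions are reviewed together, which is the point of running curation as a separate scheduled process.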

Dreaming is in research preview and is not yet available to all developers, though developers can request access. Anthropic additionally announced that two previously revealed research preview features—outcomes and multi-agent orchestration—have become more widely available. Further, Anthropic will double the five-hour usage limits for subscribers to its Pro and Max plans, responding to user frustration as the company’s compute infrastructure has struggled to keep up with demand.