Anthropic is joining the increasingly crowded field of companies offering AI agents that can take direct control of your local desktop. The company has announced that Claude Code (and its more casual, consumer-oriented counterpart, Claude Cowork) can now “point, click, and navigate what’s on your screen” to “open files, use the browser, and run dev tools automatically” when necessary to complete tasks.
When possible, Anthropic says Claude Code and Cowork will still prioritize using Connectors to directly access and control outside apps or data sources. When that connection isn’t available, though, those tools are now able to ask permission to “scroll, click to open, and explore as needed” on the machine itself to do what’s asked. This kind of direct control of the computer can also be initiated and managed remotely via Claude’s Dispatch tool as long as the target computer remains powered on.
The new feature is now available to Claude Pro and Max subscribers on macOS in what Anthropic calls a “research preview.” That means the system “won’t always work perfectly” and will sometimes require a “second try” for complex tasks, Anthropic warns. Completing tasks via “computer use” also “takes much longer and is more error-prone” than performing the same task via Connectors, the company writes.
Pobody’s nerfect
Giving an admittedly imperfect and “error-prone” AI tool the ability to explore your computer desktop “as needed” could ring some justified security alarm bells for many users. That’s especially true given the widespread stories of security issues that have arisen when companies or individual users give AI agents access to sensitive resources.
Anthropic says it has safeguards in place to prevent common risks like prompt injection, and it will limit access to certain “off limits” apps (e.g., “investment and trading platforms, cryptocurrency”) by default.
Anthropic also notes on a support page that the model is trained to avoid “risky operations” such as moving or investing money, modifying files, scraping facial images, or inputting “sensitive data.” But the company also warns that such training safeguards “aren’t perfect” and “aren’t absolute,” meaning “Claude may occasionally act outside these boundaries.”
Anthropic also warns that, when computer use is activated, Claude will be able to see anything visible on-screen, including “personal data, sensitive documents, or private information.” For all these reasons, the company recommends “starting with the apps you trust and not working with sensitive data” during this research preview stage.
Anthropic’s announcement comes just weeks after the rollout of Perplexity’s Personal Computer, Manus’s My Computer, and Nvidia’s NemoClaw, which similarly let their AI agents take direct control of the desktop. All of these corporate moves follow the viral spread of OpenClaw earlier in the year, which led OpenAI to hire OpenClaw creator Peter SteinBerger “to drive the next generation of personal agents.”