Sunday, March 9, 2025

Signal President Meredith Whittaker calls out agentic AI as having profound security and privacy issues

Signal President Meredith Whittaker warned Friday that agentic AI could come with a risk to user privacy.

Speaking on stage at the SXSW conference in Austin, Texas, the advocate for secure communications referred to the use of AI agents as “putting your brain in a jar,” and cautioned that this new paradigm of computing — where AI performs tasks on users’ behalf — has a “profound issue” with both privacy and security.

Whittaker explained how AI agents are being marketed as a way to add value to your life by handling various online tasks on your behalf. For instance, an AI agent could take on tasks like looking up concerts, booking tickets, adding the event to your calendar, and messaging your friends that it’s booked.

“So we can just put our brain in a jar because the thing is doing that and we don’t have to touch it, right?” Whittaker mused.

Then she explained the type of access an AI agent would need to perform these tasks: access to our web browser and a way to drive it, our credit card information to pay for the tickets, our calendar, and our messaging app to send the text to our friends.

“It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases — probably in the clear, because there’s no model to do that encrypted,” Whittaker warned.

“And if we’re talking about a sufficiently powerful … AI model that’s powering that, there’s no way that’s happening on device,” she continued. “That’s almost certainly being sent to a cloud server where it’s being processed and sent back. So there’s a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data,” Whittaker concluded.

If a messaging app like Signal were to integrate with AI agents, it would undermine the privacy of your messages, she said. The agent would have to access the app to text your friends and also pull data back to summarize those texts.

Her comments followed remarks she made earlier during the panel on how the AI industry had been built on a surveillance model with mass data collection. She said that the “bigger is better AI paradigm” — meaning the more data, the better — had potential consequences that she didn’t think were good.

With agentic AI, Whittaker warned, we would further undermine privacy and security in the name of a “magic genie bot that’s going to take care of the exigencies of life.”
