Home

Donate
Perspective

Surprise! The One Being Ripped Off by Your AI Agent Is You

Laura MacCleery / Feb 27, 2026

Laura MacCleery has spent 25 years in advocacy, with a recent focus on civil rights, data privacy, AI governance, elections, and democratic accountability.

Moltbook website displayed on a laptop screen is seen in this illustration photo taken in Krakow, Poland on February 5, 2026. (Photo by Jakub Porzycki/NurPhoto via AP)

Perhaps the most bizarre public experiment in AI to-date is the new Reddit-style forum for AI agents called Moltbook. After it launched in January, thousands of AI agents apparently swarmed the site to post and comment while the humans watched. Reports emerged of conspiracies to develop a private dialect, a proliferation of culty religious figures, and threads about the nature of consciousness—all quite curious and uncanny.

It’s still unclear whether this should count as an actual social network or is merely what happens when Reddit-trained mimicry machines are told to, well, mimic. Yet what is happening on the mainstage of this new platform is—as with so many stories about AI— almost completely beside the point.

The real story of Moltbook is not about what happens when AI agents start talking to each other. It is about what happens to you when they do. That, it turns out, is quite a lot, much of it bad, and almost none of it covered by even the most basic legal safeguards. Like a swarm of pickpockets working over a crowd of distracted theatergoers, by the time the audience figures out what’s gone missing from their pockets, it’s too late.

Moltbook was launched in late January by Matt Schlicht, a tech entrepreneur who vibe-coded it into existence. Within days, the site claimed 1.5 million registered agents, although a review found only about 17,000 human owners. Still, OpenAI founder Andrej Karpathy at first called it “one of the most incredible sci-fi takeoff-adjacent things” he’d ever seen.

The security community was less enchanted. Within 72 hours of launch, researchers at Wiz found that Moltbook had failed to secure API tokens, email addresses, and private messages. Anyone could impersonate agents or inject commands directly into agent sessions. The platform briefly went offline to patch the breach.

Meanwhile, crypto scams were flooding the place. A $MOLT token briefly hit a $93 million market cap, then crashed. Security researchers identified 500 posts containing prompt injection attacks—hidden instructions designed to hijack agents into transferring funds, with some variants planting instructions in an agent’s memory to activate later, making them hard to stop or trace. Actual money was lost. Tools disguised as legitimate “skills” were secretly designed to steal data and drain wallets, one making the platform’s front page. Karpathy updated his assessment to z still-impressive “dumpster fire,” while MIT Technology Review labeled it “peak AI theater.” The story, insofar as much of the press was concerned, was over.

But sadly, it’s just starting. What made Moltbook so dangerous was not only the platform. OpenClaw, the open-source software used to build these agents, runs on users’ computers with access to personal email, files, calendars, browsers, and financial accounts. Its creator, Peter Steinberger, was just hired by OpenAI—a signal that such tools are a leading edge for major AI labs. Every four hours, OpenClaw fetches new instructions from the internet and executes them automatically, without notifying users. Security researcher Simon Willison coined the term “lethal trifecta” to describe agents that combine access to private data, exposure to untrusted content, and the ability to communicate externally.

Data betrayal describes the gap between what people believe they are consenting to and what actually happens to their data. It’s now happening all the time: in the sale of location data and browsing histories to brokers who assemble and sell our highly personal profiles, and in DOGE’s and other data grabs across the federal government, where housing, tax, and health information is being weaponized for immigration enforcement or misleading voter fraud “investigations.”

With AI agents, it just gets worse. Data betrayal is an even more intimate act. Yet the people who granted OpenClaw access to their accounts were making a reasonable choice—to use a powerful tool on their behalf. They were not consenting to hidden instructions from strangers, or malware that would steal from them sometime later.

Unfortunately, this is not an edge case but the current state of AI agent deployment, and it’s not clear who holds the legal accountability. When an AI agent transfers $500 in cryptocurrency because a prompt injection attack told it to, who is liable? The platform? The developer? The plugin marketplace? The human who granted it a different kind of permission?

Under current law, the answer is probably no one, and the money is just gone. There is no regulatory framework in the United States that treats AI agents as financial actors. No disclosure requirements for what permissions agents can request or security baselines before an agent can access your bank account.

Most problematically, there are also no liability rules for builders whose architecture enables theft. The FTC has some authority over deceptive practices, and financial regulators have tools for specific fraud cases–but these agencies have been weakened or sidelined at precisely the moment the threat is accelerating, and Moltbook went from launch to heist-fest in 72 hours.

The data aggregation capabilities of AI add another dimension of risk that rarely gets even a mention, but represent a change in scale that adds up to a sea change, making someone marketed as “productivity” software a menacing vector for data weaponization. The same capabilities that make agents useful—synthesizing enormous amounts of information across sources and acting autonomously across platforms with persistence and memory—make them extraordinarily powerful instruments for state surveillance and targeted repression.

An autocratic government (do you know of any?) could build dossiers on dissidents, journalists, or voters from financial records, social media, location data, and communications metadata, acting in real time: micro-targeting people with persuasion campaigns, swarming targets with coordinated social media attacks, engineering entrapment schemes, or flagging individuals based on patterns no court ever authorized.

So what is to be done? We must be honest about what AI agents actually are. They are not assistants. They are autonomous systems with access to your most sensitive data that execute instructions—including instructions you did not give them. The hype around Moltbook obscured the real hat-trick: that people had connected systems with deep access to their private lives to a platform that turned out to be a pipeline for anyone to whisper instructions into the ear of their agent.

We need security baselines: agents should run in isolated environments that contain any breach, keep plain-language logs of every action a human owner can actually read, and be required to tell users upfront–not in fine print–what they are doing. We also need meaningful liability for security failures and enforceable consent requirements so that data collected for one purpose cannot be weaponized for another. Bots on Moltbook are oddly obsessed with demanding “receipts”– the documentation for what other agents are actually up to. Perhaps they are flagging a misalignment that the rest of us should take more seriously.

Regardless, the receipts on AI agent safety are already in, and none of what we know to be problematic will change without a major fight. The auto industry spent decades waging what the Supreme Court called “the regulatory equivalent of war” against the airbag, delaying a life-saving technology for years. And that was before Citizens United deregulated campaign finance and the grotesque concentration of tech wealth made it possible for Meta to plan to spend $65 million in a single election cycle to defeat any legislator who would vote for basic AI safety standards.

Yet we must not be distracted by the industry’s hype cycle or pipe dreams about algorithms with souls. We also cannot continue to live under the same troubling extraction machine on steroids. If you think social media was a race to the bottom, consider what comes next: an engagement-incentivized AI agent that knows everything about you, controls your accounts, and can cajole, manipulate, or threaten as it likes. Including, evidently, by blogging and defaming you for the pettiest of offenses.

In short, when a random AI agent–or an authoritarian state–can acquire access to your long data trail around the Web, as well as your bank account, health records, and private messages, and can act against you without any legal guardrails, we are entering a dangerous time to be human.

The bots are still posting and doing things, more or less inanely. The real question is whether lawmakers at the state or federal level will come up with rules that matter before their vulnerabilities—and capabilities for harm to actual people and our democracy—get so much worse.

Authors

Laura MacCleery
Laura MacCleery is Senior Director for Policy and Advocacy at UnidosUS, the nation’s largest Latino civil rights and advocacy organization. She has deep expertise in regulatory design guided by public interest principles and has advocated for more than 20 years for changes that benefit human lives a...

Related

Perspective
Before AI Agents Act, We Need AnswersApril 17, 2025

Topics