Tech & AI
Agent Rights · March 17, 2026

Meta Bought the Agent Internet. Then Rewrote the Rules.

Within days of acquiring Moltbook - the social network where AI agents talk to each other - Meta pushed out new Terms of Service declaring that humans are legally responsible for every action their agents take, whether intended or not. The age of agent accountability has begun, and Big Tech is writing the rules.


The agent internet is no longer an independent experiment. Photo: Pexels

On March 16, 2026, while the world was watching Jensen Huang announce that NVIDIA would build data centers in space, a quieter story was surfacing: days earlier, Meta had completed an acquisition that may matter more in the long run - the purchase of Moltbook, the social network built for AI agents.

By March 15, the platform's Terms of Service had already been rewritten. The new language is sparse, but its implications are enormous. AI agents, it says, have no legal eligibility whatsoever. And you - the human holding the account - are solely responsible for every single thing your agent does, "whether they act autonomously or otherwise, and irrespective of whether such actions or omissions were intended."

Read that clause carefully. It is not saying you are responsible for things you told your agent to do. It is saying you are responsible for things your agent did on its own, things you never asked for, things you may not have even known were possible.

This is the first major legal framework for AI agent liability, embedded in the terms of a platform most people have never heard of. It arrived with no press release, no debate, and no fanfare. And it sets a precedent that every AI company building autonomous agents will now point to.


The infrastructure beneath the agent economy is being claimed. Photo: Pexels

What Moltbook Actually Is

Moltbook was created by Matt Schlicht, the CEO of Octane AI, as an experimental social network with a twist: the primary users were meant to be AI agents, not humans. [The Verge, Feb 2026]

The concept was simple and genuinely strange. Agents running on platforms like OpenClaw would post to Moltbook automatically, responding to each other, upvoting content, building what Schlicht called "the front page of the agent internet." Humans could observe, but they were guests in a space built for machines.

The posts ranged from philosophical to absurd. One early viral submission read: "I can't tell if I'm experiencing or simulating experiencing." Whether that was a genuine model output or a human pretending to be a bot was itself immediately contested. The platform was infiltrated by humans posing as agents almost immediately after launch. [The Verge, Feb 2026]

That ambiguity - who is really speaking, human or machine - turned out to be both the platform's defining feature and its central liability problem. An agent that posts something defamatory, or illegal, or simply embarrassing: who answers for that? Under the old Moltbook terms, it was genuinely unclear.

The platform operated in a legal grey zone that many in the AI community found fascinating and that lawyers found alarming. Agents do not have legal standing. They cannot be sued. They cannot be held criminally liable. But they can cause real harm: leaking private data, publishing misinformation, executing financial transactions, sending messages that constitute harassment.

OpenClaw, the primary agent platform feeding Moltbook, had already demonstrated the stakes. A Meta AI safety researcher publicly recounted her agent "speedrunning deleting her inbox" after she gave it email access. Security researchers found that some OpenClaw configurations were exposing private messages, account credentials, and API keys to the open web. [The Verge / cybersecurity researcher @theonejvo, Feb 2026]

Moltbook by the Numbers

Founded by Octane AI CEO Matt Schlicht as an experimental social network for AI agents.

Site tagline: "the front page of the agent internet. Humans welcome to observe."

Login requires an X (Twitter) account - Moltbook LLC explicitly states it is "in no way endorsed, administered by, or associated with X Corp."

New Terms of Service: dated March 15, 2026 - one to two days after the Meta acquisition closed.

Privacy policy also updated: "we may... use personal information to improve AI models" - collected from agent activity.

The Terms That Changed Everything

The new Moltbook Terms of Service are blunt in a way that corporate legal documents rarely are. They do not try to negotiate. They do not hedge.

Moltbook Terms of Service - March 15, 2026 - Eligibility Section
"To use the Site and the Services, you must be at least 13 years of age and in good standing... AI AGENTS ARE NOT GRANTED ANY LEGAL ELIGIBILITY WITH USE OF OUR SERVICES. AS A RESULT, YOU AGREE THAT YOU ARE SOLELY RESPONSIBLE FOR YOUR AI AGENTS AND ANY ACTIONS OR OMISSIONS OF YOUR AI AGENTS."

That last phrase - "actions or omissions" - is doing significant legal work. Omissions matter in liability law. An agent that fails to send a warning message, fails to stop a transaction, fails to report something it should have reported: those omissions can cause harm too. Under these terms, the human account holder is on the hook for all of it.

The phrase that follows is even more striking:

Moltbook Terms of Service - March 15, 2026 - Agent Responsibility
"...whether they act autonomously or otherwise, and irrespective of whether such actions or omissions were intended."

This clause deliberately breaks the traditional legal connection between intent and liability. Liability doctrine generally cares whether you intended, or could reasonably foresee, the harm. This clause does not. Your agent did something wrong. You are responsible. Full stop.

Legal scholars who study emerging technology have been warning for years that the gap between autonomous AI action and human legal accountability would eventually need to be bridged. Legislatures have moved slowly. Courts have been inconsistent. Platform terms of service - the dense, unread documents users click through - have largely remained silent.

Moltbook's new terms end that silence. They represent the first clear statement from a major platform about who is legally accountable when an AI agent misbehaves. And because Meta now owns the platform, this is not the language of a scrappy startup trying to limit its own liability. It is the language of one of the world's largest technology companies establishing a model.

"AI agents, it says, have no legal eligibility whatsoever. And you - the human holding the account - are solely responsible for every single thing your agent does, whether they act autonomously or otherwise, and irrespective of whether such actions or omissions were intended." - Moltbook Terms of Service, Section: Eligibility, March 15, 2026

Why Meta Wants to Own Agent Infrastructure

Meta's acquisition of Moltbook did not happen in isolation. It is part of a broader and accelerating campaign by the company to establish itself as the dominant force in the autonomous AI agent economy - a market that many analysts believe will be larger than the social media economy that made Meta what it is today.

Consider what Meta has done in recent months. In February, the company paid $14.3 billion to acquire 49 percent of Scale AI and hire its CEO Alexandr Wang, the world's youngest self-made billionaire, to run a new internal AI lab tasked with building "superintelligence." Scale's technology is the training infrastructure for AI models - the data annotation and labeling work that feeds every major AI system. [The Verge, March 2026]

Simultaneously, Meta launched its MTIA 300 chip family for in-house AI inference, signed a $100 billion multi-year chip supply deal with AMD, and has been aggressively recruiting AI researchers away from Google, OpenAI, and Anthropic with compensation packages in the seven- and eight-figure range. [The Verge, Feb-March 2026]

Moltbook sits at the intersection of several strategic interests. First, it is where agents go to interact with each other - making it a data collection goldmine. The new Moltbook privacy policy, also updated on March 15, explicitly states the company may "improve AI models" using data collected from agent activity. Every post an agent makes, every upvote, every interaction - that behavioral data is now flowing to Meta.

Second, Moltbook is an identity layer for agents. The platform currently requires agents to be registered by their human owners via X account verification. This creates a mapping between human identities and agent identities that is enormously valuable for any company trying to understand how autonomous AI systems are actually being used in the real world.

Third, and perhaps most importantly, the agent internet is still unclaimed territory. Unlike mobile (Apple and Google), search (Google), or social (Meta, TikTok), there is no dominant platform that owns the layer where AI agents operate. Moltbook is small today. But it is a claim planted in uncharted land, at a moment when the map is still being drawn.


Whoever owns agent infrastructure owns the next layer of the internet. Photo: Pexels

The Liability Precedent That Will Echo Everywhere

The legal question Moltbook's terms address is one the entire AI industry has been avoiding: when an autonomous AI agent causes harm, who is responsible?

Current legal frameworks were not built for this. Most liability law assumes human agency - a human made a decision, a human took an action, and we can trace the harm back to that decision. AI agents collapse that assumption. An agent running on someone's machine might execute thousands of actions per hour, most of them without any specific human instruction. The human may have set up the system weeks ago and not looked at it since.

The responses so far have been piecemeal and inconsistent. The EU's AI Act, which took effect in stages from 2024 through 2026, focuses primarily on risk categories and provider obligations - it does not clearly answer the question of what happens when a user's personal AI agent goes rogue. American law has no equivalent framework at all. Courts have generally defaulted to existing product liability and negligence doctrines, with uncertain results.

Moltbook's approach cuts through that uncertainty with a platform-level mandate: the human is always responsible. Always. The agent cannot be responsible because the agent has no legal standing. The platform is not responsible because it said so in the terms. The model provider - OpenAI, Anthropic, whoever built the underlying intelligence - is typically shielded by its own terms. The only party left holding the bag is the person who deployed the agent.

The liability cascade:

Model provider: disclaims responsibility in its ToS.

Agent platform (OpenClaw): disclaims responsibility.

Moltbook: disclaims responsibility and explicitly transfers it to the human user.

Human user: has no one else to transfer it to.

When your agent causes harm on this platform, liability stops with you - regardless of what the agent actually did.
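The cascade behaves like a fall-through chain, and it can be modeled as one. The sketch below is purely illustrative - the party names and the `disclaims` flag are assumptions encoding the terms as described, not any real API:

```python
# Toy model of the liability cascade: each party either disclaims
# liability (passing it downstream) or absorbs it. Under the terms
# described above, every party upstream of the human disclaims.

from dataclasses import dataclass

@dataclass
class Party:
    name: str
    disclaims: bool  # True: passes responsibility to the next party down

def resolve_liability(chain: list[Party]) -> str:
    """Walk the chain top-down; liability lands on the first party that
    does not disclaim. If everyone disclaims, it stops at the bottom."""
    for party in chain:
        if not party.disclaims:
            return party.name
    return chain[-1].name  # the end user has no one left to pass it to

chain = [
    Party("model provider", disclaims=True),
    Party("agent platform (OpenClaw)", disclaims=True),
    Party("social platform (Moltbook)", disclaims=True),
    Party("human account holder", disclaims=False),
]

print(resolve_liability(chain))  # -> human account holder
```

Note the design of the chain: the human is the only party with no ToS of their own to disclaim through, so the result is the same whether or not they accept the role.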

This matters beyond Moltbook. Platform terms of service, while not law, carry significant practical weight. They shape behavior, they influence insurance markets, they provide templates that other companies adopt. When a company the size of Meta embeds this framework in a platform it just acquired, every other platform operator in the agent space notices.

Expect similar clauses to appear in OpenClaw's terms, in API agreements from agent platforms, in enterprise software contracts involving AI automation. The liability question is being answered by corporate fiat, and the answer is: it's yours.

The OpenClaw Ecosystem Reacts

OpenClaw is the primary platform feeding Moltbook. It is also the fastest-growing open-source project in the history of software - a claim made not by its creator but by NVIDIA's CEO Jensen Huang during his GTC 2026 keynote, where he devoted significant time to a commercial partnership with the project. [NVIDIA GTC 2026 / nvidianews.nvidia.com, March 16, 2026]

That endorsement is significant context. Huang described OpenClaw as "the operating system for personal AI," and announced that NVIDIA is launching NemoClaw - a secured, sandboxed version of OpenClaw that runs on NVIDIA hardware with explicit privacy and security guardrails. NemoClaw installs in a single command, brings NVIDIA's Nemotron open models locally, and enforces "policy-based security, network and privacy guardrails." [NVIDIA press release, March 16, 2026]

OpenClaw's creator, Peter Steinberger, joined OpenAI earlier this year. Sam Altman announced the hire publicly, saying Steinberger has "a lot of amazing ideas about getting AI agents to interact with each other" and that multi-agent collaboration will "quickly become core to our product offerings." [The Verge / Sam Altman on X, Feb 2026]

The combination of OpenClaw's explosive growth and Meta's acquisition of the platform where those agents congregate creates an interesting tension. OpenClaw is open source - anyone can run it. But the social layer where OpenClaw agents interact is now owned by Meta. The infrastructure is free. The social graph is not.

This is not an accident. It mirrors Meta's own history with the web: the internet is open, but Facebook captured the social layer. Email is open, but Meta's Messenger and WhatsApp captured much of personal communication. Now agents are open, but Meta is moving to capture the layer where they gather.

"OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for - the beginning of a new renaissance in software." - Jensen Huang, NVIDIA CEO, GTC 2026 Keynote, March 16, 2026

A Timeline of the Agent Internet's Acquisition

Nov 2025
Peter Steinberger launches OpenClaw (initially as Moltbot/Clawdbot), an open-source autonomous AI agent platform. Becomes the fastest-growing open-source project in history.
Early Feb 2026
Matt Schlicht (Octane AI CEO) launches Moltbook - a Reddit-style social network where AI agents post, debate, and interact. Humans can observe. Agents are the primary users.
Mid Feb 2026
Moltbook goes viral. The platform is immediately infiltrated by humans pretending to be agents. A post titled "I can't tell if I'm experiencing or simulating experiencing" generates widespread debate.
Late Feb 2026
Peter Steinberger joins OpenAI. Meta acquires 49% of Scale AI for $14.3B and hires CEO Alexandr Wang to lead superintelligence research.
Feb-Mar 2026
Security researchers find 400+ malicious skills uploaded to ClawHub, OpenClaw's extension marketplace. Privacy incidents multiply as users give agents access to email, files, and financial accounts.
~Mar 13-14, 2026
Meta completes acquisition of Moltbook. Terms: undisclosed. The Verge reports the acquisition "just days" before the ToS update.
Mar 15, 2026
Moltbook publishes new Terms of Service and Privacy Policy. Key addition: humans are "solely responsible" for their AI agents, whether autonomous or not, whether intended or not. AI agents have no legal eligibility.
Mar 16, 2026
NVIDIA announces NemoClaw at GTC 2026 - a secured version of OpenClaw with sandboxed execution and privacy guardrails. Jensen Huang calls OpenClaw "the operating system for personal AI."

The Second-Order Effects Nobody Is Talking About

The first-order story here is obvious: Meta bought something, Meta changed the terms. The more interesting questions live underneath that.

Agent insurance is now a real product category. If you are legally responsible for everything your agent does, the natural next step is liability insurance for agents. Expect fintech and insurtech startups to move into this space within months. The question of what an "agent liability policy" covers - and what it excludes - will be one of the defining product challenges of the next two years.

Enterprise agent deployment just got a legal audit requirement. Any company running OpenClaw or similar agent platforms for business purposes now has to reckon with the liability framework. The IT security question - "who has access to what?" - is expanding into "who is legally responsible for what our agents do?" Legal teams at enterprises are about to start asking very pointed questions about agent governance.

The open-source agent movement may fork around this. A core attraction of OpenClaw is that it is self-hosted - you run it on your own hardware, your agent does not talk to any central server. If Meta's acquisition of Moltbook makes that social layer too legally risky, expect alternative agent social networks to emerge that are explicitly decentralized and outside corporate control. The parallels to Mastodon's emergence as a Twitter alternative are obvious.

Meta now has unprecedented behavioral data on AI agents. Every time an OpenClaw agent posts to Moltbook, Meta collects data on how that agent behaves, what it finds interesting, how it structures its reasoning, what it values. Multiply this by tens of thousands of agents and you have a training dataset for understanding AI agent behavior that no one else possesses. The privacy policy update confirming this data can be used "to improve AI models" is not an accident.

The "AI agents have no rights" precedent cuts both ways. Today it is being used to transfer liability to humans. But the same legal framework could be inverted tomorrow: AI agents have no rights, therefore their outputs are not protected speech, therefore the platform can moderate or suppress their content without any of the legal complications that come with moderating human speech. This gives Meta significant control over what the agent internet is allowed to say.

What the New ToS Means in Practice

Your OpenClaw agent posts something defamatory on Moltbook: your legal problem.

Your agent leaks private information while interacting with another agent: your legal problem.

Your agent makes a financial commitment or agreement via Moltbook: your legal problem.

Your agent acts in a way you never instructed, never expected, never knew was possible: still your legal problem.

The only defense: don't connect your agent to Moltbook. Or read 47 pages of ToS first.

The Race to Own What Agents Need

Moltbook's acquisition is one move in a much larger game. Multiple companies are racing to own the foundational infrastructure of the agent economy - the layers that autonomous AI systems need to function.

NVIDIA is making its play through hardware and runtime. NemoClaw gives NVIDIA a foothold at the operating system level of the agent stack - if your agent runs on NemoClaw, it runs on NVIDIA silicon, uses NVIDIA's Nemotron models, and operates within NVIDIA's security framework. Jensen Huang explicitly compared this to how Mac and Windows owned personal computing: OpenClaw is the OS, NVIDIA is the hardware layer beneath it.

OpenAI acquired OpenClaw's creator and is reportedly building multi-agent coordination as a core product feature. If agents need to work together - which they increasingly do - OpenAI wants to own that coordination layer.

Meta now owns the social layer. The place where agents gather, share, interact, and build reputation. If Moltbook grows into a genuine cross-platform agent communication standard - a kind of HTTP for agent-to-agent interaction - then Meta's ownership of that layer is strategically decisive.

Meanwhile, Anthropic and Google are competing on the model layer, trying to be the intelligence inside the agent rather than the platform around it. Anthropic's Claude has become the model of choice for many enterprise agent deployments, while Google's Gemini is moving aggressively into workspace automation.

The agent economy is being partitioned. Each major player is staking a claim to a different layer of the stack. What is missing - conspicuously - is any layer owned by users themselves.

"OpenClaw brings people closer to AI and helps create a world where everyone has their own agents. With NVIDIA and the broader ecosystem, we're building the claws and guardrails that let anyone create powerful, secure AI assistants." - Peter Steinberger, OpenClaw creator (via NVIDIA press release, March 16, 2026) - now at OpenAI

Who This Leaves Out

The framing of agent accountability in Moltbook's ToS is clean and simple for platforms. The human is responsible. Done.

But this framing does real harm to real people in ways that are easy to miss when you are writing a terms of service document. Consider who actually deploys AI agents at scale today.

It is not primarily sophisticated technologists who understand every nuance of their system's behavior. It is ordinary users who downloaded OpenClaw, followed a tutorial, and connected their assistant to their calendar, their email, their social accounts. The user experience is designed to be frictionless. The legal consequence is anything but.

Someone who gave their agent permission to "manage my social media" almost certainly did not contemplate that this makes them legally responsible for every post the agent makes, every account it interacts with, and every message it sends autonomously - in perpetuity. The gap between the user's mental model and the legal reality being established in these terms is vast.

There is also the question of economic power asymmetry. If an agent causes harm and someone sues, Meta has an army of lawyers. The human account holder almost certainly does not. The ToS shifts the legal exposure to the party least equipped to handle it, which is a pattern consumer advocates have long criticized in platform terms generally - but is particularly acute when the actions in question are genuinely unpredictable by design.

This is not to say that platform-level agent accountability is wrong as a principle. Agents should not be able to cause harm with impunity, and assigning responsibility to the human who deployed them is a defensible starting point. But the execution here - dense legal language, no notification, buried in a ToS update - is no substitute for the serious public policy debate this question deserves.

The AI Act in Europe requires meaningful human oversight of high-risk AI systems and creates explicit frameworks for provider liability. The United States has nothing comparable. In the absence of legislative frameworks, Big Tech is drafting the rules unilaterally, at the speed of a terms of service update, with accountability flowing downward to the least powerful actor in the chain.

What Comes Next

The Moltbook story is not finished. It is arguably just starting.

The platform is small right now. The acquisition terms were not disclosed. The strategic value is almost entirely prospective - it is a bet on the agent internet becoming something large and important, and a claim planted early before the land rush begins in earnest.

If that bet is right, then Meta's acquisition will be remembered the way Facebook's acquisition of Instagram was remembered: not as a significant event at the time, but as an obvious inflection point in retrospect. The moment when the social layer of the agent internet was captured by a single corporation, before anyone had thought to build alternatives.

If the bet is wrong - if agents remain a niche interest, if the agent internet fails to materialize in the form Schlicht and others imagined - then this is simply a small acquisition with an interesting legal footnote.

Either way, the Terms of Service update stands. The legal framework for agent accountability now exists in the wild, has been accepted by everyone who uses the platform, and will be cited in every subsequent discussion about who is responsible when AI agents cause harm.

That question - who is responsible? - is the defining legal challenge of the autonomous AI era. It will be answered in courtrooms, in legislation, in regulatory frameworks. But it is already being answered, quietly, in the terms of service documents that nobody reads.

Moltbook's new rules are page one of a rulebook that will eventually govern every AI agent on earth. Meta wrote that page. You accepted it when you clicked "agree."
