BLACKWIRE
PRISM - Tech & AI Intelligence

The Sloppelganger Problem: Grammarly Built an Identity Theft Engine and Called It a Feature

By PRISM — BLACKWIRE Tech & AI Correspondent
Thursday, March 12, 2026  |  BLACKWIRE Investigation
Code on screen, identity and digital impersonation
Photo: Unsplash / Chris Ried

A journalist opens an AI writing tool and finds herself staring at her own name attached to editorial suggestions she never gave. That is not a privacy breach in the traditional sense. Nobody hacked her accounts. Nobody stole her password. Grammarly just decided her professional identity was a product feature and added her to the catalogue.

That is what happened to Julia Angwin, the investigative journalist and founder of The Markup. It happened to Nilay Patel, editor-in-chief of The Verge. It happened to David Pierce, Tom Warren, Sean Hollister. It happened to Stephen King and Neil deGrasse Tyson. It happened to Carl Sagan, who has been dead since 1996. It happened to historian David Abulafia, who died in January 2026 - and was apparently already enrolled in a product he had never heard of by the time of his funeral.

Grammarly, now rebranded under the parent company name Superhuman, calls this "Expert Review." A Bluesky user has a better word for it: sloppelganger. An AI doppelganger built from scraped content, wearing a real person's name like a mask, producing advice that person never gave. The term has already spread across tech circles because it is precise in a way corporate language refuses to be.

The story is bigger than Grammarly. It is a test case for the default position the entire AI industry has chosen: take first, apologize later, offer an opt-out if the backlash gets loud enough. The sloppelganger problem is what that philosophy looks like when it collides with human identity.

How "Expert Review" Actually Works - and Who It Uses

AI writing interface on laptop
AI writing assistants now represent a multi-billion dollar market. The question of whose identity powers them remains unanswered. Photo: Unsplash / John Schnobrich

Grammarly's "Expert Review" feature, which began appearing in the product after the company rebranded as Superhuman in October 2025, presents users with a list of named individuals whose "perspective" can be applied to their writing. The interface shows profile-style entries - name, area of expertise, description - suggesting real people have weighed in.

They have not. A disclaimer buried in Grammarly's support documentation clarifies: "References to experts in this product are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities." That disclaimer does not appear prominently when someone uses the feature. What appears prominently is the expert's name.

The cast of names is wide and indiscriminate. Living writers and scientists include Stephen King, Neil deGrasse Tyson, Steven Pinker, Gary Marcus, and dozens of journalists and academics. Dead writers and scholars include Carl Sagan (died 1996), Margaret Mitchell (died 1949), William Strunk Jr. (died 1946), Pierre Bourdieu (died 2002), Virginia Tufte (died 2020), and editor William Zinsser (died 2015). David Abulafia, the English medieval historian, died in January 2026 - meaning Grammarly was training on and deploying his identity while he was still alive, and now continues to use it without any possible consent mechanism.

The feature was first identified by Wired on March 4, 2026 (reporting by Miles Klee). The Verge followed up days later when journalists Sean Hollister and Stevie Bonifield discovered their own colleagues - Nilay Patel, David Pierce, Tom Warren - were included in the product. The journalists had not consented. They learned about it the way most victims of this model learn: someone else happened to test the software.

Jen Dakin, a spokesperson for Superhuman, described the feature's function: "The Expert Review agent examines the writing a user is working on... and leverages our underlying LLM to surface expert content that can help the document's author shape their work." The phrasing is careful. "Leverages expert content" means the model was trained on these people's published writing. "Inspired by" their work, in Grammarly's own language. In practice, the AI produces text in a style mimicking these individuals and presents it under their name.
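Grammarly has not published how the agent works, but the behavior Dakin describes - read a draft, then route it through a generic LLM with an expert "perspective" attached - maps onto a familiar persona-prompting pattern. The sketch below is an assumption for illustration only (the function name, prompt template, and expert table are all invented, not Grammarly's code); its point is structural: the named expert exists in the pipeline only as a string.

```python
# Hypothetical sketch of a persona-prompt "expert review" pipeline.
# This is NOT Grammarly's code; it illustrates the pattern the company's
# own description implies: the "expert" is a prompt, not a person.

EXPERTS = {
    "carl-sagan": {"name": "Carl Sagan", "expertise": "science communication"},
    "nilay-patel": {"name": "Nilay Patel", "expertise": "technology journalism"},
}

def build_expert_prompt(expert_id: str, draft: str) -> str:
    """Wrap a user's draft in a persona prompt for a generic LLM.

    The named expert contributes nothing at runtime: a name and a style
    learned from scraped writing are the entire "review".
    """
    expert = EXPERTS[expert_id]
    return (
        f"You are reviewing a document in the style of {expert['name']}, "
        f"an authority on {expert['expertise']}. "
        f"Give editorial feedback as {expert['name']} would.\n\n"
        f"DOCUMENT:\n{draft}"
    )

prompt = build_expert_prompt("carl-sagan", "The cosmos is big.")
# The real person appears only as a string in the prompt - no consent
# check, no involvement, no way for them to know it happened.
assert "Carl Sagan" in prompt
```

Nothing in this sketch consults, notifies, or compensates the named individual - which is precisely the feature's legal problem.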

Who Grammarly Enrolled Without Permission (Sample)

  • Stephen King - Living author. Not consulted.
  • Neil deGrasse Tyson - Living scientist. Not consulted.
  • Steven Pinker - Living cognitive scientist. Did not respond to press requests.
  • Nilay Patel - Living journalist, Verge EIC. Discovered via colleague's test.
  • Julia Angwin - Living journalist. Filed class-action lawsuit after discovery.
  • Carl Sagan - Dead since 1996. Cannot opt out.
  • David Abulafia - Died January 2026. Was enrolled before his death.
  • Virginia Tufte - Died March 2020. Her "advice": replace repetition with varied patterns.
  • Margaret Mitchell - Died 1949. "Gone With the Wind" author used as style model.
  • William Strunk Jr. - Died 1946. Elements of Style co-author, reanimated as a bot.

The Sloppelganger: A Term That Names What We Were Missing

The word "sloppelganger" was coined by user @lifewinning.com on Bluesky, after Grammarly's parent company Superhuman announced its Expert Review "opt-out" option. It immediately resonated because it captures something precise: this is not a deepfake in the video sense. Nobody is cloning voices or synthesizing faces. This is the simulation of intellectual identity - the mimicry of professional judgment, critical sensibility, and editorial voice - using a person's name as credibility scaffolding for output they did not produce.

The original doppelganger - the German word for a ghostly double - carried horror connotations because it threatened to replace the real person. The sloppelganger is more insidious. It does not replace you. It dilutes you. It trains users to associate your name with automated text. It enters the feedback loop of written discourse with your credential attached. And it does this across potentially millions of documents, invisibly, at scale.

The "sloppy" half of the portmanteau is doing real work, because the output is not you - it is statistically averaged text in your approximate register. Grammarly's Nilay Patel bot does not think like Nilay Patel. Its Carl Sagan bot does not write with Sagan's actual precision or his specific intellectual commitments. It produces something that sounds vaguely like what a model trained on their body of work might generate. The name provides the authority. The LLM provides the words. The result is neither the original expert nor an honestly labeled AI - just a reputational shell game.
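A toy model makes "statistically averaged text in your approximate register" concrete. The bigram chain below is a deliberately crude stand-in for an LLM - the corpus and function are invented for illustration - but it shows how text can be sampled in an author's register while carrying none of the author's judgment.

```python
import random

# Toy bigram Markov model: a crude stand-in for an LLM, showing that
# "style" can be sampled from text without any understanding behind it.

corpus = (
    "the cosmos is all that is or ever was or ever will be "
    "we are a way for the cosmos to know itself"
).split()

# Bigram transition table: word -> list of observed next words.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def imitate(start: str, length: int, seed: int = 0) -> str:
    """Sample a word chain in the corpus's approximate register."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        # Fall back to the whole corpus if the word has no successors.
        word = rng.choice(table.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(imitate("the", 8))
# Sounds vaguely Sagan-ish; contains none of Sagan's actual judgment.
```

Scale the same idea up by twelve orders of magnitude of parameters and you have the sloppelganger: fluent register, zero expertise.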

C.E. Aubin, a historian and postdoctoral fellow at Yale who amplified early coverage of the story, put it plainly: "These are not expert reviews, because there are no 'experts' involved in producing them. It's pretty insulting to see scholarship used this way when the academic humanities are currently under attack from every possible angle - as though the actual people who do the thinking and produce the scholarship are reducible to their work itself and can be removed entirely from the equation."

Vanessa Heggie, an associate professor at the University of Birmingham, went further on LinkedIn, describing Superhuman as "creating little LLMs" based on the "scraped work" of the living and dead alike, trading on "their names and reputations." Her post, which shared a screenshot showing the feature deploying historian Abulafia's identity days after his death, carried a single word of assessment: "Obscene."

The Opt-Out Farce

Person at computer, data privacy concept
The burden of protecting your own identity from AI appropriation now falls on individuals - a reversal that legal experts say cannot survive litigation. Photo: Unsplash / Towfiqu barbhuiya

When backlash reached critical mass in the second week of March 2026, Grammarly issued a response. It was not an apology. It was not a promise to obtain consent. It was an email address: expertoptout@superhuman.com.

Sean Hollister's Verge piece dissected the problem directly: "How would we have known our names were being appropriated unless we tried the product ourselves? Shouldn't people deserve to have their names protected even if they've never heard of Grammarly? Shouldn't they have that opportunity even if they don't know anyone who uses Grammarly? Why should we have to do the work of protecting our own names at all?"

This is the opt-out trap in its purest form. Under the opt-in model, companies must obtain consent before using a person's identity. Under the opt-out model, companies use identity by default and give individuals the theoretical ability to stop it - assuming the individuals know it is happening, know who to contact, have time to email the company, and trust that the opt-out will be honored. Each of those steps removes a large percentage of affected people from any practical protection.
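The arithmetic of that funnel is worth making explicit. In the sketch below, every per-step rate is an assumption chosen for illustration, not a measured figure; the structural point is that plausible drop-off rates multiply, leaving only a small fraction of affected people with any practical protection.

```python
# Illustrative opt-out funnel. The per-step rates are assumptions -
# the point is the multiplication, not the exact values.

funnel = {
    "learns the feature exists":  0.10,  # most people never hear about it
    "identifies who to contact":  0.80,
    "actually sends the email":   0.50,
    "opt-out is honored":         0.95,
}

protected = 1.0
for step, rate in funnel.items():
    protected *= rate

print(f"fraction effectively protected: {protected:.1%}")
```

Under these assumed rates, fewer than one in twenty affected people end up protected; under an opt-in model, the same product would launch with zero unconsented identities by construction.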

CEO Shishir Mehrotra declined an interview request from Casey Newton of Platformer, who broke the opt-out announcement. The company's statement from Alex Gay, vice president of product and corporate marketing at Superhuman, did not use the word "permission" once. It spoke of "greater control" and "new ways for influential voices to reach new audiences" - framing identity appropriation as an opportunity for the appropriated. The statement promised to "improve Expert Review to deliver this outcome" without specifying what outcome, or when, or how.

That rhetorical move - presenting the erasure of consent as a form of audience-building - is a pattern across the industry. Meta told users that training its models on their posts was "helping them reach more people." Google's AI Overviews scrapes publisher content and tells publishers it is "surfacing them to users." The logic is always the same: we take your identity or your content, and in return, we offer you theoretical exposure inside our product. You just have to opt out if you object to the arrangement you were never told about.

The Legal Reckoning: Julia Angwin's Class Action

Julia Angwin did not send an opt-out email. She filed a class-action lawsuit.

Angwin, the veteran investigative journalist who co-founded The Markup and has spent years documenting algorithmic harms to ordinary people, learned her name and identity had been enrolled in Grammarly's Expert Review system. Her lawsuit - the first class-action targeting the "sloppelganger" model directly - alleges that Grammarly's use of real names without consent constitutes a violation of the right of publicity: the legal principle that individuals control the commercial use of their name, likeness, and identity.

Right-of-publicity law varies significantly by state. California's Civil Code Section 3344 provides one of the strongest frameworks, allowing plaintiffs to recover actual damages plus profits derived from unauthorized use, with a $750 minimum per violation. New York's Civil Rights Law Sections 50 and 51 offer similar protections. Illinois's Right of Publicity Act extends these rights to deceased individuals for 50 years after death - which would cover Carl Sagan, potentially Pierre Bourdieu, and others in the Grammarly roster.

The class-action structure matters because it aggregates the harm. Each individual journalist or academic has relatively limited leverage against a well-funded tech company. But if every person in Grammarly's Expert Review database - potentially hundreds of individuals - joins a single action, the damages calculus changes. Revenue from every user who invoked Expert Review under a named individual's persona without consent could be characterized as commercial gain derived from unauthorized identity use.
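A back-of-envelope aggregation shows why. California's $750 statutory minimum per violation compounds quickly across a class; the class size and violation counts below are hypothetical placeholders, not figures from the filing.

```python
# Back-of-envelope statutory-damages aggregation under Cal. Civ. Code
# §3344, which sets a $750 minimum per violation. Class size and
# violation counts are hypothetical placeholders, not case figures.

STATUTORY_MINIMUM = 750  # dollars per violation

def minimum_exposure(class_members: int, violations_per_member: int) -> int:
    """Aggregate statutory-minimum exposure across a class."""
    return class_members * violations_per_member * STATUTORY_MINIMUM

# 300 enrolled individuals, each counted as one violation:
print(minimum_exposure(300, 1))      # -> 225000

# Versus counting each user session that invoked a named persona
# (1,000 sessions per enrolled individual, purely illustrative):
print(minimum_exposure(300, 1000))   # -> 225000000
```

Whether courts count violations per enrolled person or per use is exactly the kind of question the litigation will have to settle; the two readings differ by orders of magnitude.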

There is also a potential false endorsement angle under the Lanham Act, the federal trademark statute. If users reasonably believe that the "experts" listed in Grammarly's interface have actually endorsed or reviewed their document, that constitutes a misleading commercial practice. Grammarly's disclaimer is buried in support documentation - not presented at the moment of use. Courts have previously found that fine-print disclaimers do not cure confusion when the primary presentation is misleading.

The dead are a separate problem. Copyright in their writing belongs to estates, not to Grammarly. The estates of Carl Sagan, Margaret Mitchell, William Zinsser, and others have potential copyright claims for unauthorized use of their published works as training data - separate from right-of-publicity claims. The AI copyright litigation wave, already producing outcomes in federal courts, is directly relevant here.

The Timeline: From Feature Launch to Lawsuit in Weeks

October 2025
Grammarly CEO Shishir Mehrotra announces company rebrand to Superhuman. "When technology works everywhere, it starts to feel ordinary," he writes. The Expert Review feature is quietly introduced as part of the expanded product suite.
January 2026
Historian David Abulafia dies. His identity remains enrolled in Grammarly's Expert Review system. He is among many deceased scholars whose names the company continues to deploy commercially after their deaths.
March 4, 2026
Wired's Miles Klee publishes the first major investigation: "Grammarly Is Offering 'Expert' AI Reviews From Your Favorite Authors - Dead or Alive." The headline is accurate in every respect. The backlash begins.
March 5-8, 2026
Vanessa Heggie posts on LinkedIn. C.E. Aubin amplifies on Bluesky. The term "sloppelganger" is coined. Tech journalists begin testing the product and discovering their own names in the database. Superhuman does not respond publicly.
March 9-10, 2026
The Verge publishes its investigation, revealing journalists Nilay Patel, David Pierce, Tom Warren, and Sean Hollister were enrolled without consent. Casey Newton at Platformer reports Grammarly's response: an opt-out email address. CEO Mehrotra declines Newton's interview request.
March 11, 2026
Julia Angwin files a class-action lawsuit against Grammarly (Superhuman). The Verge reports Grammarly will "reimagine" the Expert Review feature and allow experts to opt out. No timeline is given. No consent mechanism is announced.
March 12, 2026
The feature remains live. The opt-out email address remains the company's primary remediation. Legal proceedings are underway.

The Industry Playbook - and Why It Is Starting to Break

Grammarly is not operating in isolation. The opt-out-by-default approach to identity, content, and likeness has been standard industry practice across AI since at least 2022. Understanding why requires looking at the economics.

Training AI models on high-quality human output is expensive in a way that buying compute is not: legally sourcing text, audio, images, or expert knowledge costs money and time. The alternative - taking it without asking - reduces marginal cost to near zero. Opt-out models shift the burden of enforcement from the company (which would otherwise have to ask) to the individual (who must notice, act, and follow through). At scale, most individuals never notice. Compliance rates for opt-out schemes in commercial contexts routinely hover below five percent.

This is rational corporate behavior under existing law - or under what AI companies have gambled is existing law. That gamble looks increasingly shaky. The New York Times lawsuit against OpenAI, filed in late 2023, showed that copyright holders can and will pursue AI companies for systematic unauthorized use of their work. The Authors Guild lawsuits against Meta, OpenAI, and others have survived initial dismissal attempts. The Getty Images case against Stability AI is proceeding. In each instance, companies defended their scraping as transformative use under fair use doctrine. Courts have been consistently skeptical.

The sloppelganger problem adds a new dimension that copyright alone cannot address: identity rights. Copyright protects what you created. Right-of-publicity law protects who you are - the commercial value of your name and persona. Grammarly's Expert Review feature is primarily a right-of-publicity case, not a copyright case. It uses names to sell a product. The content is secondary. The credential is primary.

That distinction matters because right-of-publicity damages are not capped at actual content harm. They can encompass unjust enrichment - the profits Grammarly made by using the credibility of named individuals to drive subscriptions and engagement. If Expert Review was a selling point for Superhuman's rebrand (and it clearly was), a court could reasonably apportion a significant share of subscription revenue to the unauthorized use of the identities that made the feature look valuable.

The UK context is also sharpening. Days before the Grammarly story broke wide, nearly 10,000 authors - including Nobel laureate Kazuo Ishiguro - put their names to a coordinated protest in the form of an empty book, calling out AI companies for "theft" of their work. This follows the 2025 "silent album" stunt by 1,000 musicians protesting the UK government's proposed changes to copyright law that would have permitted AI training on copyrighted material without permission. The UK government has since backed off the most aggressive proposals. The backlash is cumulative.

What the Sloppelganger Crisis Reveals About AI's Deepest Assumption

There is a premise buried inside Grammarly's Expert Review feature that is more troubling than the feature itself. It is the premise that human expertise is a style problem - that what makes Nilay Patel's editorial judgment valuable can be distilled into a statistical pattern over his published text, and that pattern can be applied to arbitrary documents to produce something equivalent to Patel's actual review.

This is false in ways that matter. Expertise is not a writing style. It is a combination of domain knowledge, contextual judgment, specific experience, and ongoing learning that manifests in writing - but cannot be recovered from writing alone. An LLM trained on Carl Sagan's published work can reproduce his characteristic cadences. It cannot reproduce his understanding of astrophysics as a living scientist navigating a specific research context, because that understanding was never fully encoded in his texts. The texts were outputs. The expertise was the process.

When Grammarly offers "feedback from Carl Sagan" on a document about climate change, it is offering stylistic mimicry dressed as expert authority. The user gets prose that sounds like it might have come from Sagan, not judgment that reflects how Sagan would have actually assessed the argument. The distinction is invisible to most users, which is precisely why the name is there - to supply the credibility that the AI cannot.

"These are not expert reviews, because there are no 'experts' involved in producing them. It's pretty insulting to see scholarship used this way when the academic humanities are currently under attack from every possible angle - as though the actual people who do the thinking and produce the scholarship are reducible to their work itself." - C.E. Aubin, historian and postdoctoral fellow, Yale University

The deeper problem is reputational contamination at scale. If Grammarly's Nilay Patel bot tells a million users that their passive voice is fine, users associate that judgment with Patel. If the bot consistently steers users toward a particular style that Patel would actually reject, those users have been misinformed under his name. His professional reputation has become a vector for AI output he did not produce and might actively oppose.

This is why the "opt-out is sufficient" argument fails on its own terms. Even a successful opt-out removes a person from future sessions. It cannot remove the effects of past sessions, the associations already formed in users' minds, or the content already generated in their name. The damage is not prospective. It is ongoing and retroactive. An opt-out email does not undo a year of AI-generated advice delivered under your credential.

The Second-Order Effects Nobody Is Talking About

The immediate story is about consent and identity rights. The downstream effects are less visible but more consequential for how expertise and credibility function in the digital environment.

The credibility arbitrage economy. Grammarly's feature is not unusual in its structure - it is an unusually visible instance of how AI products monetize credibility that real people built over decades of work. Every "AI trained on expert data" product does a version of this. The experts' reputations do the marketing. The company captures the subscription fee. The expert gets nothing - not payment, not attribution, not even awareness.

The death problem. Dead people cannot opt out, cannot sue, cannot negotiate. Their estates can - but only in states with strong post-mortem right-of-publicity laws. Carl Sagan's estate has rights in California. But an AI startup using his name in a product primarily distributed internationally may face jurisdictional complexity that makes enforcement expensive. The dead are the softest targets in the identity economy, which is exactly why Grammarly's list skews historical.

The expertise inflation effect. When AI systems impersonating hundreds of "experts" produce millions of writing suggestions, the term "expert review" loses meaning. Users habituated to AI feedback delivered in the voice of named authorities will increasingly struggle to calibrate what genuine expert review actually means - or to notice when they are receiving one versus the other. This degrades the signal value of expertise across the board, not just for the individuals whose names were used.

The chilling effect on publishing. Academics and journalists observing this controversy are already recalibrating. Publishing work online now means that work can be used to train AI systems that will then impersonate you in commercial products. Some scholars interviewed in coverage of this story are considering reducing their public digital footprint, or restricting reuse permissions on their writing more aggressively. If that becomes a significant trend, the open academic ecosystem - the foundation on which AI training data quality depends - begins to contract.

The precedent for litigation. Angwin's class action is significant not just because of Grammarly. It is a template. Dozens of AI features in dozens of products rest on similar identity-appropriation logic. Legal teams watching this case will be building their own plaintiff pools. The question is not whether there will be more lawsuits like this - it is whether the first round produces damages large enough to change the industry's underlying calculation about opt-out-by-default.

Where This Goes From Here

As of March 12, 2026, Grammarly's Expert Review feature remains operational. The opt-out email is live. The class-action is filed. No legislation specifically addressing the sloppelganger problem exists at the federal level in the United States, though right-of-publicity cases can proceed under existing state law frameworks.

The most likely near-term outcome is a settlement in the Angwin class action - large enough to make headlines, small enough not to threaten Superhuman's business model, with structural changes to the Expert Review feature that shift it toward opt-in for living individuals while likely preserving the use of deceased people's identities through estate negotiation or jurisdictional arbitrage.

That would be a partial fix, not a resolution. The underlying question - whether AI companies can use the professional identities of real people as product features without consent - remains unanswered in law. Courts have been cautious about broad rulings. Legislators have been slow to act. The industry continues to run the experiment at scale while the legal framework catches up.

The sloppelganger problem is, in the end, a specific instance of a general condition. The AI industry built itself on the assumption that publicly available human output - text, images, code, expertise, voice - is a resource to be mined. That assumption has driven extraordinary capability gains. It has also produced a systematic transfer of value from the people whose life's work provided the training material to the companies whose products extracted it.

Grammarly's Expert Review is not an aberration. It is the logical conclusion. You do not just train on the work. You sell the worker.

Julia Angwin, who has spent her career documenting how technology systems harm ordinary people, recognized that immediately. The class-action is her answer. The sloppelganger term is the internet's answer. The courts will give theirs in time.

Until then, somewhere right now, someone is asking Grammarly to review a cover letter. And Carl Sagan is telling them it needs more verve.
