Writing assistant Grammarly has scrapped a controversial artificial intelligence feature that mimicked the style and authority of real‑world writers and experts, after a fierce backlash and a class‑action lawsuit in the United States. The company has apologised and pledged to rethink the tool, which critics say crossed a line from AI assistance into commercial exploitation of people’s names and reputations.
The now‑withdrawn AI feature, called Expert Review, was promoted as a way for users to receive higher‑level feedback on their work, not just basic grammar and spell‑check suggestions. Instead of a generic assistant, it presented comments and edits framed as coming “from the perspective” of named authors, journalists and subject‑matter experts.
Product descriptions said the feature would offer “subject‑matter expertise and personalised, topic‑specific feedback” designed to meet “rigorous academic or professional standards.” Behind the scenes, the system analysed a user’s text, matched it to certain fields, and then generated AI feedback associated with particular real‑world figures in that area.
Grammarly and its parent company maintained that the feedback was only “inspired by” widely available work and that no one’s actual writing was reproduced word‑for‑word. However, the interface and language used in the product led many to feel that their professional judgment and “voice” had been turned into a commercial AI persona without their consent.
The controversy intensified when authors and journalists realised their names were being used inside the product without prior agreement. Investigative journalist Julia Angwin, founder of non‑profit newsroom The Markup, discovered Grammarly was offering editing suggestions that appeared to channel her voice and experience.
Angwin later became a lead plaintiff in a class‑action lawsuit and issued a sharply worded public statement. “I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard‑earned expertise,” she said, capturing a sentiment that resonated widely among writers.
Other journalists expressed similar anger at seeing their identities effectively turned into product features. Technology reporter Casey Newton criticised the decision to monetise people’s names in this way, saying the company had “built a list of real people, trained their models to generate plausible advice on our behalf, and put it behind a subscription,” calling it “a deliberate choice to monetize the identities of people without them, and it sucks.”
Gaming journalist Wes Fenlon, who also appeared in the expert list, took issue with the company’s initial response that experts could email to opt out. He described that approach as “laughably” inadequate, arguing that true consent should be explicit and sought ahead of time, rather than offered as a patch once a controversial product is already live.
Reports also indicated that high‑profile authors such as Stephen King were among the names users could see referenced by the feature. For many in the creative community, this reinforced the sense that long‑established reputations and distinctive styles were being leveraged to sell an AI product with little transparency or control for those involved.
The backlash quickly moved from social media to the courts. A class‑action lawsuit filed in the US accuses Grammarly’s parent company of using writers’ names and professional personas as a commercial asset without permission.
The complaint argues that attaching specific identities to AI‑generated commentary, particularly within a paid service, goes beyond generic training on publicly available text. Instead, it frames the feature as an unauthorised endorsement that may infringe publicity rights, mislead consumers and undermine the professional standing of the writers involved.
The suit claims that the value of the case exceeds $5 million, reflecting alleged economic harm and reputational damage. It also warns that a precedent allowing products to present AI advice as if it came from real people could reshape expectations around consent and compensation in publishing, journalism and other creative fields.
Julia Angwin’s lawyer, civil‑rights attorney Peter Romer‑Friedman, has said the response from other writers was immediate. According to his public comments, many authors and journalists reached out after the case was filed, suggesting a broader anxiety about AI systems that appropriate the value of a person’s name and expertise without clear agreements.
Initially, Grammarly and its parent firm suggested that named experts could request removal from the system. But that limited opt‑out offer did little to ease concerns as criticism and legal pressure intensified.
Superhuman CEO Shishir Mehrotra later issued a more detailed statement acknowledging that the company had misjudged the impact of the feature. “Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices,” he said, adding: “This kind of scrutiny improves our products, and we take it seriously. I want to apologise and acknowledge that we’ll rethink our approach going forward.”
Mehrotra confirmed that Grammarly would disable Expert Review while it “reimagines” the concept. Any future version, he said, would be designed to “make it more useful for users, while giving experts real control over how they want to be represented or not represented at all.”
The company has also promised to create a formal process for experts to decline participation in such features. In broader public statements, Grammarly has reiterated that it aims to “build AI responsibly and respect the concerns of writers and creators,” signalling that the episode is likely to influence its next generation of AI tools.
Technology firms have for years relied on huge collections of books, articles and online content to train AI models, often arguing that this falls under existing copyright or fair‑use rules. Many authors and artists already object to that practice, but Grammarly’s Expert Review raised an additional set of concerns.
Unlike a generic AI assistant that draws on large training datasets but speaks only under the company’s own brand, Expert Review presented AI suggestions explicitly “from the perspective” of named individuals. For critics, this crossed a line towards impersonation: a system that looked and sounded as if an identifiable writer or journalist was personally endorsing a particular edit or style choice.
Some observers compared the feature to an AI “stand‑in” that could say things a person would never actually endorse, yet still trade on their identity. Others accused it of effectively copying or “plagiarising” the hard‑won voices of prominent writers, compressing years of craft and reputation into a convenience feature available on demand.
This also touches on a larger trend in AI design, where companies are increasingly tempted to frame their products around recognisable faces or names. As AI agents become more personalised and more closely associated with individuals, the legal and ethical distinctions between inspiration, imitation and impersonation are likely to be tested more frequently.
Grammarly, founded in 2009 as a grammar and spell‑checking tool, has spent the last few years evolving into a comprehensive writing assistant with drafting, rewriting and tone‑adjustment features powered by generative AI. Turning off Expert Review does not change those broader capabilities, but it does mark a significant retreat on how far the company is willing to go in tying AI output to real‑world identities.
If Grammarly attempts to revive the idea in future, industry watchers expect that any such feature will have to be genuinely opt‑in, with clear agreements on how an expert’s name and style can be used, and what rights they have over their AI‑generated “voice”. That would make the system look more like traditional endorsements or partnerships, rather than a silent appropriation of reputations.
Beyond Grammarly, the dispute feeds into a wider reckoning in creative industries, where authors, artists, actors and musicians are challenging AI tools that can reproduce or mimic their work and public personas. As regulators and courts begin to address cases like this one, their decisions will help define how far companies can go in turning real people into AI‑driven features and who gets to share in the value that creates.
For everyday users, the removal of Expert Review may not dramatically change how they use Grammarly in the short term, since its core grammar and writing tools remain in place. But the backlash has pushed a deeper question into the open: in an era when AI can convincingly simulate human voices and styles, who gets to decide when a name becomes a feature, and at what point does imitation become impersonation?