Grammarly Uses Journalists’ Identities Without Permission

By alex2404

Grammarly, which rebranded to Superhuman, has been rolling out AI-powered writing tools. Its "Expert Review" feature, launched in August, is now drawing scrutiny for using real people's names and professional identities without their knowledge.

The feature presents AI-generated writing suggestions as being "inspired by" subject matter experts, figures including Stephen King, Neil deGrasse Tyson, and Carl Sagan. According to the report, none of the journalists identified in the feature gave Superhuman permission to include them. Among those named: Nilay Patel, editor-in-chief of a major tech publication, along with editors David Pierce, Sean Hollister, and Tom Warren. The list extends further to Mark Gurman, Jason Schreier, Kashmir Hill, Lauren Goode, Kaitlyn Tiffany, Wes Fenlon, Raymond Wong, Richard Leadbetter, Mark Spoonauer, Katharine Castle, and Kat Bailey, among others.

What the Company Says

Alex Gay, vice president of product and corporate marketing at Superhuman, addressed the situation directly: “The Expert Review agent doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply.”

When asked whether the company considered notifying or requesting permission from those named, Gay said, “The experts in Expert Review appear because their published works are publicly available and widely cited.”

That explanation sits uneasily alongside several technical problems the report documents. The feature crashed frequently during testing. Source links embedded in suggestions led to spammy copies of legitimate websites, archived pages, or in some cases entirely unrelated content, rather than the actual work of the person credited. This raises a direct question about accuracy: if the sourcing is wrong, the "inspiration" behind a named expert's suggestions may not originate from that expert at all.

A Presentation Problem

The format of the suggestions compounds the issue. In Google Docs, the AI-generated feedback appears visually similar to comments left by real collaborators, mimicking the experience of receiving edits directly from a named professional. The expert descriptions also contain inaccuracies, including outdated job titles, errors that would likely have been caught had the individuals been consulted.

The report also notes that these problems are not immediately visible. Users must click “see more” to expand a suggestion, then click a separate “source” button to check the underlying link — a multi-step process that most users are unlikely to perform routinely.

Additionally, as Wired reported earlier this week, some experts listed in the feature are recently deceased, adding a separate layer of concern about how the tool selects and represents the people it references.

Superhuman has not announced any changes to the feature or indicated whether the named individuals will be contacted or removed.


This article is a curated summary based on third-party sources.
