In brief
- New stylometric studies identify recurring patterns in AI prose, including predictable rhythm, uniform sentiment, and low lexical variety.
- A Washington Post analysis of 328,744 ChatGPT messages reveals heavy reliance on emojis, favorite words, and the cliché pivot of “Not just X, but Y.”
- Vocabulary tells evolve quickly, but structural habits such as symmetry, neatness, and negative parallelism persist across model generations.
Is everything written by AI these days? Is this article?
The proliferation of large language models has prompted a new, wary literacy: people can now read a paragraph and wonder who—or what—wrote it. That anxiety exists for good reason.
Recent studies continue to show that the ever-growing flood of machine-generated prose differs from human writing in measurable, often not-so-subtle ways, from specific word choices to easily identifiable structural tics. These patterns matter because they affect far more than school essays and research theses; they shape corporate communications, journalism, and interpersonal email in ways that can muddle trust or authenticity.
Researchers surveying stylometric detection techniques have found consistent, measurable patterns in lexical variety, clause structure, and function-word distributions: a statistical fingerprint that persists across tasks and prompts. While these tells shrink with every model generation (OpenAI just fixed its overreliance on em dashes, for instance), the difference between AI slop and human-written prose is still large enough to inform how readers and editors approach suspiciously polished text.
A recent Washington Post analysis of 328,744 ChatGPT messages reinforces this point with real-world data. It found that the model leans heavily on emojis, a narrow palette of favorite words, and everyone's favorite tell, negative parallelism: "It's not X, it's Y" or "It's less about X and more about Y."
The Post also warned against overconfidence: none of these traits prove AI authorship; they only raise the probability. Still, when a piece of writing exhibits several of them, the signal gets harder to ignore.
Here are the five strongest signals that a text may have been machine-generated, each anchored in current research.
The 5 most common AI tells
1. Negative parallelism and oversimplified contrast
AI overuses the neat, dramatic hinge of “It’s not X, it’s Y,” and its cousin, “not just X, but Y.” These structures create the illusion of insight while supplying very little. Stylometric studies show that LLM outputs tend toward balanced, formulaic clause structures rather than the uneven, intuitive rhythms human writers use. In the Post’s dataset, variations of “not just X, but Y” alone appeared in roughly 6% of all July messages—an astonishing percentage for a single rhetorical tic.
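To see how a tic like this becomes a countable statistic, here is a minimal sketch of the kind of pattern counting such an analysis could use. The regexes below are illustrative guesses, not the Post's actual method, and the phrase variants are assumptions drawn from the examples in this article.

```python
import re

# Illustrative patterns for "negative parallelism" phrasing.
# These are rough approximations, not any study's real pattern set.
PATTERNS = [
    re.compile(r"\bnot just \w+[^.]{0,40}?,? but\b", re.IGNORECASE),
    re.compile(r"\bit'?s not \w+[^.]{0,40}?,? it'?s\b", re.IGNORECASE),
    re.compile(r"\bless about \w+[^.]{0,40}? and more about\b", re.IGNORECASE),
]

def negative_parallelism_rate(messages):
    """Fraction of messages containing at least one matching construction."""
    if not messages:
        return 0.0
    hits = sum(1 for m in messages if any(p.search(m) for p in PATTERNS))
    return hits / len(messages)
```

Run over a large corpus, a counter like this is how a single rhetorical habit can be pinned to a hard number such as the Post's roughly 6% figure.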
2. Over-neat structure and conspicuously consistent rhythm
LLM-generated text often reads like it was written by someone who revises compulsively but never improvises. Paragraphs follow textbook patterns, transitions are frictionless, and the cadence is almost mathematically even, according to a recent analysis in Nature. Human writing—even careful writing—typically reflects digressions, interruptions, tonal shifts, and asymmetric pacing. Stylometric work comparing LLM outputs to human short stories finds that models exhibit far narrower variance in sentence length and syntactic shape.
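The "narrow variance in sentence length" finding can be illustrated with a few lines of code. This is a simplified sketch of the idea, not the method used in the studies: split on sentence-ending punctuation, then compare the spread of sentence lengths to their average.

```python
import re
import statistics

def sentence_length_stats(text):
    """Return (mean, standard deviation) of sentence lengths in words.

    A low standard deviation relative to the mean is one crude sign of
    machine-even rhythm. Returns None if there are too few sentences.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return None
    return statistics.mean(lengths), statistics.stdev(lengths)
```

A human paragraph of short jabs and long digressions scores a high deviation; metronomic prose scores near zero.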
3. Smoothed-out emotional tone and overly courteous hedging
AI tends to sound friendly in a way no adult actually sounds unless they work in HR or customer support. Phrases like “It’s understandable that…” or endings that gently summarize everything (“Ultimately…”) show up with unnatural regularity. Quantitative reviews of detection methods note that LLM-generated prose exhibits more uniform sentiment and fewer abrupt emotional modulations than human text.
4. Vague abstractions and evolving "safe" vocabulary
Models rely heavily on generic nouns ("ecosystem," "framework," "dynamic") and verbs like "leverage," "unlock," or "navigate" when they run out of specifics. Studies consistently show lower lexical diversity and heavier nominalization in AI text. The Washington Post and Nature analyses also found that certain AI clichés aren't static: the infamous "delve" has largely faded, replaced by new favorites like "core" and "modern." This matters because vocabulary tells evolve quickly; structure is more reliable than any fixed word list.
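"Lower lexical diversity" has a standard textbook measure: the type-token ratio, the share of words in a text that are unique. The sketch below illustrates that general idea only; it is not the metric any particular study used, and real analyses normalize for text length.

```python
import re

def type_token_ratio(text):
    """Unique words divided by total words: a basic lexical-diversity score.

    Lower values mean more repetition. Crude on purpose: longer texts
    naturally score lower, so comparisons should use same-length samples.
    """
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return len(set(words)) / len(words)
```

Prose that keeps reaching for the same "safe" nouns and verbs drags this number down, which is the signal the stylometric surveys report.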
5. Balanced clauses and conspicuously careful phrasing
LLMs love symmetry: “While X is true, Y is also important,” or “Whether you’re a beginner or an expert…” These structures feel safe because they avoid commitment. Stylometric studies show that AI text overuses certain function-word patterns and clause constructions at rates that differ sharply from human baselines. Humans tend to be either more abrupt or more discursive; machines aim for diplomatic balance every time.
By the way, most of this article was written by AI.
Source: https://decrypt.co/348923/5-biggest-tells-something-written-ai


