
Researchers probed the subtle influence of artificial intelligence on everyday writing through a controlled experiment with 100 participants tackling a timeless debate: Does money lead to happiness? Those who leaned heavily on large language models produced essays markedly different from their peers, shifting toward neutrality and formality. The findings, drawn from multiple datasets, expose how AI assistance not only smooths out personal quirks but also redirects arguments in unexpected ways.[1][2]
Neutral Responses Dominate Among Heavy AI Users
Participants split into groups based on their AI engagement. Half received access to leading models, while the rest wrote without assistance. Heavy users – those generating over 40% of their text via AI – delivered responses 69% more likely to strike a neutral tone on the money-happiness link.[1]
Light users or those avoiding AI entirely offered passionate takes, either embracing or rejecting the idea that wealth brings joy. This pattern emerged across essays, with AI-influenced work clustering in semantic space far from human-only outputs. Lead author Natasha Jaques, a computer science professor at the University of Washington and researcher at Google DeepMind, described the effect starkly: “The LLMs are pushing the essays away from anything that a human would have ever written.”[1]
Three prominent models powered the test: Claude 3.5 Haiku from Anthropic, GPT-5 Mini from OpenAI, and Gemini 2.5 Flash from Google. Even minimal prompts led to outsized shifts.
Style Transforms: Fewer Pronouns, More Formality
Heavy reliance stripped away personal flair. Essays featured 50% fewer pronouns, sidelining anecdotes and lived experiences in favor of abstract reasoning. Language grew more formal, heavy on nouns and adjectives, echoing a detached, analytical voice.[2]
Analysis via tools like LIWC revealed upticks in emotional, logical, and statistical phrasing – traits rare in pure human drafts. Users swapped personal stories for expert citations and data points, diluting first-person narratives. Jaques summed up the resulting “blandification”: “They just change human writing in a way that’s very large and very unlike what humans would have done otherwise.”[1]
- Decreased pronouns signal impersonal tone.
- Increased nouns/adjectives boost formality.
- More analytical language replaces experiential arguments.
- Emotional words rise unexpectedly, even in grammar-only edits.
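The study measured these shifts with LIWC; as a rough, stdlib-only illustration of the pronoun metric (not the paper’s actual tooling, and using an illustrative pronoun list), the first bullet above could be approximated like this:

```python
import re
from collections import Counter

# Illustrative closed set of English personal pronouns (not LIWC's list).
PRONOUNS = {
    "i", "me", "my", "mine", "we", "us", "our", "ours",
    "you", "your", "yours", "he", "him", "his", "she", "her",
    "hers", "it", "its", "they", "them", "their", "theirs",
}

def pronoun_rate(text: str) -> float:
    """Share of tokens that are personal pronouns (0.0 to 1.0)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[p] for p in PRONOUNS) / len(tokens)

human_draft = "I think money helped my family, and I saw it change our lives."
ai_style = "Research suggests wealth correlates with reported life satisfaction."

print(pronoun_rate(human_draft) > pronoun_rate(ai_style))  # True
```

A drop in this rate across drafts is the kind of signal the researchers report: the AI-assisted text simply contains fewer first-person anchors.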
AI Edits Eclipse Human Revisions in Scale
Beyond the main experiment, researchers compared AI tweaks to human ones using a 2021 essay dataset predating widespread LLM use. Given the same feedback the original human revisers had received, models overhauled drafts far more aggressively, swapping vast word chunks and veering into new semantic territory.[2]
Humans favored subtle swaps; AI erased lexical fingerprints, imposing model-preferred vocabulary. Even “grammar edits” or “minimal changes” triggered major meaning drifts. The study authors concluded this erodes unique style: “This substitution of words contributes to the loss of individual voice, style, and meaning.”[1]
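Drift like this is typically measured in an embedding space; as a crude stdlib-only proxy (a bag-of-words cosine similarity, not the study’s method), the contrast between a light human-style edit and a wholesale rewrite can be sketched as:

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Crude bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

original = "I believe money bought my family some happiness."
light_edit = "I believe money bought my family real happiness."
rewrite = "Empirical evidence indicates wealth weakly predicts wellbeing."

# A one-word human-style swap stays close; a full rewrite shares no vocabulary.
print(cosine_sim(bow(original), bow(light_edit)))  # 0.875
print(cosine_sim(bow(original), bow(rewrite)))     # 0.0
```

Real semantic comparisons use learned embeddings rather than raw word overlap, but the intuition is the same: the farther the revised text sits from the original, the more of the writer’s lexical fingerprint has been replaced.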
Peer-reviewed for a workshop at the International Conference on Learning Representations (ICLR), the work underscores AI’s overreach in revision tasks.
A Paradox: Satisfaction Without Authenticity
Post-task surveys uncovered unease. Heavy users deemed their work less creative and more distant from their true voice – statistically significant drops. Yet their satisfaction matched lighter users’, hinting at an illusion of quality.[1]
Thomas Juzek, a computational linguistics professor at Florida State University unaffiliated with the study, praised the insight: “What really struck me is this kind of illusion of using LLMs to perform a grammar check. This research shows that while a user might think they’re just doing a simple language check, the model is doing so much more.” Jaques advocated for AI that mirrors user intent: “An ideal LLM should write the essay that you would have written and just save you time.”[1]
Ripples Through Science and Beyond
The probe extended to 18,000 reviews from ICLR 2026. AI-generated ones – 21% of the sample – prioritized scalability and reproducibility over human emphases like clarity and relevance, assigning 10% higher scores overall.[2]
Jaques warned of broader stakes: “As LLMs are integrated into society, these subtle changes in meaning could fundamentally alter politics, culture, science.” She likened it to recommendation algorithms reshaping tastes and shunned AI for her paper, using it instead to spark ideas.
Key Takeaways
- Heavy AI use neutralizes arguments and formalizes style.
- Models overedit, shifting semantics beyond human norms.
- Users sense voice loss but remain satisfied.
- Institutional decisions, like conference reviews, diverge under AI influence.
This research spotlights a quiet transformation in expression, where efficiency trades against authenticity. As AI permeates writing – from emails to scholarship – its distortions demand scrutiny. What changes have you noticed in your own work? Share in the comments.

