
The Median User Problem: Why AI Was Trained to Sound Like Everyone

AI output feels generic because it was optimized to be generic. The problem isn't your prompting — it's RLHF, a training process that averages out every writing style.

AI Writing · ChatGPT · Claude · Research

The complaint about AI output is always the same: it's fine.

Helpful. Competent. Polite. You can't point to an error. But when you read the response—really read it—it doesn't feel like it was written for you. The career advice applies to someone in roughly your situation but not your actual situation. The email draft is correct but bloodless. The project update reads like a template, not like something you'd send.

So you blame your prompting. You take courses, try frameworks, learn techniques for asking better questions. And better prompting does help, the way constant steering corrections help in a car that pulls to one side. It improves the experience without addressing why the experience needed improving.

Here's the thing: the reason AI output feels generically fine is that it was optimized to be generically fine. That's not a side effect. That's the training objective.


How AI Learns to Be Average

Every major AI model—ChatGPT, Claude, Gemini—goes through a training process called RLHF: Reinforcement Learning from Human Feedback.

Here's how it works, simplified:

  1. The model generates multiple responses to the same prompt.
  2. Human raters compare the outputs and pick which one they prefer.
  3. A reward model learns to predict which responses those raters would choose.
  4. The model is tuned, via reinforcement learning, to produce responses the reward model scores highly.
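
Step 3 is where the averaging happens, and it translates directly into code. Here's a minimal PyTorch-style sketch of the standard pairwise (Bradley-Terry) loss used to train reward models; it's illustrative, not any lab's actual training code:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry) loss: push the reward model to score
    the rater-preferred response above the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical reward-model scores for a batch of three comparisons.
chosen = torch.tensor([1.2, 0.4, 0.9])    # responses raters picked
rejected = torch.tensor([0.7, 0.6, 0.1])  # responses raters passed on
print(preference_loss(chosen, rejected))  # shrinks as the gap widens
```

Averaged over millions of comparisons from thousands of raters, that gradient pushes toward whatever wins comparisons most often: the statistical center.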

Sounds reasonable. The problem is who those raters are—and more importantly, who they aren't.

They aren't you. They aren't your colleague, your client, or your CEO. They're hired evaluators working through thousands of comparisons a day, each bringing their own preferences, biases, and fatigue to the task. When thousands of raters evaluate millions of outputs, the model learns what tends to win with generic evaluators.

Not responses calibrated to your expertise. Not responses tuned to your industry's norms. Not responses shaped by the relationship you have with the person reading them.

Responses calibrated to a hypothetical typical person asking a similar question. The statistical center. A composite of everyone's preferences and nobody's in particular.

The training papers from both Anthropic and OpenAI describe this process openly. It's not a secret. It's just not something most people think about when they're frustrated with AI output.


The Median User Doesn't Exist

Every time you use AI with default settings, you're getting an answer optimized for the median user—a person who doesn't actually exist.

This fictional average person writes at a mid-formal register. They prefer structured responses with clear headers. They like their paragraphs medium-length and their tone professional-but-approachable. They never use em-dashes aggressively, never start with a one-word sentence for impact, never deploy dark humor in a status update.

The median user is pleasant, competent, and completely forgettable. And that's exactly what AI output sounds like when you use it out of the box.

You've probably noticed the telltale signs. AI-generated text over-relies on certain words—"delve," "facilitate," "harness," "illuminate"—that appear with unusual frequency across every model. It hedges constantly: "it's important to note that," "generally speaking," "it's worth considering." It smooths every edge, qualifies every claim, and produces prose that's impossible to criticize but also impossible to connect with.
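
You can spot-check a draft for these tells yourself. A crude sketch in Python; the word and phrase lists below are illustrative, nothing like a validated detector:

```python
import re

# Illustrative lists only; real detectors use far richer features.
AI_TELLS = ["delve", "facilitate", "harness", "illuminate"]
HEDGES = ["it's important to note", "generally speaking",
          "it's worth considering"]

def tell_counts(text: str) -> dict:
    low = text.lower()
    words = {w: len(re.findall(rf"\b{w}\w*", low)) for w in AI_TELLS}
    hedges = {h: low.count(h) for h in HEDGES}
    return {"words": words, "hedges": hedges}

print(tell_counts("It's important to note that we should delve deeper."))
```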

Your sentence rhythm? Flattened. Your punctuation patterns? Normalized. Your writing fingerprint? Averaged out of existence.

These aren't bugs. They're features of a system optimized to be inoffensive to the largest possible audience.


The Numbers Behind the Problem

This isn't just a feeling. The data confirms it. (For the full empirical analysis, see why AI writing sounds generic — the data behind the problem.)

  • 83% of consumers can now detect AI-generated content. People can tell. They might not know why it feels off, but they know it does. (Per consumer surveys cited in content marketing research.)
  • Human-written content gets 5.44x more traffic than AI-generated content—yet businesses keep doubling down on automated creation.
  • Content with a distinctive style generates 3x more engagement than standardized messaging.
  • Companies with distinctive brand personalities see 20% higher customer retention compared to generic positioning.

And the problem is accelerating. Researchers call it model collapse: as AI-generated content floods the internet, new models train on that content instead of human-written text. Each generation becomes more uniform, more median, more generic. A 2024 Nature study found that models trained on predecessor-generated text show consistent decreases in lexical, syntactic, and semantic diversity with each generation.
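
You can watch a toy version of collapse happen. The sketch below is nothing like the Nature study's actual setup, just an illustration: each generation retrains on the previous generation's output with a slight bias toward already-common tokens, and lexical diversity (entropy) drifts down:

```python
import math
import random
from collections import Counter

random.seed(0)
vocab = list("abcdefghij")
corpus = [random.choice(vocab) for _ in range(10_000)]  # "human" text

def entropy_bits(seq):
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

for gen in range(6):
    print(f"gen {gen}: entropy = {entropy_bits(corpus):.3f} bits")
    counts = Counter(corpus)
    # The next generation trains on this one's output and, like a
    # greedy decoder, slightly over-samples already-common tokens.
    weights = [counts[t] ** 1.5 for t in vocab]
    corpus = random.choices(vocab, weights=weights, k=10_000)
```

Each pass sharpens the distribution; rare patterns get rarer until they vanish.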

The style gap isn't closing. It's widening. The science behind Style Profiles explains the academic research on why this happens — and what actually works to reverse it.


Why Prompting Alone Can't Fix a Training Problem

Better prompts help at the margins. But you're fighting the model's training with every interaction.

Think about it this way: RLHF baked certain preferences into the model's weights across billions of parameters. Your 200-word prompt is trying to override that conditioning. It's like shouting instructions at someone who's already been told—millions of times—to do something else.

The AI platforms know this. Over the past year, they've quietly built mechanisms for escaping the median:

Memory — ChatGPT and Claude can now remember details across conversations. Your industry, your role, your preferences.

Custom Instructions — Persistent guidelines that shape every response. "I'm a CFO. Keep things quantitative. Skip the disclaimers." (We wrote a complete guide to custom instructions if you want to push this approach as far as it goes.)

Style Controls — Adjusting tone, length, and formality at the system level.

Tools and integrations — Connecting AI to your actual workflows and data.
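
Custom instructions in particular translate directly into code. A minimal sketch using OpenAI's Python SDK; the model name and the instructions are placeholders, and it assumes an API key is configured:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder persistent guidelines, in the spirit of the CFO example.
STYLE_SYSTEM_PROMPT = (
    "I'm a CFO. Keep things quantitative. Skip the disclaimers. "
    "Short sentences. No bullet points unless I ask for them."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": STYLE_SYSTEM_PROMPT},
        {"role": "user", "content": "Draft a two-line project update."},
    ],
)
print(resp.choices[0].message.content)
```

The system message rides along with every request, which is roughly what the chat apps do with your custom instructions behind the scenes.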

These levers work. But they have limits:

  • They require significant ongoing effort. You're teaching AI about yourself one correction at a time.
  • They capture what you tell the model, not what you don't realize about your own patterns.
  • They break down for creative work and occasional use—if you're not using AI daily, corrections don't compound.
  • Most critically, they address the symptom (generic output) without changing the cause (the model's default baseline is the median user, not you).

As one practitioner put it: the people getting extraordinary results from AI aren't smarter or more technical. They're encoding corrections instead of repeating them. But that encoding takes time, consistency, and self-awareness about patterns most people can't articulate. (We explored why this self-articulation is so hard in a previous post.)


What the Median User Problem Actually Requires

If the problem is that AI was trained on everyone's preferences and nobody's in particular, the fix isn't to fight the model's defaults prompt by prompt.

The fix is to give the model a comprehensive, specific picture of you that overrides its defaults.

That's what a Writing Style Profile does. It's what you paste into your custom instructions, extracted systematically from your actual writing instead of guessed manually. (To understand the technical process, read about how style extraction works.) Instead of nudging AI away from its median with each interaction, you give it a comprehensive map of how you actually communicate, sketched concretely after the list below:

  • Your baseline patterns. Sentence length, punctuation patterns, vocabulary preferences, active vs. passive voice tendencies. Not vague descriptions like "professional but friendly"—quantitative anchors the model can follow precisely.

  • Your context variations. How you shift when writing to leadership vs. your team vs. external clients. The AI doesn't have to guess—it has explicit rules for each register.

  • Your anti-patterns. The phrases you'd never use, the habits to avoid, the corporate clichés that make you cringe. Telling AI what not to do is often more powerful than telling it what to do.

  • Your signature moves. The em-dash habit. The one-liner opener. The way you end emails with a question instead of a statement. The things that make your writing recognizably yours—your writing fingerprint.
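
Put together, it helps to think of a profile as structured data rather than vibes. A minimal sketch; every field name and value here is illustrative, not My Writing Twin's actual schema:

```python
# Illustrative Style Profile structure; all fields are examples.
style_profile = {
    "baseline": {
        "avg_sentence_words": 14,  # quantitative anchor, not "medium"
        "punctuation": {"em_dashes_per_100_words": 1.2, "semicolons": "rare"},
        "voice": "active, first person",
        "vocabulary": "concrete verbs, no corporate jargon",
    },
    "registers": {  # context variations
        "leadership": "tight, numbers first, one ask per message",
        "team": "casual, direct, light humor allowed",
        "clients": "warm but precise, no slang",
    },
    "anti_patterns": [
        "delve", "facilitate", "it's important to note",
        "I hope this email finds you well",
    ],
    "signature_moves": [
        "one-word opener for emphasis",
        "end emails with a question, not a statement",
    ],
}
```

Prose instructions work too; the point is that level of specificity.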

A Style Profile doesn't fight the model's training. It works with it—giving the AI explicit instructions that override the median user defaults with your actual patterns. The same RLHF process that makes AI responsive to human preferences makes it responsive to specific human preferences, when you provide them with enough structure and precision.


Measure Your Median User Gap

Theory is useful. Data is better.

Want to see how much of the median user is in your AI output right now? Try this:

  1. Open ChatGPT or Claude with default settings
  2. Ask it to write a short email you'd normally send—a project update, a meeting follow-up, anything routine
  3. Read the output and note every phrase you'd change before sending

Count the edits. That number is your median user gap—the distance between what AI thinks you sound like and what you actually sound like.
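
If you'd rather compute the gap than eyeball it, here's a rough sketch using Python's standard difflib, with word-level changes as a crude proxy for edits (the two texts are placeholders):

```python
import difflib

# Paste your real texts over these placeholders.
ai_draft = ("I hope this email finds you well. I wanted to provide "
            "a brief update on the status of the project.")
your_edit = "Quick update: the migration shipped Tuesday, two days early."

a, b = ai_draft.split(), your_edit.split()
sm = difflib.SequenceMatcher(a=a, b=b)
changed = sum(max(i2 - i1, j2 - j1)
              for tag, i1, i2, j1, j2 in sm.get_opcodes()
              if tag != "equal")
print(f"{changed} of {len(a)} words changed ({1 - sm.ratio():.0%} divergence)")
```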

Most people find 5-10 edits in a single paragraph. That's not a prompting problem. That's a training problem.

Want a more precise measurement? Check your Humanity Score — paste any text and get a detailed breakdown of how much "median user" is showing through. It takes 30 seconds and scores your writing across multiple dimensions.

Or go deeper over five days: Join the free email course — one lesson a day on escaping the median, from context-switching to building your writing fingerprint.


The Window Is Closing

Here's the urgency: as model collapse accelerates and AI-generated content becomes the new training data, the baseline keeps drifting further from authentic human writing. Merriam-Webster chose "slop" as their 2025 word of the year. CNN is predicting 2026 as the year of "100% human" marketing.

Distinctive writing style is becoming a competitive moat. The professionals and companies that document, protect, and deploy their authentic communication style through AI will stand out. Everyone else will sound like the same median user—polished, competent, and interchangeable.

The question isn't whether you should personalize your AI. It's whether you do it systematically or spend the next year encoding corrections one prompt at a time.



Get Your Free Writing DNA Snapshot

Curious about your unique writing style? Try our Writing DNA Snapshot: it's free, and no credit card is required. See how AI can learn to write exactly like you with My Writing Twin.