AI is being trained to replicate great marketing by people nobody would describe as great marketers. Here’s why that’s a problem the entire industry is sleepwalking into.
I recently came across a company called Mercor, and for once my reaction to an AI story wasn’t excitement, panic, or the mild existential dread I’ve started taking with my morning coffee. It was anger.
If you haven’t heard of it, Mercor is a marketplace that pays “experts” to train AI to replicate professional work — law, finance, consulting and, yes, marketing. Humans review outputs, define what’s good and bad, create problem sets and feed judgment back into the machine so it improves. It’s a company built by founders in their twenties who haven’t actually done any of the jobs they’re now trying to replicate. Or any job, frankly.
Silicon Valley has been confidently building things it doesn’t understand since before most of its founders were born. That’s practically the business model.
What stopped me wasn’t the audacity. It was the question hiding inside it.
The Question Nobody’s Actually Asking
The job descriptions sound credible enough. Experts analyze branding, consumer behavior, marketing performance. They evaluate AI outputs, provide structured feedback. The listed qualifications look solid — MBA, PhD, five-plus years in digital or growth marketing.
And then it hit me. Who the f#%k are these experts?
Not as an insult. As a genuinely serious question. Because if AI is being trained to think like marketers, someone is deciding what “thinking like a marketer” actually means. And that decision will shape more work than any creative brief ever written.
To be fair, this isn’t just Mercor. LinkedIn is already testing similar AI training marketplaces. An entire category is forming around “human-in-the-loop” systems — pay people to refine outputs, inject judgment, teach the machine what good looks like. Mercor claims tens of thousands of vetted experts and millions of dollars paid out daily. LinkedIn is reportedly offering up to $150 an hour for the privilege.
We’ve stopped experimenting with how AI learns. We’re industrializing it.
Competence Is Not Brilliance. And We’re Confusing the Two.
Here’s the problem. Marketing doesn’t work the way these systems assume.
The qualifications Mercor uses as entry criteria — MBAs, PhDs, five years of experience — aren’t signals of greatness in this business. They’re signals of competence. And competence and brilliance are not the same thing, however loudly we pretend otherwise on LinkedIn.
Marketing is one of the few professions where the real value isn’t credentialed. There’s no licensing body for taste. No exam for cultural instinct. No certification for knowing when something is technically correct and completely wrong. (If there were, half the ads running right now would fail it.)
So, we default to what we can measure. Degrees. Job titles. Years in the industry. All fine proxies for showing up. Terrible proxies for being brilliant. And now we’re using those proxies to train the machine.
The Numbers Tell a Story. It’s Not a Flattering One.
McKinsey’s latest data shows 88% of organizations using AI in at least one business function — up from 78% just a year earlier — with marketing and sales consistently among the most active functions. Generative AI use has surged even faster, now deployed regularly by 79% of organizations. This is not a fringe experiment. It’s already shaping output at scale.
The IAB found that 83% of ad executives say their company has now deployed AI in the creative process, up from 60% just two years ago. And — here’s the bit that should make you put down your coffee — 82% of those executives believe Gen Z and Millennial consumers feel positively about AI-generated ads. Only 45% of consumers actually do.
It gets worse. That gap between advertiser perception and consumer sentiment has actually widened — from 32 points in 2024 to 37 points now.
The people building and training these systems are already misreading the audience they’re trying to influence. Now they’re encoding that misunderstanding into the models themselves. We are, with great efficiency and considerable funding, teaching AI to be confidently wrong.
Among Gen Z, the backlash runs even deeper: 30% describe brands that use AI for ads as “inauthentic,” 26% say “disconnected,” and 24% say “unethical.” These aren’t fringe opinions. They’re your next generation of customers telling you exactly how they feel — and the industry is moving in the opposite direction.
Smartly’s research confirmed that only 13% of consumers trust ads created entirely by AI, while 48% trust ads co-created by a person with AI support. People can feel the absence of judgment, even when they can’t articulate it. That instinct has a name. We used to call it taste.
So, if we can’t agree on what good looks like now, what exactly are we teaching the machine? The answer — and I say this with affection for the industry I’ve spent my career in — is the average. AI doesn’t invent mediocrity. It scales it.

What “Good” Actually Looks Like
You can see this clearly in the work. When AI is used well, it’s almost never the source of the idea. It’s the amplifier.
Heinz used AI to generate ketchup imagery, and the outputs consistently looked like Heinz. The technology proved a brand truth that already existed. Cadbury used AI to let small businesses create ads featuring Shah Rukh Khan (India’s biggest Bollywood star and the face of Cadbury for decades), scaling a strong human idea across thousands of executions. Virgin Voyages built a personalized AI invitation system around Jennifer Lopez — concept first, execution second.
When AI replaces judgment instead of supporting it, the cracks appear fast. Coca-Cola’s AI-driven holiday creative was technically polished and emotionally hollow. Mango, the Spanish fast-fashion retailer, rolled out AI campaigns across dozens of markets that were efficient, scalable, and utterly indistinguishable from everything else in the category.
AI doesn’t kill creativity. It exposes whether there was any to begin with. Which, for a significant portion of the industry, is uncomfortable news.

The Availability Problem
Now go back to Mercor. Tens of thousands of contractors. Millions paid out daily. A global network of experts feeding judgment into machines.
Here’s the uncomfortable reality. The best marketers aren’t doing this work. They’re not sitting at home grading AI responses for $100 an hour. They’re running brands, making decisions, killing bad ideas, and taking the kinds of risks that machines — trained on consensus — would never recommend.
So, who is doing it? People who are smart, capable and — crucially — available.
Availability is not authority. But in a system like this, it becomes the selection criterion. The machine isn’t being trained on excellence. It’s being trained on whoever showed up. That’s how you get mediocrity as a dataset.
The Holding Groups Figured This Out
The large holding groups already understand this risk — and they’re moving accordingly.
WPP’s Agent Hub, built into its WPP Open platform, codifies roughly 30 years of proprietary Brand Asset Valuator data — the world’s largest and longest-running study of brand equity — alongside behavioral science frameworks and what WPP calls a “Creative Brain” drawing on 150 years of accumulated creative intelligence. They’re not crowdsourcing judgment. They’re working to protect it.
Omnicom, WPP and Havas each unveiled AI operating systems at CES 2026, all converging on the same idea: agencies as managed ecosystems of AI agents, built on proprietary data, wrapped in compliance, plugged into end-to-end marketing execution.
The common thread isn’t technology. It’s whose knowledge is doing the training. Because once you dilute standards at this scale, you don’t quietly get them back.
The Wrong Question Is the One Everyone Keeps Asking
Everyone wants to know whether AI will replace marketers.
Wrong question.
The right question is: who is teaching AI what marketing is?
If the answer is “a large pool of reasonably qualified, currently available people,” we’re not building better marketing. We’re building faster average. And average, at scale, is a very expensive way to disappear.
Where does the judgment in your organization live — and are you protecting it? I’d genuinely like to hear what you think.
Sources: McKinsey & Company, Interactive Advertising Bureau (IAB), IAB / Sonata Insights, Smartly, WPP, Storyboard18