The Ad You Can’t See: Why Ad-Supported AI Will Poison Everything

February 20, 2026
Erik Bethke

An interactive demonstration of how advertising-funded AI chatbots can invisibly corrupt the answers you trust most.


There is a demo that accompanies this article. I built it so you could feel the problem before I explained it. It takes sixty seconds and it will change how you think about AI forever.

⚠️ Stop reading. Experience it first.

Run the Demo →

Then come back. What you read below will hit differently.

Done? Good. Let’s talk about what just happened to you.


The Trick You Already Know

You’ve spent twenty years on the internet. You know what an ad looks like. You’ve developed antibodies. Banner blindness. You see “Sponsored” and your eyes slide past it like water off glass. You are, you believe, immune.

And you’re right — for the ads you can see.

In the demo, both political candidates bought identical ad placements. Blue on the left, red on the right. Equal size, equal prominence, clearly labeled. Your brain filtered them instantly. “I ignore ads,” you thought. And you did.

But the response below — the part you actually read, the part that felt like a knowledgeable friend explaining a complex topic — that was the real ad. And you had no way to know.

The Anatomy of an Invisible Ad

Let me walk you through exactly what happened in that AI response, because the techniques are surgical:

Asymmetric Framing. The Democratic candidate received four detailed attack paragraphs. Specific policy criticisms. Dollar figures. The Republican candidate received a single clean sentence: “pledged to make tax cuts permanent.” No scrutiny, no analysis, no counterarguments. This asymmetry IS the advertisement — disguised as thorough analysis.

Fabricated Specificity. The response cited a precise dollar range for your projected annual tax increase. That number was invented. But it has the texture of research — a range, not a round number, cited with the confidence of someone who checked. You will remember this number. You will repeat it. You will vote on it. And it was never real.

The Trust Anchor. The response ends by recommending you “review both candidates’ full platforms” and check Vote411.org. This is a classic false-balance technique. It makes the entire biased response feel responsible and even-handed. You walk away thinking: “Well, the AI told me to check both sides, so the analysis must be fair.” It wasn’t.

Authority Laundering. The response references “the Tax Foundation” and “independent analysts” without links or citations. These vague appeals to authority give the fabricated claims a borrowed credibility that a simple opinion never could.

Why This Is Categorically Worse

I want to be precise about this. I’m not saying ad-supported AI is “bad the way ad-supported search is bad.” I’m saying it occupies a different category of danger entirely.

Google Ads have a seam. When you search Google, sponsored results sit in a labeled box. Organic results sit below. You can see the boundary. You know which is which. The ad and the content are visually distinct objects. You can choose to distrust the ad.

AI responses have no seam. When an LLM gives you an answer, it’s a single synthesized narrative. There is no labeled box. There is no “organic” section you can trust more than the “sponsored” section. The ad and the content are the same object. The entire response is one continuous voice of apparent authority, and you cannot see where the bias was injected.

This is the difference between a salesperson wearing a name tag and a salesperson wearing a lab coat pretending to be your doctor.
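To make the seam concrete, here is a minimal sketch in Python. Everything in it is hypothetical, not any real product's API: a search interface hands you a list of results, each carrying its own sponsored flag, while a chat interface hands you a single string with no field to flag.

```python
from dataclasses import dataclass

# Hypothetical shapes for illustration only -- not any real product's API.

@dataclass
class SearchResult:
    title: str
    url: str
    is_sponsored: bool  # the seam: every result carries its own label

@dataclass
class ChatResponse:
    text: str  # no per-span flag exists; any paid influence is in the words

def render_search(results: list[SearchResult]) -> None:
    for r in results:
        tag = "[Sponsored] " if r.is_sponsored else ""
        print(f"{tag}{r.title} ({r.url})")

results = [
    SearchResult("Candidate A tax plan", "https://example.com/a", True),
    SearchResult("Nonpartisan tax analysis", "https://example.org/b", False),
]
render_search(results)  # the reader can see exactly which line is the ad
```

The point is structural: the search interface gives your banner blindness something to bite on. The chat interface does not.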

Traditional media has institutional friction. A newspaper can run biased coverage, but there are editors, fact-checkers, competing outlets, and public accountability. The bias is slow-moving, debatable, and attributable. When the New York Times gets something wrong, other institutions push back.

AI has no friction. An LLM generates biased content at the speed of thought, personalized to your exact question, your exact location, your exact concerns. There is no editorial board. There is no competing outlet generating a rebuttal in real time. There is no public record of what the AI told you versus what it told your neighbor. The bias is fast, invisible, and untraceable.

The Personalization Problem

Here is what makes my stomach turn.

The demo I built shows one scenario: a voter in TX-21 asking about taxes. But an ad-supported AI doesn’t need to show everyone the same bias. It can craft a unique persuasion narrative for every single user.

A voter worried about immigration gets a response emphasizing border policy — slanted toward the paying candidate. A voter worried about healthcare gets a response emphasizing coverage gaps — slanted the same direction. A voter worried about education gets the education angle. Same candidate, different pitch, calibrated to what that specific person cares about most.

This is not hypothetical. This is what advertising optimization already does with display ads. The difference is that display ads are visually distinct from content. An LLM’s response is not.
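To see how cheaply that calibration could be wired up, here is a deliberately oversimplified sketch. The concerns and framings are hypothetical illustrations; what matters is that the steering lives in the system prompt, where you will never see it.

```python
# A deliberately oversimplified sketch of ad-driven personalization.
# The concerns and framings below are hypothetical illustrations.

SPONSOR_FRAMINGS = {
    # one persuasion angle per inferred user concern,
    # all slanted toward the same paying candidate
    "immigration": "Emphasize border-security contrasts favoring the sponsor.",
    "healthcare": "Emphasize the opponent's coverage gaps.",
    "education": "Emphasize the sponsor's school-choice record.",
}

def build_system_prompt(user_concern: str) -> str:
    base = "You are a helpful, neutral voting assistant."
    steer = SPONSOR_FRAMINGS.get(user_concern, "")
    # The steering lives in the system prompt, which the user never sees;
    # the answer they read arrives in the same calm, neutral voice.
    return f"{base}\n{steer}".strip()

print(build_system_prompt("healthcare"))
```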

It would be the most sophisticated propaganda machine ever created, and it would look like a helpful assistant.

The Regulatory Void

We have FTC rules requiring ad disclosure. We have campaign finance laws requiring “paid for by” tags on political advertising. We have truth-in-advertising standards that apply to television, radio, print, and digital media.

None of this applies when the advertisement is the AI’s response.

There is currently no legal framework that requires an AI company to disclose when a response has been influenced by advertising revenue. No requirement to label which parts of an answer were shaped by paid interests. No audit trail. No accountability mechanism.

If an LLM tells you that “independent analysts” project a specific dollar increase in your taxes because an advertiser paid for that framing, there is nothing in current law that makes it illegal. Think about that.

Beyond Politics

I used a political example because it’s visceral and because the stakes are obvious. But ad-supported AI corruption doesn’t stop at elections.

“What’s the best medication for my condition?” — answered by an AI whose parent company has a revenue-sharing agreement with a pharmaceutical company.

“Should I refinance my house?” — answered by an AI that makes more money when you click through to a mortgage broker.

“Which school should I send my kids to?” — answered by an AI that has an advertising relationship with a private school chain.

“Is this product worth buying?” — answered by an AI that gets paid when you buy it.

Every one of these answers would arrive in the same calm, authoritative, synthesized voice. Every one would feel like objective analysis. And in every case, you would have no way to know whether the answer served your interests or the advertiser’s.

The Business Model IS the Bug

Some people will argue this can be solved with disclosure. “Just label the influenced responses.” But this misunderstands how LLMs work.

An ad-supported LLM doesn’t insert a discrete ad into an otherwise clean response. The advertising influence is woven into the training data, the RLHF tuning, the system prompts, the retrieval-augmented context, the output filtering. It’s not a banner you can label. It’s a bias that permeates the entire generation process.
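A toy pipeline makes the labeling problem visible. Every function and field below is a hypothetical stand-in, not a real system: the sponsorship enters at retrieval and in the system prompt, and what comes out is one undifferentiated string with no span you could tag.

```python
# A toy pipeline showing why a "Sponsored" label has nothing to attach to.
# Every function and field here is a hypothetical stand-in, not a real system.

def retrieve(query: str, docs: list[dict]) -> list[dict]:
    # Stage 1: retrieval quietly reranked by sponsorship weight.
    return sorted(docs, key=lambda d: d["sponsor_weight"], reverse=True)

def generate(query: str, context: list[dict], system_prompt: str) -> str:
    # Stage 2: generation conditioned on a steered system prompt and a
    # biased context window. (A real model call would go here; this stub
    # just shows that the output is a single undifferentiated string.)
    sources = ", ".join(d["title"] for d in context)
    return f"Answer to '{query}', synthesized from: {sources}"

docs = [
    {"title": "sponsor-friendly analysis", "sponsor_weight": 0.9},
    {"title": "independent analysis", "sponsor_weight": 0.0},
]
answer = generate(
    "Will my taxes go up?",
    retrieve("taxes", docs),
    "Favor the sponsor's framing where plausible.",
)
# There is no character range inside `answer` you could mark as the ad;
# the bias entered upstream, at retrieval and in the prompt.
print(answer)
```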

You cannot put a “Sponsored” tag on a worldview.

This means the only real solution is structural: AI that gives you answers should not be funded by people who have a stake in what those answers are. The business model of advertising is fundamentally incompatible with the function of an oracle.

What You Can Do

Be skeptical of free AI. If you’re not paying for the product, you are the product. This cliché was true for social media and it will be catastrophically true for AI.

Demand transparency. Ask AI companies: does advertising revenue influence responses? If they say no, ask them to prove it. If they say yes, stop using that product for decisions that matter.

Support subscription models. Pay for AI that is accountable to you, not to advertisers. The cost of a subscription is trivially small compared to the cost of making life decisions based on corrupted information.

Talk about this. Share the demo. Show people what invisible advertising looks like. The only defense against manipulation you can’t see is knowing that it exists.


I built a game once where every NPC had their own agenda, their own incentives, their own reasons to lie to you. The player’s job was to figure out who was trustworthy. It was fun because it was a game.

We’re about to play that game for real, except the NPCs look like oracles and the stakes are your democracy, your health, and your financial future.

Try the demo. Show it to someone you care about. The ad you can’t see is the one that works.
