Why a podcaster gets paid more for chatting with a scientist than the scientist gets paid for the breakthrough. The economy isn't broken — it's reverting to the natural order. To be human is to be a storyteller. The 20th-century specialist bubble is the anomaly, not the rule.

I love podcasts. Early Lex. Dwarkesh. Dan Carlin. I respect the form.
But here's the beef I've been chewing on for years: Joe Rogan probably pulls nine figures over the life of his Spotify deal. Dwarkesh probably clears a few million a year. The scientists, founders, and engineers they interview — the people who actually make the thing — almost none of them out-earn the podcaster. And the podcaster's cut sits on top of Spotify's cut, which sits on top of Apple's cut, which sits on top of every gatekeeper from the App Store to the publishing house.
Layers of extraction. The distributor wins. The creator gets squeezed. Plastic lips, Botox, vapid tribal politics, Bill Nye worth more than every working PhD I can name — same broken fitness function. We optimize for visibility, not contribution.
That's where I started a long bathtub conversation with Claude a few nights ago. That's emphatically not where I ended up.
I tried to redesign the human status function. Reward compounding impact. Reward citations. Reward downstream breakthroughs and durable work. Punish virality. Reward the researcher; demote the influencer.
The redesign collapsed under the obvious counterexample: Rogan didn't fade after eighteen months. He compounded. So why does packaging information keep out-scaling creating it, in every era, in every medium?
Because attention is scarce and creation is abundant. One person can only do so much research; one person's platform can reach millions. Aggregation beats origination on the math, every time.
This is also the cleanest read on what a CEO actually does. We talk about CEO comp like it's about decisions, vision, or charisma. The underlying mechanic is simpler: a CEO captures the cognitive output of everyone in the company. A thousand engineers, designers, sellers, and operators waking up aligned on the same thing isn't a thousand units of cognition — it's aligned cognition, which compounds. The same thousand people waking up working against each other isn't a thousand units of anything; it's a smoking crater. So the CEO's real job is cognitive alignment of the organization — picking the narrative the company will inhabit and making it real enough that every employee can rehydrate it into their own daily decisions without checking in with anyone. That's a story job. The CEO is the chief storyteller — executive producer, director, and lead actor of a private broadcast network.
When the CEO is good, every brain in the building is amplifying the same world model and the company compounds. When the CEO is bad, the company fails — not slowly, not for vague reasons, but because cognition gets misaligned and the whole engine starts working against itself. The comp asymmetry isn't absurd; it's the market pricing the leverage. A great CEO's leverage isn't 1× their own cognition — it's a multiplier on the entire org's. A bad CEO is worse than absent, because they actively scramble the alignment that thousands of people would otherwise self-organize toward.
The same dynamic plays out one tier up. Daniel Newman captures the upside on my work at the Futurum Group because his megaphone touches the customer first; the rigor I add inside the model is downstream of the channel he owns. CEO does it inside the firm. Distribution-owner does it across firms. Same mechanic, different scope.
So I tried hating gatekeepers harder. And then it cracked open the other way:
Maybe story is the fundamental cognitive artifact of homo sapiens.
Not entertainment. Not communication. The fundamental cognitive artifact.
A story is world-model transmission. I have a mental model of how some chunk of reality works. I compress it lossily, package it for your nervous system, and you rehydrate it into your own mental model. That's the trick. That's the whole game.
Storytellers aren't extractors. They're cognitive banks. They hold compressed world models and lend them out at interest — you trade attention or deference for their translation of complexity. Like real banks, they can be efficient stewards or extractive parasites. But the function itself is essential.
Because thinking is expensive. First-principles reasoning means gathering your own data, building your own tools, running your own experiments, and proving every claim to yourself. Civilization can't run that way. It would be a million people each independently reboiling the ocean. Shamans, priests, kings, bards, CEOs, podcasters — they're all filling the same role: curating and compacting reality so groups can coordinate without everyone being an expert.
This isn't corruption. It's necessary compression. And — this is the part I missed for years — it's how it has always worked.
Here's the reframe that finally let me sleep on it.
The 20th century was an anomaly, not the rule. Industrial Revolution, two world wars, the petroleum age, the green revolution, antibiotics, the transistor — every one of these created a sudden, massive demand for narrow specialists faster than the pipeline of humans could produce them. Physicists, chemists, surgeons, structural engineers, software developers. The scarcity of specific expertise temporarily eclipsed the scarcity of narrative transmission.
For about a hundred years, the economy paid specialists more than storytellers. Doctors out-earned preachers. Engineers out-earned novelists. Scientists got Nobel money and TED talks and tenure. We told ourselves a story about meritocracy that was actually a story about which species of merit happened to be scarcest in a particular industrial moment.
That moment is ending.
Specialist work is becoming abundant. AI is doing it now. Junior code, basic legal research, radiology, copywriting, large chunks of accounting, first-pass medical diagnosis — all of it is collapsing in price toward zero. The expert isn't obsolete, but expertise as a moat is.
Storytelling remains scarce. The skill of compressing a world model into something another mind can carry — that doesn't collapse. If anything, it's the meta-skill that runs over the top of the now-abundant specialist work. Someone has to decide what gets built, what it means, and how it lands. Someone has to make the decisions actionable for people who don't want to read the whole spreadsheet.
So the economy isn't getting more absurd. It's reverting to the natural order.
And before that hits the obvious objection — "a shaman and a CEO aren't storytellers, they're leaders" — let me name the thing directly. Every one of those roles literally is a storyteller. That is the main function. The titles are just domain-specific costume.
The shaman tells the tribe why the hunt failed, why illness came, why the seasons turn. The priest tells you who God is, what God wants, and what your life means inside that story. The king is the human protagonist of a national story — the embodied narrative around which loyalty, taxation, and war get organized; strip the story off and "king" is just one more guy with a sword. The emperor is the same role at higher resolution — the us-versus-everyone-else narrative that welds disparate provinces into one coordinated mass. The general tells the story of why the men should die today rather than run. The judge tells the story of which side of the law the facts of the case land on. The CEO, as we just covered, is the chief storyteller of a company — the person whose narrative every employee runs as background process. The founder, the pop star, the novelist, the podcaster — same job, different costume, different audience size.
One ancient profession. Many uniforms. The person who transmits the compressed world model that other people then act inside. That's it. We dress it up with domain-specific titles, but the underlying role is identical across forty thousand years.
For every century before the 20th, that role was the highest-paid one in the room. The 20th-century scientist-and-engineer ascendancy was a beautiful interruption. The 21st century is the swing back.
If story is the most efficient structure for compressing a model and replicating it in another mind, then story may have been the selective pressure that produced human cognition — not the byproduct of it. Music probably predates symbolic language; rhythm and pattern-response show up in animals, plants, even mycelium. Bigger brains, longer childhoods, social bonding — all downstream of a simple gradient: whoever tells the better story coordinates better and survives.
We didn't invent storytelling. Storytelling invented us.
I went looking for a thinker who makes this claim at full strength and didn't find one — but I did find people who got close, in pieces. Michelle Scalise Sugiyama's "The storytelling arms race" (2019) gets closest on the evolutionary-causation pillar: she argues outright that storytelling was a selective pressure on human intelligence, framed as an arms race between truthful and deceptive narrative. Robert Shiller owns the markets pillar with narrative economics. Janus's "Simulators" and Murray Shanahan own the LLMs-as-narrative-completion-engine pillar. Joseph Campbell and Yuval Harari own pieces of the cultural pillar. The cleanest enemy of the strong form is Steven Pinker, for whom language is a biological instinct selected for general communication efficiency, and storytelling is a use of that faculty, not its cause.
But none of them, as far as I can tell, assemble it into a single stack — story as the substrate, humans as the runtime, with the same forty-thousand-year-old technology running underneath myth, religion, markets, CEO comp, and modern AI. Sugiyama owns the strong evolutionary claim; Shiller owns the markets piece; Janus owns the LLM piece. Nobody is wiring them together. That wiring is what this post is. If I've missed someone who already did, I want the citation.
To be human is to be a storyteller. To be a storyteller is to be human. Everything else — language, math, science, engineering, money, politics, religion, war — runs on top of that one operation.
A story is a lossy-compressed model of causal relationships within a domain, optimized for transmissibility and rehydration into actionable understanding.
Two virtues, separable: transmissibility (how easily the story moves from one mind to the next) and rehydration fidelity (how well it works when someone acts on it).
The best stories do both. A simple story that works beats a beautiful theory that fails.
Math, science, and engineering all live inside this frame. The Pythagorean theorem is a story — highly formalized, stripped of unnecessary detail, but fundamentally a narrative about how things relate: in right triangles, this relationship always holds. It's just compressed to its essential skeleton, which is why math is so powerful: it's storytelling optimized for maximum compressibility and zero ambiguity. Engineering is applied story. Science is the process of testing stories against the world and refining them. The scientific paper is the transmissible artifact; the real work product is the compressed causal model.
What's not story? Raw sensation. Pain. Grief in the moment. The taste of bathwater. Those are pre-narrative — the data that feeds story-generation. Everything else is story all the way down.
GameStop should have forced a reckoning. A coordinated narrative moved billions of dollars against the fundamentals. Economists shrugged it off as an anomaly. They're still defending the rational-actor model.
But GameStop was the cute anomaly. The structural one is bigger, quieter, and already won: passive index investing.
The story is four decades old at this point and it compresses beautifully — "index funds beat 86% of actively managed funds, cost almost nothing, you never have to pick a stock again." Bogle told that story. Vanguard, BlackRock, and a generation of personal-finance writers re-told it. It hydrated into a global retirement-saving habit. And then it just... kept compounding.
Today, roughly half of all US equity market cap is held by passive-mandate vehicles — index funds, target-date funds, 401(k) defaults, robo-advisors. In 2000 that number was around 3%. Those dollars are, by mandate, not price-sensitive. They don't care what Apple is worth; they're required to hold Apple at its current cap-weight. Because the top seven names are ~30% of the index, ~15% of every S&P 500 dollar flowing through the system is forced, non-price-seeking buying of seven companies — a share that grows every year another tranche of paychecks gets auto-deducted into a 401(k) default.
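That forced-flow figure is just the product of the two shares above. A minimal sketch using the post's own round numbers (a ~50% passive share and a ~30% top-seven cap-weight, not live market data):

```python
# Back-of-envelope for the forced-flow claim, using this post's round
# numbers rather than live market data.

passive_share = 0.50   # fraction of S&P 500 dollars under passive mandates
top7_weight = 0.30     # cap-weight of the seven largest index constituents

# A cap-weighted passive dollar buys each constituent in proportion to its
# weight, with no view on price. So the forced, non-price-seeking flow into
# the top seven is simply the product of the two shares:
forced_top7 = passive_share * top7_weight

print(f"{forced_top7:.0%} of every S&P 500 dollar is forced buying of 7 names")
# -> 15% of every S&P 500 dollar is forced buying of 7 names
```

And because both inputs grow (passive share with every 401(k) default, top-seven weight with every cap-weighted rally), the product compounds too.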
Read that twice. A story about how to invest is now allocating more capital into the top of the cap distribution than fundamental analysis of the underlying businesses is. Every fundamentals shop on Wall Street, combined, is being out-leveraged by a narrative that fits on a Post-it: "just buy the index, never sell."
The better storyteller wasn't a person. It was an idea — Bogle's — packaged so cleanly that it ate the meta-game. The hand-crafted, clumsy, individually-curated stories of fundamental analysts now move less of the world's capital than the single sentence "passively investing in the top index beats almost everyone." That's the largest economic structure humans have ever built, and it is being moved primarily by narrative coherence and habit, not by anyone's model of what the underlying companies are worth.
The math, the backtests, and what to actually do about it are in The Top Three and the Tundra. The point for this post is just that it happened. Story won. At the scale of $50 trillion.
A note on which stories win: not every narrative wins at this scale. Most don't. Bogle's compression survived four decades and ate the meta-game because the underlying empirical claim was true — the SPIVA scorecard has shown for two decades that the majority of active managers underperform their benchmark, and the gap widens with the time window. That tracks Scalise Sugiyama's 2019 framing of storytelling as an arms race between truthful and deceptive narrative: at compounding scale, truthful compressions outcompete deceptive ones because reality keeps falsifying the deceptive ones and only the survivors get retold. The dot-com narratives died. "Just buy the index, never sell" did not. So the strong form of this thesis isn't that story has displaced fact — it's that story is the package that lets compounding fact propagate. Story won because in this case the story was true, and reality let it scale.
Watch what Elon Musk is doing right now. SpaceX filed its S-1 in April 2026 for a June Nasdaq listing at a $1.75 trillion valuation and a $75 billion raise — the largest IPO in human history, by an order of magnitude. (Saudi Aramco's record was $29B.) Days before the filing, Nasdaq rewrote its index rules: the seasoning period before a new listing joins the Nasdaq-100 dropped from three months to fifteen trading days, and the 10% public-float requirement was eliminated outright. Both changes were timed to take effect May 1, 2026 — just in time. The reporting is unambiguous that the rules changed because SpaceX's team made clear it expected immediate index inclusion.
Translation: Musk isn't waiting for fundamental analysts to bless his price. He is positioning SpaceX to receive multi-billion-dollar mandatory passive flows fifteen trading days after the opening bell, regardless of whether $1.75T is sane. The passive-investing story does the buying. He just has to be eligible. The exchange rules — the soft constraint on who gets to receive the forced flow — got rewritten on his timeline.
And the dual-class share structure ensures the people whose retirement accounts are buying him never get a vote. Musk's Class B shares carry ten votes each: he keeps roughly 42% of the equity but ~79% of the voting power, only Class B holders can remove him as CEO, and he is the controlling Class B holder. Only Musk can fire Musk. The public gets cap-weighted exposure. Musk gets cap-weighted cash and keeps the keys.
This is what it looks like to read the meta-game cleanly: see that the truthful story of passive investing has become the largest forced-buying mechanism in human capital allocation, see that exchange rules are a soft constraint on who receives that flow, and walk a private company through the door at peak valuation while keeping total control. Musk isn't telling a true story. He's parasitic on a true one. That's the limit of the arms-race framing — even truthful narratives build infrastructure that deceptive players can hijack. Story-aware capital allocation. The price tag is in the trillions because he understood the leverage exists because someone else, decades ago, told the truth.
Markets aren't moved by fundamentals or by technicals. They're moved by whichever story coordinates the most capital. Fundamental analysis is one narrative ("the numbers tell the story"). Technical analysis is another ("the patterns tell the story"). Passive indexing is a third ("don't bother — just hold everything"). Whichever one recruits the most believers in the room wins.
This is why a hundred-thousand-dollar semiconductor model is worth a hundred grand. It's not a model. It's a narrative about how the market unfolds, structured as data. Customers aren't buying numerical accuracy — they're buying a coherent world model that tells them where to allocate. Fundamental and technical analysis are storytelling in different costumes.
The corollary, which I think is the actual product opportunity: if I can pre-walk the decision tree of possible futures and tell my customer "if you believe memory becomes the bottleneck, turn to page 175; if you believe China takes Taiwan and we lose a fab cycle, turn to page 18" — I'm not selling a model. I'm selling a choose-your-own-adventure through possible futures, with the implications already digested. That's worth way more than a spreadsheet, because I've collapsed the gap between understanding and action. The story is the product.
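A minimal sketch of the shape of that product. Every scenario name, page number, and implication here is invented purely for illustration; the point is the structure, a pre-walked belief branch mapped to an already-digested "so what":

```python
# Hypothetical "choose-your-own-adventure" research product: the reader
# commits to a premise and gets the branch with the thinking already done.
# All scenario names, pages, and implications below are made up.

from dataclasses import dataclass

@dataclass
class Branch:
    belief: str        # the premise the reader has to commit to
    page: int          # where the pre-walked analysis lives
    implication: str   # the digested conclusion for that branch

FUTURES = [
    Branch("memory becomes the bottleneck", 175,
           "allocate toward HBM suppliers and advanced packaging"),
    Branch("China takes Taiwan and we lose a fab cycle", 18,
           "hedge leading-edge exposure; trailing-edge capacity re-rates"),
]

def route(belief: str) -> Branch:
    """Collapse the gap between understanding and action: look up the
    branch whose premise matches the reader's belief."""
    for b in FUTURES:
        if b.belief == belief:
            return b
    raise KeyError(f"no pre-walked branch for: {belief!r}")

b = route("memory becomes the bottleneck")
print(f"turn to page {b.page}: {b.implication}")
```

The spreadsheet version makes the customer do the synthesis; this shape sells the synthesis itself.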
The hottest investment thesis on Earth right now is AI. The market knows it's important. The market can't articulate why it's important. Here's why:
Modern AI — transformers, diffusion models — is literally inferring the story to be told. It is not a search engine. It is not a knowledge base. It is a narrative-completion engine pattern-matching across every storyteller who ever wrote anything down. It's storytelling at the most advanced level humans have ever built. If story is the fundamental cognitive technology of homo sapiens, then advanced AI is the apotheosis of that technology.
This is why the panic about hallucinations misses the point — but the right way to say it is more careful than the slogan version, because the slogan version is wrong. Humans absolutely care about truth where it bites back fast and personally: bridges holding, antibiotics working, GPS routing through fog. We are ruthless empiricists in those domains and reality polices us into it. What humans actually do is select stories on rehydration usefulness, where usefulness is dominated by tribal/social payoff in every domain except where reality punishes inaccuracy fast enough and personally enough to override the story. The map that gets you home is a good map because the territory has constraints the map respects — and where the territory's constraints are slow or socially mediated, the map doesn't need to respect them on the timescales people live by.
Trump II is what the slow-feedback case looks like at scale. A population reading post-truth stories to itself in exactly the domains where the story pays better socially than the facts pay individually — vaccines at population scale, constitutional erosion, climate, institutional trust — while the same voters who reject epidemiological consensus still want the bridge on their commute to hold and the antibiotic to clear the infection. They didn't become epistemic nihilists about everything; they became selectively post-truth in the slow-feedback zones. That's a more damning observation about Homo sapiens than "humans don't care about truth," not less.
This makes AI's position worse, not better. LLMs operate primarily in the slow-feedback zone — meaning, identity, political reality, the long story of who we are. The hallucination panic catches AI errors in the fast-feedback domains where reality polices them anyway; the actual risk is that AI-mediated narrative eats ontology in the domains where reality punishes too slowly to correct. Same engine that let Trump II eat the Republican party's stated principles, bigger scale.
(Madison Avenue made this trade in the 1950s. Algorithmic feeds made it again in the 2010s. AI didn't introduce the problem; it accelerated a threshold we crossed two generations ago.)
The dangerous part isn't hallucination. It's convergence. LLMs are trained to generate plausible, engaging stories — not diverse ones. The more humans depend on AI storytellers, the narrower the underlying ontology becomes, even as the surface variety feels endless. A thousand stories, all silently agreeing on what's possible, what's true, what matters. That's the real risk: not that AI lies, but that it converges everything toward plausibility in exactly the domains where reality won't correct it back. (That's a separate post.)
I have a friend, Joel — brilliant engineer, infuriating thinker. Ask him a yes/no question and he enumerates every possible edge case, every adjacent permutation, with no cutoff.
Here's the part I want to be fair about: from inside Joel's cognitive frame, this is the most honorable thing he can possibly do. Compressing reality means losing fidelity. Picking three scenarios out of seventeen means lying by omission about the other fourteen. To Joel, the storyteller move — "here are the three that matter, here's my recommendation" — is dishonesty dressed up as helpfulness. He's holding the line on completeness. In his world model, that's integrity. He is not being lazy or evasive; he is being maximally rigorous on a fitness function that values truthfulness over usability.
And yet the people on the receiving end of Joel are universally exhausted, because what they actually want, often desperately, is "dude, please just do the thinking for me." They are explicitly asking for compression. They are consenting, eyes open, to lose some fidelity in exchange for getting a usable answer back today rather than next quarter. They want the lossy story. Joel keeps offering them the unbounded truth. The conversation can't close because one side won't do the compression and the other can't go any further without it.
That's this whole essay in miniature. Humans aren't demanding capital-T Truth. They're demanding compressed, transmissible, actionable world models — even when the cost is that some of the truth gets sanded off in the process. Joel's position — I would rather be complete than be heard — is genuinely defensible on its own terms. It just leaves him doing the storyteller's job badly while believing he's doing it best.
The mirror failure is the other extreme: charisma politicians, religion-as-comfort, the celebrity who says "don't worry, sweetheart, I already thought through it" — collapsing complexity all the way to zero so you don't have to think at all. The first hands you raw possibility space and abandons you. The second hands you a false synthesis and exploits you. Good storytelling sits in the middle: the three scenarios that actually matter, the implications, you choose.
Freshman year at USC. Course called Change in the Future. Required reading on the shoemakers of Lynn, Massachusetts, and the glassblowers of France — artisans displaced by technology. The assignment: write about it.
I lived it. My Hungarian immigrant stepfather and I installed glass and mirror in the homes of Oprah Winfrey and Tom Selleck. Took our shoes off at the door while carrying hundreds of pounds of glass that could break and slice an arm off. Spring break, summer break, blue-collar work under threat of injury. Semester time, aerospace engineering at USC. I wrote the paper from first principles — actual lived experience of the exact phenomenon the syllabus was about.
C-minus. Not because the grammar was wrong. Not because I missed the page count. Because the professor and the TA had no exposure to blue-collar work, so the story didn't land in a register they could feel.
I was more real than they were, but they controlled the rubric.
That's been sitting in me for twenty-five years. And it's the same dynamic — at much higher stakes — that wrote three-fifths of a human into the U.S. Constitution. Whoever controls the narrative frame controls whose lived experience counts. The people with the realest experience get marked down for failing to translate their reality into the gatekeeper's literary dialect.
Empires, religions, kingdoms, corporations — these are all measures of shared story equity. The shared story is the load-bearing infrastructure. Force is just what you use when the story cracks.
I started cynical about a system that pays podcasters more than physicists. I ended convinced the system is right — humans correctly reward whoever transmits world models most efficiently, because that's the actual scarce work. The 20th century paid specialists more for a brief, beautiful, anomalous moment. The 21st century is reverting to type. Story isn't extraction. Story is the foundational technology that made us human in the first place, and the economy is finally remembering that.
What I'm actually frustrated about isn't that storytellers exist. It's that most people have atrophied the skill of constructing their own narratives from experience. They consume world models instead of building them. Education, media, and algorithms have been quietly training that out of people for fifty years. AI doesn't introduce the problem — it makes the dependence obvious.
To be human is to be a storyteller. To be a storyteller is to be human. The economy is just remembering.
This post emerged from a long late-night voice conversation with Claude. The raw transcript and several spin-off threads (story-as-causal-to-homo-sapiens; the convergence-risk in AI-mediated cognition; markets-as-stories-all-the-way-down) are sitting in a drawer. If any of them lit something up for you, let me know and I'll pull the next one.
Published: May 2, 2026 12:46 AM
Last updated: May 2, 2026 2:21 AM