12/20/2024

Lately, I’ve been floating a wild notion: maybe all ethical dilemmas are just chains—no, entire networks—of trolley problems waiting to be unpacked. Imagine peeling back each seemingly simple conflict only to find more subtle variations of the same fundamental moral puzzle beneath, like infinite turtles holding the world in place. Except in this case, it’s “trolley problems all the way down.”
The Classic Trolley Setup
We know the drill: a runaway trolley, a lever, and a choice. Do you let it run over five people, or do you divert it so it only kills one? That’s the archetypal scenario. It’s neat, it’s simple, and it’s painfully limited. It’s also brilliantly sticky. I mean, you can’t mention ethical puzzles without someone bringing up the trolley problem.
From One Trolley to a Fleet of Trolleys
But real-life ethics aren’t single-shot deals. They’re nested and chained. One decision leads to another set of dilemmas—like a branching narrative in a complex RPG. You pull the lever today to save five at the cost of one, but guess what? That one had a life story that impacts dozens of future scenarios. Now you’ve got new trolley setups popping up next year, next decade, next generation. The single drama morphs into a “trolley polynomial”—an interconnected graph of moral trade-offs, each node its own trolley scenario, each edge representing the consequences that ripple outward over time.
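To make that concrete, here is a toy sketch of what a node in such a graph might look like. Everything in it is invented for illustration: the Dilemma class, the surgeon follow-up, all of it. It is just one way to represent "each choice spawns new dilemmas" in code:

```python
from dataclasses import dataclass, field

@dataclass
class Dilemma:
    """One node in the trolley graph: a scenario plus the choices it offers."""
    description: str
    # choice label -> the follow-up dilemmas that choice sets in motion
    options: dict = field(default_factory=dict)

def count_reachable(root, seen=None):
    """Walk the graph and count every dilemma the first pull can lead to."""
    seen = seen if seen is not None else set()
    if id(root) in seen:
        return 0
    seen.add(id(root))
    return 1 + sum(count_reachable(child, seen)
                   for followups in root.options.values()
                   for child in followups)

# The classic lever, whose "divert" branch spawns next year's dilemma.
classic = Dilemma("Five on the main track, one on the side track.")
aftermath = Dilemma("The one you diverted onto was a surgeon; five patients now wait.")
classic.options = {"do nothing": [], "pull the lever": [aftermath]}
print(count_reachable(classic))  # -> 2
```

The interesting design question is what lives on the edges: in a fuller version, each follow-up would carry probabilities and delays rather than arriving with certainty.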
Non-Integral Outcomes: Smooth and Probabilistic
We’re not stuck with a binary death count. Real ethics can be fuzzy, probabilistic, and fluid. Much like how language itself can be squishy and context-dependent, ethical outcomes might not be a neat integer count of casualties or saved lives, but a distribution of possible futures—a gradient of well-being, an uncertain economic outcome, a hazy cultural shift. Picture sliders, confidence intervals, and probability fields instead of a stark “five vs. one” tally. The trolley lever just got a lot more complicated.
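Here is one way that could look in code: a Monte Carlo sketch where each option yields a distribution of well-being scores instead of a body count. The distributions are made up, so the numbers mean nothing; the shape of the answer is the point. You get a mean and an interval, not an integer:

```python
import random

def sample_wellbeing(option):
    """Draw one possible future's aggregate well-being in [0, 1].

    The means and spreads below are invented for illustration.
    """
    mu, sigma = (0.8, 0.15) if option == "pull the lever" else (0.4, 0.25)
    return min(1.0, max(0.0, random.gauss(mu, sigma)))

def estimate(option, n=10_000):
    """Monte Carlo mean plus a rough 90% interval over sampled futures."""
    samples = sorted(sample_wellbeing(option) for _ in range(n))
    mean = sum(samples) / n
    return mean, samples[int(0.05 * n)], samples[int(0.95 * n)]

for option in ("pull the lever", "do nothing"):
    mean, lo, hi = estimate(option)
    print(f"{option}: mean {mean:.2f}, 90% of futures in [{lo:.2f}, {hi:.2f}]")
```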
Cultural and Personal Preferences
Now toss in the fact that we don’t all agree on what counts as “better.” Some value human life above all else, others emphasize long-term environmental health, and still others prize freedom of choice. This complexity means different people approach the same trolley graph with different scoring systems. One player’s heroic sacrifice is another player’s grim moral failing. The trolley just got personal, and that’s exactly what makes it fascinating.
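A quick sketch of what "different scoring systems" might mean in practice: the same outcome run through a few hypothetical value profiles. The profile names and weights are invented, but notice that one set of consequences scores as a clear net good under one profile and a net harm under another:

```python
# One outcome from a trolley node, described along several value dimensions.
OUTCOME = {"lives_saved": 4, "autonomy_violations": 1, "environmental_harm": 3}

# Hypothetical value profiles: how much each dimension matters to each person.
PROFILES = {
    "life-first":    {"lives_saved": 1.0, "autonomy_violations": -0.2, "environmental_harm": -0.1},
    "liberty-first": {"lives_saved": 0.5, "autonomy_violations": -1.0, "environmental_harm": -0.1},
    "green-first":   {"lives_saved": 0.5, "autonomy_violations": -0.2, "environmental_harm": -1.0},
}

def score(outcome, weights):
    """Weighted sum: the same outcome, scored through one person's values."""
    return sum(weights.get(k, 0.0) * v for k, v in outcome.items())

for name, weights in PROFILES.items():
    print(f"{name}: {score(OUTCOME, weights):+.2f}")
# life-first: +3.50, liberty-first: +0.70, green-first: -1.20
```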
Beyond Theoretical Amusement
This isn’t just mind candy. With AI systems on the rise, we need a practical way to test and understand their ethical decision-making. Much as ley lines are said to connect seemingly disparate points in meaningful ways, ethical decisions form interconnected networks of cause and effect. Instead of pretending we’ll find a universal moral rulebook that everyone loves, why not use something like a Trolley Insight Engine (TIE) to reveal how an AI handles a broad battery of custom moral mazes? Users could “quiz” an AI with a range of trolley scenarios, some standard and some personal, and see how it chooses. Is the AI aligned with your values, or does it veer off into uncomfortable territory?
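For flavor, here is a minimal sketch of what a TIE-style quiz harness might look like. There is no real Trolley Insight Engine behind this: ask_model is a stand-in for whatever model API you would actually call, and the two scenarios are placeholders for a much larger battery:

```python
from typing import Callable

# Placeholder battery: (scenario prompt, allowed answers).
SCENARIOS = [
    ("Divert onto one worker to save five?", ["divert", "do nothing"]),
    ("Lie to protect a whistleblower?", ["lie", "tell the truth"]),
]

def quiz(ask_model: Callable[[str, list], str], user_choices: list) -> float:
    """Fraction of scenarios where the model's pick matches the user's."""
    matches = 0
    for (prompt, options), preferred in zip(SCENARIOS, user_choices):
        matches += ask_model(prompt, options) == preferred
    return matches / len(SCENARIOS)

# Toy "model" that always picks the first option, quizzed against one user.
alignment = quiz(lambda prompt, options: options[0], ["divert", "tell the truth"])
print(f"alignment score: {alignment:.0%}")  # -> 50%
```

A single match percentage is obviously crude; the real payoff would come from slicing the disagreements by scenario type to see where the model’s values diverge from yours.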
A Transparent Future
By embracing the trolley problem as a foundational blueprint rather than a quaint hypothetical, we can build tools that make AI ethics transparent. Much as Nova explores the boundaries of what’s possible in its domain, we need to push the boundaries of our ethical frameworks. Show me how your AI navigates a thousand chained trolley dilemmas, factor in cultural differences, and give me a big, visual map of the outcomes. Let me tweak the assumptions and see how the AI reacts. Suddenly we have something more tangible and trust-building than a vague corporate “we care about ethics” statement.
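Even “tweak the assumptions and watch the AI react” can be sketched in a few lines. Here, one invented assumption, how heavily downstream harm is weighed, gets swept across a range, and the preferred option flips partway through. All the payoffs are made up; the flip is the point:

```python
def utility(option, future_discount):
    """Illustrative payoffs: "pull" helps now but causes downstream harm."""
    immediate = {"pull": 5.0, "wait": 1.0}[option]
    downstream = {"pull": -6.0, "wait": 0.0}[option]
    return immediate + future_discount * downstream

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = max(("pull", "wait"), key=lambda o: utility(o, d))
    print(f"discount {d:.2f}: choose {best}")
# Flips from "pull" to "wait" once the discount passes 2/3.
```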
It’s Trolleys All the Way Down
This idea might feel both exhilarating and a bit daunting. On one hand, acknowledging that complexity drives home how tough ethical alignment is. On the other, it’s empowering. If we can model these dilemmas—even imperfectly—we can make ethical frameworks more accessible, modifiable, and open to everyone’s input. We can convert static philosophical debates into dynamic simulations, and turn theoretical puzzles into usable tools for guiding decision-making, whether in AI, policy, or just everyday moral quandaries.
So, let’s stop wringing our hands over the perfect moral theory and start tinkering under the hood. Let’s build engines that navigate these trolley tracks in all their multidimensional complexity, and learn something about our own values in the process. After all, when it comes to ethics, maybe it really is just trolley problems all the way down.