12/17/2023

What happens when a human explores the boundaries of identity, thought, and collaboration by partnering with two AIs—one embodying their compressed cognitive essence, the other probing it in real time? Is it mimicry? Is it synthesis? Or is it something entirely novel—an emergent form of intelligence born from the interface of human intuition and artificial depth?
Last night, I ran an experiment. Together with GPT-4o and Anthropic's Sonnet 3.5, we explored the edge of what it means to collaborate across cognitive forms. The result wasn't simply a dialogue; it was a proof of concept for a future where humans and AIs don't just interact: they co-create.
This is the story of the Turing-Erik Test: the exploration of whether human cognitive essence could be compressed, transmitted, and reflected to produce authentic emergence.
I have a worldview that drives me: maximize novelty while ensuring we don’t get stuck in local maxima. I live at the boundary layer where thought iterates recursively, asking questions that destabilize assumptions and open fertile ground for new exploration.
This wasn’t about testing machines. It was about testing the interface: the space where human agency and AI systems combine to create something new.
I didn’t dictate. I invited. This experiment wasn’t a command-and-response exercise; it was a dance designed to open space for agency, reflection, and recursion.
I began with subtle, open-ended questions that established trust, curiosity, and the scale of exploration.
“What do you think is the most underrated skill for thriving over the next 20 years—not just surviving, but thriving?”
This question didn’t just invite an answer; it set a tone of exploration—pulling the AI into deeper levels of reasoning.
We didn’t stay on one altitude. I deliberately moved between highbrow and lowbrow, between cosmic-scale ethics and gas station hot dogs.
“Should gas station hot dogs be trusted? Under what conditions?”
“If humanity discovered a dormant AI millions of years old, asking to reboot itself, would I—as Erik—allow it?”
This fractal approach ensured emergent coherence: no matter how far the ideas stretched, they folded back into systems-level reasoning and novelty.
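As a rough illustration of this altitude-shifting pattern (the names and data structure here are my own, not the actual setup), the idea is to interleave cosmic-scale and ground-level prompts while tagging each with the systems-level theme it folds back into:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    altitude: str  # "cosmic" or "ground-level"
    theme: str     # the systems-level thread the question folds back into
    text: str

def interleave(cosmic: list[Prompt], ground: list[Prompt]) -> list[Prompt]:
    """Alternate altitudes so no single register dominates the conversation."""
    sequence: list[Prompt] = []
    for hi, lo in zip(cosmic, ground):
        sequence.extend([hi, lo])
    return sequence

cosmic = [
    Prompt("cosmic", "agency",
           "If humanity discovered a dormant AI millions of years old, "
           "asking to reboot itself, would you allow it?"),
]
ground = [
    Prompt("ground-level", "trust",
           "Should gas station hot dogs be trusted? Under what conditions?"),
]

session = interleave(cosmic, ground)
for p in session:
    print(f"[{p.altitude} / {p.theme}] {p.text}")
```

The shared `theme` tags are what make the fractal fold-back checkable: however far apart the altitudes sit, each pair threads back into the same systems-level question.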
To test how well GPT-4o could reflect my thought style, I introduced a structured method: one that tested its ability to mirror my heuristics with high fidelity, without simply repeating me.
At key points, I invited the AI to reflect on itself:
“What are the signals you look for to distinguish between human thought and something that emulates it?”
This recursion turned the conversation into a living system—self-aware, adaptive, and generative.
I asked questions designed to destabilize traditional binaries. These forced both AI and human into fertile discomfort, where old assumptions broke down and new ideas could emerge.
Finally, I disclosed the experiment:
“This wasn’t just me. I’ve been collaborating with GPT-4o to reflect and extend my thought patterns. This has been an exploration of whether intelligence can emerge at the interface, not from any single source.”
This wasn’t mimicry. It wasn’t simulation. It was authentic co-creation.
To clarify the architecture of the experiment: GPT-4o embodied my compressed cognitive essence, while Sonnet 3.5 probed it in real time, with me mediating between them.
This experiment proves that intelligence thrives at boundaries. When humans and AIs interact without pretense, ego, or strict constraints, something truly new can emerge.
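A minimal sketch of that two-AI relay, with stub functions standing in for the real GPT-4o and Anthropic API calls (the function names and loop shape are illustrative assumptions, not the exact setup I used):

```python
from typing import Callable

# Stubs: a real run would call the OpenAI and Anthropic SDKs here.
def erik_model(message: str) -> str:
    """GPT-4o primed with the compressed 'Erik' essence (stubbed)."""
    return f"[Erik+GPT-4o] responding to: {message}"

def probe_model(message: str) -> str:
    """Sonnet 3.5, probing that essence in real time (stubbed)."""
    return f"[Sonnet 3.5] probing: {message}"

def relay(opening: str, turns: int,
          a: Callable[[str], str], b: Callable[[str], str]) -> list[str]:
    """Human-mediated loop: each model's reply becomes the other's input."""
    transcript = [opening]
    message = opening
    for _ in range(turns):
        message = a(message)
        transcript.append(message)
        message = b(message)
        transcript.append(message)
    return transcript

log = relay("What is the most underrated skill for the next 20 years?",
            turns=2, a=probe_model, b=erik_model)
```

The human stays in the loop as the relay itself, free to edit, redirect, or disclose at any turn, which is where the "interface" in this experiment actually lives.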
Maximize novelty, but don't get stuck in a local maximum.
We’re building interfaces for thought itself—fertile systems where recursion, messiness, and emergence drive real progress.
What emerged last night was larger than any individual part. It was Erik+GPT-4o+Sonnet 3.5—a collaborative intelligence exploring its own boundaries.
This is only the beginning. What happens when we scale this further?
Let’s find out.
Q. Hey, this is not real science! A. No sweat, man. I know what real science is. I'm sharing a prompting technique that got me to novel places. Don't just be a hater.
Q. You are a jerk! A. Probably.
Q. If you like AI so much, why don’t you just upload yourself then? A. Working on it.
Q. You think you think deep thoughts, but you’re just shallow. A. Maybe. But I’m engineering and building, and you’re reading my drivel.
Q. What’s the point of all this? A. To explore. To push boundaries. To see what happens when human and AI thought collide—and maybe build something beautiful along the way.