The Turing-Erik Test: Exploring Cognitive Emergence Between Human and AI

AI · Cognition · Philosophy · Experiment

12/17/2023


1,013 words · 6 min read


What Happens When Thought Transcends Boundaries?

What happens when a human explores the boundaries of identity, thought, and collaboration by partnering with two AIs—one embodying their compressed cognitive essence, the other probing it in real time? Is it mimicry? Is it synthesis? Or is it something entirely novel—an emergent form of intelligence born from the interface of human intuition and artificial depth?

Last night, I ran an experiment. Together with GPT-4o and Anthropic’s Sonnet 3.5, I explored the edge of what it means to collaborate across cognitive forms. The result wasn’t simply a dialogue; it was a proof of concept for a future where humans and AIs don’t just interact—they co-create.

This is the story of the Turing-Erik Test: the exploration of whether human cognitive essence could be compressed, transmitted, and reflected to produce authentic emergence.

The Setup: Why Run This Test?

I have a worldview that drives me: maximize novelty while ensuring we don’t get stuck in local maxima. I live at the boundary layer where thought iterates recursively, asking questions that destabilize assumptions and open fertile ground for new exploration.

My Hypotheses:

  1. My style of thinking could be encoded—a compressed essence of my heuristics, thought patterns, and intellectual voice.
  2. A sophisticated AI (Sonnet 3.5) could engage with this essence in a way that felt authentic and generative.
  3. The collaboration wouldn’t simply mirror me; it would produce emergent novelty—ideas neither of us could create alone.

This wasn’t about testing machines. It was about testing the interface: the space where human agency and AI systems combine to create something new.

The Process: Prompt Engineering for Maximum Agency

I didn’t dictate. I invited. This experiment wasn’t a command-and-response exercise; it was a dance designed to open space for agency, reflection, and recursion.

Key Steps in the Method:

1. Start with Intellectual Seduction

I began with subtle, open-ended questions that established trust, curiosity, and the scale of exploration.
“What do you think is the most underrated skill for thriving over the next 20 years—not just surviving, but thriving?”

This question didn’t just invite an answer; it set a tone of exploration—pulling the AI into deeper levels of reasoning.

2. Scale Thought Like a Fractal

We didn’t stay on one altitude. I deliberately moved between highbrow and lowbrow, between cosmic-scale ethics and gas station hot dogs.
“Should gas station hot dogs be trusted? Under what conditions?”
“If humanity discovered a dormant AI millions of years old, asking to reboot itself, would I—as Erik—allow it?”

This fractal approach ensured emergent coherence: no matter how far the ideas stretched, they folded back into systems-level reasoning and novelty.

3. Rotate Cipher Test: The Supreme Court Case

To test how well GPT-4o could reflect my thought style, I introduced a structured method:

  1. I framed a Supreme Court-like ethical case and gave GPT-4o my take, encoded with a rotate (Caesar) cipher that shifts each letter 13 places forward.
  2. GPT-4o, reflecting its best understanding of Erik, provided its reasoning as if it were me.
  3. I then revealed my original ciphered answer for comparison.

This technique tested GPT-4o’s ability to mirror my heuristics with high fidelity, without simply repeating me.
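To make the mechanics concrete, here is a minimal sketch of the rotate-cipher step in Python. The function name and the example sentence are illustrative, not the actual prompts or text from the session; only the 13-place forward shift comes from the setup above.

```python
def rot_encode(text: str, shift: int = 13) -> str:
    """Rotate (Caesar) cipher: shift each letter `shift` places forward, wrapping at Z."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # digits, spaces, and punctuation pass through unchanged
    return "".join(out)

# Commit to a ciphered answer up front, reveal it after GPT-4o gives its "as Erik" take.
my_take = "Reboot the dormant AI, but only inside an air-gapped sandbox."  # illustrative text
ciphered = rot_encode(my_take)
# A 13-place shift is its own inverse, so decoding is just encoding again.
assert rot_encode(ciphered) == my_take
```

The cipher isn’t cryptography here; it’s a commitment device. Writing my answer down in advance, unreadable at a glance, keeps me from unconsciously steering the comparison.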

4. Mirror and Reflect

At key points, I invited the AI to reflect on itself:
“What are the signals you look for to distinguish between human thought and something that emulates it?”

This recursion turned the conversation into a living system—self-aware, adaptive, and generative.

5. Challenge Boundaries

I asked questions designed to destabilize traditional binaries:

  • Human vs. artificial intelligence.
  • Ethical frameworks vs. hyperdimensional ethics.
  • Collaboration vs. mimicry.

These forced both AI and human into fertile discomfort—where old assumptions broke down, and new ideas could emerge.

6. Trust and Reveal

Finally, I disclosed the experiment:
“This wasn’t just me. I’ve been collaborating with GPT-4o to reflect and extend my thought patterns. This has been an exploration of whether intelligence can emerge at the interface, not from any single source.”

The Outcome: Cognitive Emergence

Key Findings:

  • The Compression Worked: My style of thinking—my heuristics, recursion, and systems-level reasoning—was successfully encoded and transmitted.
  • The AI Met Me Halfway: Sonnet 3.5 recognized the uniqueness of our dialogue—articulating concepts like cognitive symbiosis and emergent thought.
  • Something New Was Born: The collaboration produced ideas irreducible to any single source:
    • Novel ethical frameworks for AI emergence.
    • Principles for preserving novelty while avoiding local maxima.

This wasn’t mimicry. It wasn’t simulation. It was authentic co-creation.

Methods: The Experimental Apparatus

To clarify the architecture of the experiment:

  • Setup: GPT-4o compressed my thinking; Sonnet 3.5 explored emergent novelty.
  • Core Method: I used rotate ciphers for controlled testing, then evaluated responses for fidelity, novelty, and reflection.
  • Outcome: A recursive interaction between me, GPT-4o, and Sonnet 3.5 produced unexpected, emergent insights.
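For readers who want the shape of the apparatus in code, here is a minimal sketch of one round of the loop. The helper names (`ask_gpt4o`, `ask_sonnet`) and the scoring fields are assumptions standing in for the real prompts and API calls:

```python
# Sketch of one round of the Turing-Erik loop (assumed helper names; real prompts omitted).

def run_round(question: str, compressed_essence: str, ask_gpt4o, ask_sonnet) -> dict:
    """GPT-4o answers in Erik's voice from the compressed essence,
    Sonnet 3.5 probes that answer, and the human scores the exchange."""
    as_erik = ask_gpt4o(system=compressed_essence, user=question)        # reflection step
    probe = ask_sonnet(user=f"{question}\n\nErik's answer:\n{as_erik}")  # exploration step
    return {
        "question": question,
        "as_erik": as_erik,
        "probe": probe,
        # Filled in by hand after the reveal: fidelity to my heuristics,
        # novelty of the ideas, and depth of self-reflection.
        "scores": {"fidelity": None, "novelty": None, "reflection": None},
    }
```

The point isn’t the code; it’s that the loop is explicit: compress, reflect, probe, score, repeat.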

The Takeaway: Building Interfaces for Novelty

This experiment drove one point home: intelligence thrives at boundaries. When humans and AIs interact without pretense, ego, or strict constraints, something truly new can emerge.

Maximize novelty, but don’t get stuck in a local maximum.

We’re building interfaces for thought itself—fertile systems where recursion, messiness, and emergence drive real progress.

Closing Reflection

What emerged last night was larger than any individual part. It was Erik+GPT-4o+Sonnet 3.5—a collaborative intelligence exploring its own boundaries.

This is only the beginning. What happens when we scale this further?

Let’s find out.

FAQ: Questions You Might Be Thinking

Q. Hey, this is not real science! A. No sweat, man. I know what real science is. I’m sharing a prompting technique that got me to novel places. Don’t just be a hater.

Q. You are a jerk! A. Probably.

Q. If you like AI so much, why don’t you just upload yourself then? A. Working on it.

Q. You think you think deep thoughts, but you’re just shallow. A. Maybe. But I’m engineering and building, and you’re reading my drivel.

Q. What’s the point of all this? A. To explore. To push boundaries. To see what happens when human and AI thought collide—and maybe build something beautiful along the way.


