Clearing the Cognitive Market: A Prompting Technique for Unlocking AI Creativity

AI
Prompting
LLM
Claude
ChatGPT
Creativity
Ideation
Methodology

1/16/2026



TL;DR

The Problem: When you ask an LLM for "an idea," you get the most typical/expected answer. This is called mode collapse.

The Solution: Ask for 5-10 ideas. Then ask for 5-10 MORE ideas that are NOT redundant with the first set. Keep going until the new ideas start overlapping with old ones.

The Insight: When overlap increases, you've "cleared the cognitive market" - exhausted the useful idea space. Now you can stop exploring and start building.


The Discovery

In early 2022, while working on enterprise ideation tasks, I observed a frustrating pattern: LLMs consistently produced stereotypical responses when prompted for single instances. Ask for "a marketing idea" and you get the most generic, expected answer. Ask 10 different times and you get nearly identical responses.

Through experimentation, I developed Clearing the Cognitive Market (CCM) - a technique that systematically explores the full space of possible ideas rather than just the most probable ones.

The core innovation wasn't just asking for lists. It was the overlap-based stopping criterion: if the second set of ideas overlaps heavily with the first, you've cleared the market - exhausted the useful idea space. If the overlap is low, there's more space to explore.


Why LLMs Give You Boring Answers

Large language models are trained to produce the most likely next token. When you ask for "an idea," you get the statistically most common idea - which is, by definition, the most boring one.

This is called mode collapse - the model collapses to a single mode (peak) of the probability distribution instead of sampling from the full range of possibilities.

Think of it this way:

Probability Distribution of "Marketing Ideas"

     ▲
     │    ████
     │   ██████
     │  ████████        ██
     │ ██████████      ████   ██
     │████████████    ██████ ████   ██
     └─────────────────────────────────→
      "Social media"   "PR"  "Events" (etc.)
           ↑
    LLM always picks this

The LLM keeps returning to "social media campaign" because that's the most probable answer. But the interesting ideas - the ones that could actually differentiate your business - are in the long tail.


The CCM Protocol

Phase 1: Initial Enumeration

Prompt: "Enumerate 10 ideas about [topic]"
Output: Set A = {idea₁, idea₂, ..., idea₁₀}
Human Action: Review and internalize Set A

Phase 2: Constrained Expansion

Prompt: "Enumerate 10 MOAR ideas about [topic] -
         DO NOT repeat or be redundant with previous ideas"
Output: Set B = {idea₁₁, idea₁₂, ..., idea₂₀}
Human Action: Calculate overlap |A ∩ B|

Phase 3: Market Clearing Test

IF overlap is HIGH (roughly >40%):
    → Market CLEARED - stop exploration
ELSE:
    → More space to explore - continue to Phase 2 with Set C
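
A minimal sketch of the protocol as a script, with the human still making the stopping call. ask_llm() is a hypothetical helper (not a real library function) that sends the running conversation - a list of role/content dicts - to whatever chat model you use and returns the reply as a string:

# A human-in-the-loop sketch of the CCM protocol.
# ask_llm() is a hypothetical helper: it sends the whole conversation
# (a list of {"role": ..., "content": ...} dicts) to whatever chat model
# you use and returns the assistant's reply as a string.

def clear_cognitive_market(topic: str, batch_size: int = 10) -> list[str]:
    conversation = [{
        "role": "user",
        "content": f"Enumerate {batch_size} ideas about {topic}. Be specific and actionable.",
    }]
    rounds: list[str] = []

    while True:
        reply = ask_llm(conversation)   # Phase 1 (first pass) / Phase 2 (later passes)
        rounds.append(reply)
        print(reply)

        # Phase 3: the human reviews the round and judges overlap with earlier rounds.
        if input("Market cleared? [y/N] ").strip().lower() == "y":
            return rounds

        # Keep every earlier round in the conversation so the anti-redundancy
        # instruction has something concrete to push against.
        conversation.append({"role": "assistant", "content": reply})
        conversation.append({
            "role": "user",
            "content": f"Enumerate {batch_size} MORE ideas about {topic}. "
                       "DO NOT repeat or be redundant with the previous ideas. "
                       "Explore unconventional angles, edge cases, or counter-intuitive approaches.",
        })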

A Practical Example

Let's say you're brainstorming features for a project management app.

Round 1: Initial Ideas

"Give me 10 feature ideas for a project management application"

Claude/GPT returns:

  1. Task lists with due dates
  2. Team collaboration features
  3. Kanban board view
  4. Calendar integration
  5. File attachments
  6. Comments and mentions
  7. Progress tracking
  8. Notifications and reminders
  9. Mobile app
  10. Reporting dashboard

These are all... fine. Expected. The kinds of features every competitor already has.

Round 2: Force Non-Redundancy

"Give me 10 MORE feature ideas for this project management app - but DO NOT repeat or be redundant with the previous list. Focus on unconventional approaches, edge cases, or things competitors overlook."

Now you might get:

  1. Async video updates - Record 60-second status updates instead of writing
  2. Work-in-progress limits - Automatically flag overcommitted team members
  3. Meeting-free mode - Schedule blocks where tasks can't generate meetings
  4. Decision log - Track not just tasks but WHY decisions were made
  5. Energy-based scheduling - Match high-focus tasks to high-energy times
  6. Scope creep detector - Alert when requirements change mid-project
  7. Stakeholder satisfaction pulse - Regular micro-surveys to stakeholders
  8. Technical debt tracker - Categorize tasks as "new work" vs "paying down debt"
  9. Automated retrospectives - AI summarizes what went well/poorly from activity data
  10. Cross-project dependencies - Visualize how projects block each other

Much more interesting! Several of these are genuinely novel.

Round 3: Test the Market

"5 more ideas, still non-redundant with everything above"

If you get:

  1. AI-powered task prioritization (similar to #5)
  2. Real-time collaboration (similar to #2 from round 1)
  3. Integration with Slack (generic)
  4. Custom workflows (similar to Kanban from round 1)
  5. Time tracking (obvious, should have been in round 1)

High overlap with previous rounds. The market is cleared.

Outcome

From 25 generated ideas, you found 3-5 genuinely novel features worth pursuing. Without the CCM technique, you would have stopped at "task lists and Kanban boards" - the same features everyone else has.
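
Driven from code instead of a chat window, the anti-redundancy constraint is nothing more exotic than the previous round sitting in the conversation history. A minimal sketch of rounds 1 and 2 using the Anthropic Python SDK (the model name is illustrative; swap in whatever model and provider you actually use):

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # illustrative; use whatever model you prefer

# Round 1: initial enumeration
round1 = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Give me 10 feature ideas for a project management application.",
    }],
)
set_a = round1.content[0].text

# Round 2: constrained expansion - the previous list stays in the context window
round2 = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Give me 10 feature ideas for a project management application."},
        {"role": "assistant", "content": set_a},
        {"role": "user", "content": "Give me 10 MORE feature ideas - but DO NOT repeat or be "
                                    "redundant with the previous list. Focus on unconventional "
                                    "approaches, edge cases, or things competitors overlook."},
    ],
)
set_b = round2.content[0].text
# You, the human, now read set_a and set_b and judge the overlap.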


Why This Works: Three Levels of Diversity

I think about LLM prompting in three levels:

Level 1: Instance-Level Prompting

Prompt: "Tell me an idea about X"
Result: Mode collapse to single stereotypical response

Level 2: List-Level Prompting

Prompt: "Tell me 5 ideas about X"
Result: Some diversity, but often repetitive themes

Level 3: Constrained Expansion (CCM)

Prompt: "Tell me 10 ideas" → review → "Tell me 10 MOAR, non-redundant"
Result: Explores multiple modes across calls with human guidance

The magic happens at Level 3 because you're not just asking for more - you're explicitly telling the model to avoid its previous outputs, forcing it into less-traveled parts of the probability space.


The Human-in-the-Loop Is Essential

A crucial insight: this technique requires human judgment. You can't fully automate it.

The human provides:

  1. Implicit Negative Examples: When you review Set A, you internalize what "redundant" means for the next prompt
  2. Semantic Reframing: Based on gaps you observe, you adjust the prompt ("focus on B2B use cases" or "what about mobile-first features?")
  3. Quality Filtering: You detect when the model starts degrading (repetition, nonsense, hallucinations)
  4. Market Clearing Judgment: You know, based on domain expertise, when you've exhausted the interesting space

The Market Clearing Test in Detail

How do you know when to stop?

The Venn Diagram Mental Model

Round 1 → Round 2:

    Set A          Set B
   ┌─────┐       ┌─────┐
   │     │       │     │
   │  ○○○│───────│●●●  │
   │  ○○○│  Low  │●●●  │
   │     │overlap│     │
   └─────┘       └─────┘

   Conclusion: More to explore!

Round 2 → Round 3:

    Set B          Set C
   ┌─────┐       ┌─────┐
   │     │▓▓▓▓▓▓▓│     │
   │  ●●●│▓▓▓▓▓▓▓│◆◆◆  │
   │  ●●●│ High  │◆◆◆  │
   │     │overlap│     │
   └─────┘       └─────┘

   Conclusion: Market cleared!

Practical Threshold

In my experience:

  • <20% overlap: Lots more to explore
  • 20-40% overlap: Getting diminishing returns, but might be worth one more round
  • >40% overlap: Market is cleared, stop exploring
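
If you do want an actual number (the next section argues you usually don't need one), a crude word-overlap check gets you surprisingly far. A minimal sketch, assuming you've pasted each round's ideas into a Python list; it uses Jaccard similarity on word sets as a stand-in for semantic redundancy, so treat the result as a rough signal, not a measurement:

def _words(idea: str) -> set[str]:
    # Lowercased word set; crude, but good enough for a rough redundancy check.
    return set(idea.lower().split())

def is_redundant(idea: str, earlier_ideas: list[str], threshold: float = 0.5) -> bool:
    # An idea counts as redundant if it shares most of its words with any earlier idea.
    w = _words(idea)
    for earlier in earlier_ideas:
        e = _words(earlier)
        union = w | e
        if union and len(w & e) / len(union) >= threshold:
            return True
    return False

def overlap_ratio(new_round: list[str], earlier_rounds: list[str]) -> float:
    # Fraction of the new round that looks like a repeat of earlier rounds.
    repeats = sum(is_redundant(idea, earlier_rounds) for idea in new_round)
    return repeats / len(new_round)

# Example: if 2 of 5 round-3 ideas look like repeats, overlap_ratio(...) is 0.4 -
# right at the "market is cleared" threshold above.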

You'll Know It When You See It

Honestly, you don't need to calculate percentages. After a few rounds, you develop intuition:

  • "Wait, idea #3 is basically the same as idea #7 from round 1"
  • "These all feel like variations on a theme now"
  • "Nothing here surprises me anymore"

That's the market clearing.


Advanced Techniques

Drilling Down

When you find a promising idea, drill deeper:

"Idea #7 (decision log) is interesting. Give me 10 specific ways to implement a decision log feature - different UI approaches, data models, or integration patterns."

Now you're clearing the market on a specific sub-problem.

Forced Perspectives

If the model keeps returning similar themes, force a perspective shift:

"Give me 10 ideas, but from these perspectives:

  • A teenager who's never used project management software
  • A CEO who has 30 seconds to check on a project
  • An engineer who hates meetings
  • A freelancer managing 20 small clients"

Counterfactual Prompting

"Give me 10 feature ideas that would be terrible for most companies but perfect for a specific niche. What's the niche, and why would this feature kill it for them?"

This often surfaces unexpected gems.

Negative Space Exploration

"What features do ALL project management apps have that users actually hate? Give me 10 ideas for removing or replacing common features."

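If you find yourself reusing these moves, it can help to keep them as templates. The strings below are adapted from the prompts above; the dict and placeholder names are just one convenient way to organize them:

# Reusable prompt templates for the advanced moves above.
# {topic} and {idea} are placeholders you fill in per session.
ADVANCED_PROMPTS = {
    "drill_down": (
        "Idea '{idea}' is interesting. Give me 10 specific ways to implement it - "
        "different UI approaches, data models, or integration patterns."
    ),
    "forced_perspectives": (
        "Give me 10 ideas about {topic}, but from these perspectives: a first-time user, "
        "an executive with 30 seconds to spare, an engineer who hates meetings, "
        "and a freelancer managing 20 small clients."
    ),
    "counterfactual": (
        "Give me 10 ideas about {topic} that would be terrible for most companies but "
        "perfect for a specific niche. What's the niche, and why would it be a killer "
        "feature for them?"
    ),
    "negative_space": (
        "What do ALL products in {topic} have that users actually hate? "
        "Give me 10 ideas for removing or replacing those common features."
    ),
}

prompt = ADVANCED_PROMPTS["drill_down"].format(idea="decision log")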

Where CCM Shines

I've used this technique across dozens of real-world applications:

Strategic Planning

  • Competitive positioning options
  • New market entry strategies
  • Partnership opportunities
  • Risk scenarios

Product Development

  • Feature ideation (as shown above)
  • User persona generation
  • Use case discovery
  • Edge case identification

Sales & Marketing

  • Objection handling responses
  • Value proposition variations
  • Campaign concepts
  • Audience segmentation

Content Creation

  • Blog post topics
  • Video series concepts
  • Social media angles
  • Newsletter themes

Problem Solving

  • Root cause hypotheses
  • Solution alternatives
  • Implementation approaches
  • Failure mode analysis

What CCM Is Not

Not a Replacement for Expertise

CCM helps you explore the possibility space, but you still need domain expertise to evaluate which ideas are good. The technique generates candidates; you still have to select.

Not Fully Automatable

You could build a script that keeps requesting more ideas, but it would miss the point. The value comes from human judgment guiding the exploration.

Not for Every Task

If you need the single best answer (not multiple options), standard prompting is fine. CCM is for ideation, brainstorming, and exploration - not for factual queries or routine tasks.


Comparison to Other Techniques

vs. Just Asking for More

Simply asking for "20 ideas instead of 10" doesn't work as well. The model front-loads obvious answers and then pads with variations. By splitting into rounds with explicit anti-redundancy, you force genuine exploration.

vs. Temperature Tuning

Increasing temperature adds randomness but also reduces quality. CCM maintains quality while increasing diversity because you're guiding the exploration, not just adding noise.

vs. Multiple Independent Queries

Running 10 separate prompts gives you 10 versions of the "most likely" answer. CCM's explicit anti-redundancy constraint forces the model to avoid its defaults.
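
The structural difference is simply whether the calls share history. A sketch of both patterns, reusing the same hypothetical ask_llm() helper from the protocol sketch earlier:

# Ten independent queries: every call starts from scratch, so each one
# gravitates back to the same high-probability answers.
independent = [
    ask_llm([{"role": "user", "content": "Give me an idea for marketing a new app."}])
    for _ in range(10)
]

# CCM: one growing conversation, so each round is explicitly pushed away
# from everything that came before it.
history = [{"role": "user", "content": "Give me 10 ideas for marketing a new app."}]
first_round = ask_llm(history)
history += [
    {"role": "assistant", "content": first_round},
    {"role": "user", "content": "Give me 10 MORE - do not repeat or be redundant with the list above."},
]
second_round = ask_llm(history)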


Getting Started

Try this today:

  1. Pick a topic you need ideas about

  2. Initial prompt:

    "Give me 10 ideas about [topic]. Be specific and actionable."

  3. Review the list. What patterns do you notice? What's missing?

  4. Expansion prompt:

    "Give me 10 MORE ideas about [topic]. DO NOT repeat or be redundant with the previous list. Explore unconventional angles, edge cases, or counter-intuitive approaches."

  5. Review and compare. How much overlap? Any surprises?

  6. Repeat until the market clears (high overlap, diminishing novelty)

  7. Select the best ideas and drill down on those


The Deeper Insight

CCM isn't just a prompting trick. It's a mindset shift about how to work with AI.

Most people use LLMs as answer machines: ask question, get answer, done.

CCM treats LLMs as exploration partners: ask for options, review together, push for more, identify when you've exhausted the space, then decide.

This is closer to how you'd work with a smart human collaborator. You wouldn't ask them for "the answer." You'd brainstorm together, challenge each other's assumptions, and keep pushing until you felt confident you'd considered the important possibilities.

The market clearing test gives you a principled stopping rule. Without it, you'd either stop too early (missing good ideas) or keep going forever (wasting time on diminishing returns).


Summary

Clearing the Cognitive Market (CCM):

  1. Ask for 5-10 ideas
  2. Review them
  3. Ask for 5-10 MORE that are NOT redundant
  4. Repeat until new ideas heavily overlap with old ones
  5. Stop when the "market clears" - you've exhausted the useful space
  6. Select the best ideas and build

Why it works: Forces the model out of mode collapse by explicitly requiring non-redundant outputs across multiple rounds.

Key insight: The human-in-the-loop is essential. Your judgment guides the exploration and recognizes when it's complete.

Try it today. Next time you need ideas, don't accept the first response. Clear the cognitive market.


Where to Apply CCM

This technique works with any LLM - Claude, GPT-4, Gemini, Llama, or any of the 90+ models available today. The key is the human-in-the-loop iterative process, not the specific model.

If you're working with AI at scale - especially in team or enterprise settings - Bike4Mind provides a cognitive workbench that makes CCM-style iterative exploration even more powerful:

  • Multi-model access lets you run the same CCM session across different models to see how they explore the space differently
  • Mementos (automatic memory) remember your previous CCM rounds, so you can reference past explorations
  • Team collaboration allows multiple people to contribute to the market-clearing process simultaneously
  • Quest Master can automate the initial enumeration phases while you focus on analysis

For command-line workflows, B4M CLI brings these capabilities to your terminal.


This methodology was developed through three years of production deployment at Bike4Mind. It predates recent academic work on "verbalized sampling" and "distribution-level prompting" while arriving at similar conclusions through practical experimentation.

For the full technical paper with experimental results and enterprise use cases, get in touch or email: erik at bike4mind dot com


