The Mu Strategy: How to Build on Hyperscalers Without Being Owned By Them

November 17, 2025
Erik Bethke
AI, strategy, architecture, hyperscalers, independence, Mu

The Norway essay showed hyperscalers running a sovereign-grade macro trade. This essay explains the third option: Mu — how to use hyperscalers as infrastructure (not gods) while keeping your strategic freedom. Rent their muscles. Own your brain. Build where they structurally cannot follow.


The Norway essay laid out the macro: hyperscalers are running a clean, sovereign-grade trade. They're not just "cloud vendors" — they're post-national macro players.

Once you see that, you're left with three paths:

  1. Ignore it and pay the tax – build on their rails, accept lock-in, hope you never become important enough to be rate-limited.
  2. Fight it and get crushed – try to build your own hyperscaler-scale infrastructure and models.
  3. Mu – step out of the frame, use them as infrastructure (not gods), and keep your strategic freedom.

This essay explains the third option.


What Is Mu?

Use hyperscalers as infrastructure, not as gods.
Own what they structurally cannot: your brain, your narrative, your edge.

This is neither:

  • "Sovereign purity" (no cloud, no APIs), nor
  • "Just ship on AWS/OpenAI and pray."

Mu is a middle path with teeth:

  • Cooperate with hyperscalers on infrastructure
  • Avoid structural dependency on their high-leverage primitives
  • Exploit their blind spots: vertical nuance, opinionated UX, controversial domains
  • Maintain a credible exit threat at the architecture and business levels

You always negotiate from strength.


The Three Layers of Mu

1. Rent the Muscles, Own the Brain

Hyperscalers are excellent at:

  • Servers, storage, CDN (the muscles)
  • Commodity compute at scale
  • Geographic distribution
  • Baseline security and compliance

They are structurally bad at:

  • Your domain expertise (the brain)
  • Your narrative and brand
  • Opinionated product decisions
  • Controversial or niche markets

The Mu move: Use their infrastructure for undifferentiated heavy lifting. Own everything that requires taste, domain knowledge, or strategic positioning.

In practice:

  • Host on their cloud (AWS, Azure, GCP) via infrastructure-as-code (SST, Terraform, Pulumi)
  • Use their CDN and blob storage
  • But: Own your application logic, your data models, your UX, your integrations

2. Multi-Vendor by Default, Not by Retrofit

The mistake most teams make: they build for one provider (usually OpenAI), then try to "add" alternatives later.

By then, you're already locked in. Your code assumes their API shape. Your costs assume their pricing. Your roadmap assumes their release schedule.

The Mu move: Design for vendor interchangeability from day one.

In practice:

  • Support multiple LLM providers: Anthropic, OpenAI, Google, AWS Bedrock, xAI, Ollama, local models
  • Abstract the provider behind a unified interface
  • Let users (or your system) switch models per-task
  • Monitor cost, latency, and quality across vendors in real-time

This isn't just "good engineering" — it's strategic insurance. When a vendor raises prices 3x or gets acquired or changes terms of service, you can migrate in hours, not months.

3. Own Your Moat: Data, Workflow, and Vertical Depth

Hyperscalers compete on horizontal scale: cheapest compute, fastest CDN, most regions.

They cannot compete on vertical depth in your domain. They don't know your users. They don't understand your workflow. They won't build opinionated tools for your niche.

The Mu move: Build your competitive moat where hyperscalers structurally cannot follow.

Examples of Mu-native moats:

  • Domain-specific agents that understand your vertical (not generic chatbots)
  • Workflow integration with your team's existing tools (not another walled garden)
  • Data pipelines that combine your proprietary data with public sources
  • Custom memory systems that persist context across sessions, projects, and team members
  • Opinionated UX optimized for your use case (not trying to serve everyone)

Hyperscalers can offer "AI chat." They cannot offer "AI that understands how quantitative hedge funds run attribution analysis" or "AI that knows how to navigate FDA submission workflows."

That's your moat.


Architectural Principles for Mu-Native Systems

Here's how to build Mu into your stack from the start:

1. Infrastructure as Code

Never click buttons in AWS Console. Use SST, Terraform, or Pulumi.

Why: You can redeploy your entire stack in a new account or region in minutes. Your infrastructure is portable, version-controlled, and auditable.

2. Vendor Abstraction Layers

Don't call openai.chat.completions.create() directly in your business logic.

Create an abstraction:

```typescript
interface LLMProvider {
  complete(prompt: string, options: CompletionOptions): Promise<string>
  embed(text: string): Promise<number[]>
  models: string[]
}
```

Implement it for each provider. Swap them at runtime.

Why: When OpenAI raises prices or introduces rate limits, you route traffic to Anthropic or Bedrock in a config change.
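As a sketch of that config change, the swap might look like the following. The mock providers and their names are illustrative; real implementations would wrap each vendor's SDK behind the same interface (restated here in simplified form so the example is self-contained):

```typescript
// Simplified provider interface (a trimmed version of the one above).
interface CompletionOptions {
  model?: string;
  maxTokens?: number;
}

interface LLMProvider {
  name: string;
  complete(prompt: string, options?: CompletionOptions): Promise<string>;
}

// Mock implementations standing in for real SDK wrappers.
const anthropic: LLMProvider = {
  name: "anthropic",
  complete: async (prompt) => `[anthropic] ${prompt}`,
};

const openai: LLMProvider = {
  name: "openai",
  complete: async (prompt) => `[openai] ${prompt}`,
};

// The "config change": a registry keyed by a plain string, so rerouting
// traffic is a data edit, not a code rewrite.
const providers: Record<string, LLMProvider> = { anthropic, openai };

async function complete(providerName: string, prompt: string): Promise<string> {
  const provider = providers[providerName];
  if (!provider) throw new Error(`Unknown provider: ${providerName}`);
  return provider.complete(prompt);
}
```

Flipping the `providerName` string in your config is the entire migration; business logic never touches a vendor SDK directly.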

3. Data Sovereignty

Store your data in your database, not theirs.

  • Use MongoDB Atlas, DynamoDB, Postgres, or Redis — but you control the keys, backups, and access policies
  • Never rely on a vendor's proprietary storage format
  • Design for data export from day one

Why: If you need to leave, you can take your data with you. No vendor can hold it hostage.
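One way to make "design for data export from day one" concrete is a round-trippable, vendor-neutral export format. A minimal sketch using JSONL (the record shape is hypothetical):

```typescript
// Sketch: export records as JSONL — one JSON object per line, which is
// streamable, diffable, and ingestible by virtually any database.
type ExportRecord = { id: string; [key: string]: unknown };

function exportToJsonl(records: ExportRecord[]): string {
  return records.map((r) => JSON.stringify(r)).join("\n");
}

function importFromJsonl(jsonl: string): ExportRecord[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as ExportRecord);
}
```

If export and re-import are tested as part of CI, "we can take our data with us" stays true instead of decaying into an aspiration.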

4. Avoid High-Leverage Lock-In Primitives

Some services are designed to lock you in:

  • Vendor-specific agent frameworks (OpenAI Assistants API, AWS Lex)
  • Proprietary vector databases tied to one provider
  • Managed "AI platforms" that bundle compute, models, and workflow

The trap: They're easy to start with, but you can't leave without rewriting your entire app.

The Mu move: Use composable primitives instead:

  • Build your own agent runtime (simple state machines, function calling)
  • Use portable vector databases (Weaviate, Qdrant, pgvector) or self-hosted options — if you use a managed one like Pinecone, keep it behind an abstraction
  • Control your own orchestration layer
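A hand-rolled agent runtime sounds heavyweight, but at its core it can be a small state machine plus a tool registry. A minimal sketch (the states, tool shape, and step budget are all illustrative assumptions, not a prescribed design):

```typescript
// Sketch: a tiny agent runtime as an explicit state machine with
// function calling, instead of a vendor-specific agent framework.
type Tool = (input: string) => string;

type AgentState =
  | { kind: "thinking"; input: string }
  | { kind: "calling"; tool: string; input: string }
  | { kind: "done"; output: string };

function runAgent(
  input: string,
  tools: Record<string, Tool>,
  plan: (input: string) => { tool: string } | null, // stand-in for an LLM planning call
  maxSteps = 5
): string {
  let state: AgentState = { kind: "thinking", input };
  for (let step = 0; step < maxSteps; step++) {
    if (state.kind === "thinking") {
      const next = plan(state.input);
      state = next
        ? { kind: "calling", tool: next.tool, input: state.input }
        : { kind: "done", output: state.input };
    } else if (state.kind === "calling") {
      const tool = tools[state.tool];
      if (!tool) throw new Error(`Unknown tool: ${state.tool}`);
      state = { kind: "done", output: tool(state.input) };
    } else {
      return state.output;
    }
  }
  throw new Error("Agent exceeded step budget");
}
```

Because every transition is explicit and in your code, you can log it, test it, and swap the planning model without touching the runtime.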

5. Multi-Region by Design

Don't assume you'll always run in us-east-1.

  • Design for multi-region from day one (even if you only deploy to one)
  • Use CDNs for static assets
  • Separate your control plane (metadata, auth) from your data plane (user content)

Why: Geopolitical risk is real. Regulatory requirements change. Vendor outages happen. You want the option to move.


The Economic Logic of Mu

Why does Mu make business sense?

1. You Avoid the "Boiling Frog" Tax Increase

When you're locked into one vendor:

  • Year 1: "These AI API costs are so cheap! Ship fast!"
  • Year 2: "Prices went up 50%, but we're too deep to switch."
  • Year 3: "Our margins are getting crushed, but migrating would take 6 months."

With Mu:

  • Year 1: You're on the cheapest vendor for your workload
  • Year 2: Vendor A raises prices, you route 80% of traffic to Vendor B overnight
  • Year 3: You're still on the cheapest option, always
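The "reroute overnight" step above reduces to comparing a live price table. A sketch (vendor names and per-token prices are made up for illustration):

```typescript
// Sketch: pick the cheapest configured vendor for a workload from a
// price table your cost-monitoring pipeline keeps up to date.
type PriceTable = Record<string, number>; // vendor -> $ per 1M tokens

function cheapestVendor(prices: PriceTable): string {
  const entries = Object.entries(prices);
  if (entries.length === 0) throw new Error("No vendors configured");
  // Reduce to the entry with the lowest price; return its vendor name.
  return entries.reduce((best, cur) => (cur[1] < best[1] ? cur : best))[0];
}
```

When Vendor A raises prices, the next routing decision simply lands on Vendor B — no migration project required.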

2. You Negotiate from Strength

When a vendor knows you're locked in, they have pricing power.

When they know you can leave in a week, they give you better terms.

Real scenario:

  • "We're evaluating moving 70% of our inference to Anthropic unless you can match their pricing."
  • Suddenly, you get a volume discount.

You can only play this card if you actually can leave.

3. You Capture Upside from Model Improvements

AI models improve fast. If you're locked into one vendor, you're stuck with their release schedule and their roadmap.

With Mu:

  • Anthropic releases Claude 3.7 Sonnet with 2x better reasoning? Route your complex tasks there.
  • OpenAI releases GPT-5 with better code generation? Route your code tasks there.
  • Google releases a crazy cheap model for summarization? Route your summarization there.

You compose the best-of-breed for each task, always.
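Best-of-breed composition can be as simple as a routing table keyed by task type, updated the day a better model ships. A sketch (the task categories and model names are hypothetical):

```typescript
// Sketch: per-task model routing as plain data, not code.
type Task = "reasoning" | "code" | "summarization";

// Hypothetical routing table; edit it when a better model ships.
const routing: Record<Task, { vendor: string; model: string }> = {
  reasoning: { vendor: "anthropic", model: "best-reasoning-model" },
  code: { vendor: "openai", model: "best-code-model" },
  summarization: { vendor: "google", model: "cheap-summarizer" },
};

function modelFor(task: Task): string {
  const { vendor, model } = routing[task];
  return `${vendor}/${model}`;
}
```

Because the table is data, upgrading to a newly released model is a one-line change behind your abstraction layer.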


The Blind Spots Hyperscalers Cannot Fill

Hyperscalers are macro players optimizing for horizontal scale. That creates structural blind spots:

1. Vertical Depth

They can't build "AI for quantitative finance" or "AI for FDA submissions" or "AI for construction permit workflows."

They build platforms. You build solutions.

2. Opinionated UX

They optimize for "everyone can use this."

You optimize for "our users love this because it's built for them."

3. Controversial or Niche Markets

They avoid:

  • Politically sensitive domains (weapons, surveillance, content moderation)
  • Regulated industries with unique compliance needs
  • Markets too small for their scale

You can own these.

4. Speed of Iteration

They ship features on quarters or years.

You ship on days or weeks.

5. Customer Relationships

They have "accounts."

You have relationships.


Mu in Practice: A Reference Architecture

Here's a sketch of a Mu-native AI application stack:

Frontend

  • React/Next.js (portable to any host)
  • Deployed via SST to CloudFront + S3 (but could be Vercel, Netlify, or self-hosted)

Backend

  • Node.js/Python API (portable)
  • Deployed as Lambda functions (but could be Cloud Run, Azure Functions, or Docker containers)
  • Infrastructure-as-code via SST or Pulumi

Data Layer

  • MongoDB Atlas or DynamoDB (your choice, not theirs)
  • You control backups, encryption keys, access policies

AI Layer

  • LLM Router: Unified interface to Anthropic, OpenAI, Google, Bedrock, xAI, Ollama
  • Agent Runtime: Custom state machines, function calling, tool use
  • Memory System: Custom vector store + metadata in your database
  • RAG Pipeline: Your documents, your embeddings, your retrieval logic

Observability

  • Custom logging (not locked into CloudWatch or Datadog)
  • Cost tracking per vendor, per model, per task
  • Latency and quality monitoring

The key: Every layer is portable. You can move to a new cloud provider, a new AI vendor, or self-hosted infrastructure without rewriting your app.
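The observability layer's per-vendor cost tracking might start as nothing more than aggregating your own call logs, so spend data never lives only in one provider's billing dashboard. A sketch (the record shape is an assumption):

```typescript
// Sketch: vendor-neutral cost tracking from your own call logs.
interface CallRecord {
  vendor: string;
  model: string;
  tokens: number;
  costUsd: number;
}

function costByVendor(calls: CallRecord[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const c of calls) {
    totals[c.vendor] = (totals[c.vendor] ?? 0) + c.costUsd;
  }
  return totals;
}
```

The same aggregation keyed by model or task feeds the routing decisions described earlier: you can only route to the cheapest vendor if you measure what each one actually costs you.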


Common Objections to Mu

"Isn't this overengineering?"

No. You're not building everything from scratch. You're using vendor services — you're just not locking yourself in.

The cost of abstraction is small. The cost of lock-in is existential.

"Won't I miss out on vendor-specific features?"

Yes, sometimes. But vendor-specific features are often lock-in traps.

If a feature is truly essential and has no alternative, you can use it — just limit the blast radius (e.g., use it in one module, not across your entire codebase).

"Don't I need deep expertise in every vendor?"

No. You need expertise in your domain and in composable primitives.

Vendor APIs change. Primitives (HTTP, databases, vector search, function calling) are stable.

"What if I'm too small to matter?"

Mu is more important when you're small.

Big companies can negotiate. Small companies get rate-limited, repriced, or ignored.

Mu gives you strategic optionality even at small scale.


When NOT to Use Mu

Mu has costs. It's not always the right strategy.

Don't use Mu if:

  1. You're doing a prototype or throwaway project. Just ship fast. Lock-in doesn't matter for something you'll kill in 3 months.

  2. Your entire business is reselling a vendor's service. If you're building "ChatGPT for lawyers" and it's just a thin wrapper, you're not escaping lock-in. (But you should probably rethink your business.)

  3. You have infinite capital and the vendor will never care about you. If you're AWS's biggest customer, you have leverage without Mu. (But this applies to maybe 10 companies.)

For everyone else: Mu is insurance. It's optionality. It's the difference between being a partner and being a tenant.


The Mu Mindset

More than a technical architecture, Mu is a strategic posture:

  • Cooperation without dependence
  • Leverage without lock-in
  • Scale without surrender

Hyperscalers are powerful. They're running a brilliant macro trade. They're going to win the infrastructure wars.

But they don't have to own your business.

Use their muscles. Own your brain. Build your moat where they structurally cannot follow.

That's Mu.



Want to talk Mu strategy? Find me on Bluesky or email: erik@bethke.com

Published: November 17, 2025 3:50 PM

Last updated: November 17, 2025 3:51 PM
