How Claude Code reveals the next paradigm of cognitive work - and why the cloud is about to become the consumption layer, not the workbench

Thursday, February 5th, 2026
Every 15-plus-year veteran software developer I know, myself included, is absolutely in love with Claude Code.
We all have at least three sessions running at once, and we get together quietly and tee-hee and smile.
Projects we have always wanted to build, we now quietly and casually crush.
Weekends, nights, holidays — CC.
If anything, overworking is the new problem.
The principal full-stack systems engineer? Their god status, now augmented by CC, is unassailable.
Even as CC and other AI systems gain new powers, these full-stack full-system engineers will only tackle more and more challenging problems.
The ceiling rises with the floor.
I do believe there will be a vast destruction of lower-end, light-value-add software engineering and software-engineering-adjacent roles: junior devs, QA, project managers, product managers, engineering managers, even directors of engineering. All of these roles, as they are currently practiced, will become obsolete.
More positively though, those that are willing to embrace the upskilling will be further augmented and do Moar™ than they ever conceived. There might also be much more "sidecar" just-in-time bespoke personalized training — learning exactly what you need when you need it, guided by AI that understands your specific context.
The new economy will still have work for humans who can add real value.
The future of software, AI, and the whole economy is actually here and available for anyone to see with Anthropic's Claude Code — and notably, not from the current world leader of AI products, OpenAI.
First, what is Claude Code?
At the heart of Claude Code is an application installed on your own local machine — not a web or mobile application. This is the first critical step. ChatGPT is a web app.
Being local on your machine delivers several phase-shift advantages; once you experience them, you cannot willingly go back to a web-based product. This is coming from a guy whose flagship product, Bike4Mind, started as a fast-follow of ChatGPT: a web-based AI assistant.
Here's what I realized I was doing without having words for it:
The Old Model (Web 2.0 SaaS): you log into someone else's web app and work inside it, as a tenant. Your data, your workflow, and your output all live on their servers.
The New Model (Agentic Edge): you pull artifacts down to your own machine, do the creative work locally with AI that has full system access, and push the finished result up for distribution.
The key insight: The cloud inverts from being where work happens to being where finished work is published.
This is why Claude Code feels different than ChatGPT — it's not just "better AI." It's architecturally positioned correctly for this shift. ChatGPT is still a destination you visit. Claude Code is a power tool that lives in your environment.
The web becomes the distribution layer — APIs for other agents to consume, traditional UX for humans to view finished work. But the creative act, the synthesis, the building — that happens at the edge, locally, with full power.
Let me tell you about something I did recently that crystallized this for me.
I pay Carta $3,000 per year to manage my company's cap table. It's a web app. I log in, I click around, I'm a tenant in their system.
One weekend I decided to see if I could replace it.
Here's what the workflow looked like:
I took 68 screenshots of every screen in Carta — every dropdown, every modal, every edge case I could find. I exported my actual cap table data to Excel files. I pulled all of this into a local directory.
I pointed CC at those screenshots and said "reverse engineer this."
What emerged was remarkable. CC analyzed the screenshots and produced a comprehensive technical design document — complete system architecture, database schemas, API designs, all 10 major functional modules documented. The kind of document that would take a senior architect weeks to produce.
Then we built it. Real code. Real database. Real deployment.
The result: a working cap table management system deployed to AWS, managing my actual Million on Mars equity data, saving me $2,940 per year. (Carta: $3,000/year. My replacement: ~$60/year in AWS costs. The math is not subtle.)
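The setup for a teardown like this is mundane, which is the point. Here is a sketch of how the local workbench and kickoff might look; the directory names and the `claude` prompt are illustrative stand-ins, not my exact commands:

```shell
# Workbench for the teardown: everything lives in one local directory
mkdir -p carta-teardown/screenshots carta-teardown/exports
cd carta-teardown

# (1) Pull: save the app screenshots into ./screenshots and the
#     exported cap table .xlsx files into ./exports
# (2) Work: point Claude Code at the directory, e.g.
#     claude "Read every screenshot in ./screenshots and the data in
#             ./exports, then reverse engineer a technical design
#             document for this application."
# (3) Push: deploy the finished system to AWS from this same directory

ls -d screenshots exports
```

The pull and push steps touch the web; everything in between is local files and a local agent.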
This wasn't "AI helping me code." This was something categorically different.
At no point was I constrained by a chat window's context limit. At no point did I have to copy-paste code back and forth. At no point was the AI a visitor in my environment; it was a resident, with the same access I have.
I was doing agentic-local creation without having a name for it.
The web (Carta) was the source of artifacts I pulled down. The web (AWS) was the destination for the finished product I pushed up. But the actual work — the analysis, the design, the implementation, the iteration — all happened locally with full system access.
This pattern isn't limited to code. It's the future of all high-value cognitive work.
Research: Pull papers, datasets, sources → Synthesize, analyze, connect with AI → Push finished analysis, visualizations, reports
Design: Pull inspiration, brand assets, constraints → Iterate, generate, refine with AI → Push final deliverables
Writing: Pull research, interviews, data → Structure, draft, edit with AI → Push published content
Legal: Pull precedents, contracts, regulations → Analyze, draft, review with AI → Push finished documents
Finance: Pull market data, filings, models → Analyze, model, project with AI → Push reports and recommendations
The common pattern: Pull raw materials down, do creative synthesis locally with AI collaborators who have real system access, push finished artifacts up.
Web 1.0 was read-only. We consumed content created by others.
Web 2.0 made everyone a creator — but we created inside platforms. Your tweets live on Twitter. Your docs live in Google. Your code lives in GitHub's web editor. You're always a tenant.
Web 3.0 (the crypto version) promised ownership but delivered speculation.
What's actually emerging is something different:
The web as artifact repository and distribution layer. You pull resources down, you work locally with full power, you push finished work up. The web stores and serves; your local machine creates.
This is closer to how pre-web computing worked — but now with AI collaborators that make local creation enormously more powerful, and with web infrastructure that makes distribution trivially easy.
I've used every AI coding tool. GitHub Copilot. Cursor. ChatGPT with code interpreter. Gemini. Various open-source attempts.
Claude Code is different because Anthropic understood the architectural insight:
The AI needs to be a resident of your system, not a visitor.
Copilot — Autocomplete. Suggesting tokens, not understanding your project.
Cursor — IDE-embedded. Sandboxed — sees only what the IDE sees.
ChatGPT — Web app. You visit it, copy-paste to it — it's a destination.
Claude Code — Local terminal. Full file system, shell, git — it's there.
This architectural choice has compounding advantages.
Will others copy this architecture? Certainly. OpenAI will ship a local tool. Google will ship a local tool. But Anthropic got there first and has been iterating while others are still treating the web browser as the primary interface.
There will likely be a GUI version of Claude Code soon. That's fine.
The key insight isn't "CLI good, GUI bad" — it's:
Local-first with real system access.
A local GUI application with file system access, shell integration, and git awareness would preserve the architectural advantages. The terminal is incidental; the locality is essential.
What won't work is trying to bolt these capabilities onto a web app. The security model of browsers prevents the kind of deep system integration that makes this paradigm powerful. You can't give a web page real file system access, real shell access, real environment variable access. The browser sandbox exists for good reasons, but it means browser-based AI will always be a visitor, never a resident.
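The gap is easy to demonstrate. Everything in the snippet below is routine for a local process and flatly unavailable to JavaScript running inside a browser page's sandbox; the specific commands are just examples:

```python
import os
import subprocess
from pathlib import Path

# Real file system access: enumerate and read anything in the project
project_files = [p.name for p in Path(".").iterdir()]

# Real environment variable access: credentials, config, toolchain state
home = os.environ.get("HOME") or os.environ.get("USERPROFILE")

# Real shell access: run any installed tool and capture its output
result = subprocess.run(["echo", "resident, not visitor"],
                        capture_output=True, text=True)

print(result.stdout.strip())
```

A browser-based assistant can only ever see what you explicitly upload to it; a local agent sees what you see.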
If you're a knowledge worker and you're still doing all your work in web applications — logging into SaaS tools, working in browser tabs, treating the cloud as your workbench — you're about to be outcompeted by people who figured out the pull-work-push pattern.
William Gibson's famous quote applies perfectly. The future of cognitive work is already here. A small number of people — mostly veteran engineers, but increasingly spreading to other knowledge workers — are already operating in the new paradigm.
They're not waiting for permission. They're not waiting for someone to build them a tool. They're pulling down artifacts, working locally with AI collaborators, and pushing up finished work that wasn't possible before.
The 15+ year veterans I know who are tee-heeing about Claude Code? They're not just having fun (though they are). They're living in 2030 while everyone else is still logging into web apps.
The Carta teardown I did? That's not a stunt. That's what Saturdays look like now.
Pull, work, push.
Create things that weren't possible before.
Ship.
Erik Bethke is a 30-year veteran of the software industry and founder of Bike4Mind. He currently has 4 Claude Code sessions running.
Published: February 6, 2026 3:48 AM
Last updated: February 10, 2026 3:10 AM