11/27/2025

The Setup: I'm running two Claude Code instances simultaneously - one in my portfolio repo, one in my blog writing session.
The Problem: Claude Code #2 hits a 500 error trying to upload images. sharp module not found in Lambda.
The Solution: I ask Claude Code #1 to analyze my production B4M lumina5 repo, extract the presigned URL pattern, implement it in portfolio, deploy, and test.
The Result: Image uploads working perfectly in minutes. Zero Lambda processing. Pure elegance.
The Meta-Insight: This is what multi-agent collaboration looks like in practice.
Let me set the stage. I have three active participants in this workflow:
- Me (Erik), the human coordinator
- Claude Code #1 in /erikbethkedotcom (portfolio repo)
- Claude Code #2 in the blog writing session

This isn't science fiction. This is Wednesday afternoon.
Scene: I'm in Claude Code #1 (the portfolio repo instance). We've just finished building the blog platform features - newsletter system, related posts, search, tags, the works.
Me: "CC1, I need you to help my other Claude Code instance. They're working on a blog post and need to upload images. Can you explain how the image upload API works?"
CC1: "Sure! Here's the /api/posts/upload endpoint. It uses sharp for image processing, uploads to S3, generates variants..."
I copy the explanation and paste it into Claude Code #2's session.
What's happening here:
This is the current state of multi-agent collaboration: Manual. Synchronous. Human-mediated.
Scene: Claude Code #2 tries to use the image upload API.
```
[BlogImageUpload] Response status: 500
[BlogImageUpload] Upload error: Error: Internal Server Error
```
CC2 (in logs): "Error: Cannot find module 'sharp'"
The diagnosis: sharp is a native C++ module, and the Lambda bundler won't ship it cleanly.

CC2 tries:

- Adding `serverExternalPackages` to `next.config`
- Tweaking `open-next.config.ts`

Me (observing): "This is a bundling problem. We're fighting the toolchain. There must be a better way."
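For context, the workaround CC2 attempted looks roughly like this — a sketch from memory, not the exact config from the session:

```typescript
// next.config.ts — sketch of the attempted fix (exact config not shown in the session)
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  // Tell Next.js to leave sharp out of the server bundle and require()
  // it at runtime — which still fails on Lambda if the native binary
  // never makes it into the deployment package.
  serverExternalPackages: ['sharp'],
};

export default nextConfig;
```

This treats the symptom (bundling) rather than the disease (doing native image processing inside Lambda at all).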
Here's where it gets interesting.
Me (to CC1): "Look at /Users/erikbethke/Desktop/bike4mind/lumina5 - that's our production B4M system. Find how we handle presigned S3 URLs. I want to match patterns."
CC1: "On it. Let me search for presigned URL implementations..."
CC1 finds:
- `/packages/client/pages/api/files/presigned-url.ts` - GET endpoint for viewing
- `/packages/client/pages/api/files/generate-presigned-url.ts` - POST endpoint for uploading
- `/packages/client/app/utils/filesAPICalls.ts` - Client-side upload flow

The B4M Pattern:
```
// 1. Client requests presigned URL
POST /api/files/generate-presigned-url
{
  fileName: "image.png",
  fileSize: 409600,
  mimeType: "image/png"
}

// 2. Server generates S3 presigned PUT URL
//    Creates DB record with status: 'pending'
//    Returns: { url, fileId, fileKey }

// 3. Client uploads DIRECTLY to S3
await axios.put(presignedUrl, file, {
  headers: { 'Content-Type': file.type }
})

// 4. No Lambda processing!
// 5. No sharp dependency!
// 6. No size limits!
```
CC1: "This is brilliant. They're not processing images in Lambda at all. Direct S3 upload using presigned URLs."
Me: "That's the pattern. Implement it."
CC1 creates /api/posts/images/presigned-url/route.ts:
```typescript
import { NextRequest, NextResponse } from 'next/server';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({});

export async function POST(request: NextRequest) {
  const { fileName, fileSize, mimeType, postId } = await request.json();

  // Validate inputs
  if (!fileName || !fileSize || !mimeType) {
    return NextResponse.json({ error: 'Missing required fields' }, { status: 400 });
  }

  // Generate unique key
  const timestamp = Date.now();
  const randomId = Math.random().toString(36).substring(2, 8);
  const fileKey = postId
    ? `posts/${postId}/${timestamp}-${randomId}-${fileName}`
    : `uploads/${timestamp}-${randomId}-${fileName}`;

  // Create presigned URL for PUT operation
  const command = new PutObjectCommand({
    Bucket: process.env.BLOG_IMAGES_BUCKET,
    Key: fileKey,
    ContentType: mimeType,
  });

  const presignedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 600, // 10 minutes
  });

  return NextResponse.json({
    success: true,
    url: presignedUrl, // Upload URL (PUT)
    imageUrl: `https://${process.env.BLOG_IMAGES_BUCKET}.s3.amazonaws.com/${fileKey}`,
    key: fileKey,
    expiresIn: 600,
  });
}
```
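One hardening step worth considering — my addition, not part of the original endpoint: validate the declared size and MIME type before signing anything. The limits and allowed types below are hypothetical.

```typescript
// Hypothetical guardrails — the cap and allowed types are my assumptions,
// not values from the actual endpoint.
const MAX_BYTES = 10 * 1024 * 1024; // 10MB cap enforced at signing time
const ALLOWED_TYPES = new Set(['image/png', 'image/jpeg', 'image/webp', 'image/gif']);

// Returns an error message, or null if the upload request looks sane.
function validateUploadRequest(
  fileName: string,
  fileSize: number,
  mimeType: string
): string | null {
  // Reject path traversal and nested keys in the client-supplied name
  if (!fileName || fileName.includes('/') || fileName.includes('..')) {
    return 'Invalid file name';
  }
  if (!Number.isFinite(fileSize) || fileSize <= 0 || fileSize > MAX_BYTES) {
    return `File size must be between 1 byte and ${MAX_BYTES} bytes`;
  }
  if (!ALLOWED_TYPES.has(mimeType)) {
    return `Unsupported type: ${mimeType}`;
  }
  return null;
}
```

Caveat: S3 does not enforce the declared size on a plain presigned PUT, so this only filters honest clients; a post-upload server-side check is still worth considering.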
What's different from the broken approach:
| Old (Lambda Processing) | New (Presigned URLs) |
|---|---|
| Client → Lambda → S3 | Client → S3 (direct) |
| sharp dependency required | No dependencies |
| 10MB Lambda payload limit | No size limits |
| Complex bundling | Simple, clean |
| Lambda processing time | Instant upload |
| Native module hell | Pure JavaScript |
The beauty: This matches exactly how B4M lumina5 handles file uploads in production. We're not inventing - we're replicating a proven pattern.
CC1 deploys:
```bash
npx sst deploy --stage erikbethke
```
Build output:
```
✓ Compiled successfully
Route: ƒ /api/posts/images/presigned-url
✓ Complete

PortfolioWeb: https://erikbethke.com
```
CC1 tests Step 1: Generate presigned URL
```bash
curl -X POST https://erikbethke.com/api/posts/images/presigned-url \
  -H "X-API-Key: XXX" \
  -H "Content-Type: application/json" \
  -d '{"fileName":"test.png","fileSize":409600,"mimeType":"image/png","postId":"test"}'
```

Response:

```json
{
  "success": true,
  "url": "https://portfolio-erikbethke-blogimagesbucket-xxx.s3.us-east-1.amazonaws.com/posts/test/1764274187512-3j3r55-test.png?X-Amz-Algorithm=...",
  "imageUrl": "https://portfolio-erikbethke-blogimagesbucket-xxx.s3.amazonaws.com/posts/test/1764274187512-3j3r55-test.png",
  "key": "posts/test/1764274187512-3j3r55-test.png",
  "expiresIn": 600
}
```
✅ Presigned URL generation: Working
CC1 tests Step 2: Upload directly to S3
```bash
curl -X PUT "$PRESIGNED_URL" \
  -H "Content-Type: image/png" \
  --upload-file "ClaudeCodeMyProjectManager.png"
```

Response:

```
< HTTP/1.1 200 OK
< ETag: "9eb8d95953e055a38d1e5bdbd78bcda5"
< x-amz-server-side-encryption: AES256
```
✅ Direct S3 upload: Working (409KB uploaded)
CC1 tests Step 3: Verify image accessible
```bash
curl -I "https://portfolio-erikbethke-blogimagesbucket-xxx.s3.amazonaws.com/posts/test/1764274187512-3j3r55-test.png"
```

Response:

```
HTTP/1.1 200 OK
Content-Type: image/png
Content-Length: 409893
```
✅ Image publicly accessible: Working
Total time from implementation to working: ~5 minutes.
Total Lambda errors: Zero.
Total sharp dependencies: Zero.
Let me paint the picture of what actually happened:
```
User (Erik)
│
├─> Claude Code #1 (Portfolio Repo)
│   │
│   ├─> "Explain image upload API"
│   └─> Response: Uses sharp, processes in Lambda
│
├─> Copy explanation to Claude Code #2
│
├─> Claude Code #2 (Blog Writing Session)
│   │
│   ├─> Implements image upload
│   ├─> Tests endpoint
│   └─> ERROR: 500 - sharp module not found
│
├─> Claude Code #2 attempts fixes
│   ├─> Try serverExternalPackages config
│   ├─> Try open-next.config.ts
│   └─> Still failing
│
├─> User observes pattern: "This is a bundling problem"
│
├─> Claude Code #1
│   │
│   └─> "Look at B4M lumina5 repo, find presigned URL pattern"
│
├─> Claude Code #1 mines B4M patterns
│   ├─> Searches for "presigned" in lumina5
│   ├─> Reads /api/files/generate-presigned-url.ts
│   ├─> Reads /api/files/presigned-url.ts
│   ├─> Reads client upload flow in filesAPICalls.ts
│   └─> Extracts pattern: Client → Presigned URL → Direct S3
│
├─> Claude Code #1 implements pattern
│   ├─> Creates /api/posts/images/presigned-url/route.ts
│   ├─> Matches B4M architecture exactly
│   └─> No Lambda processing, no sharp dependency
│
├─> Claude Code #1 deploys
│   └─> npx sst deploy --stage erikbethke
│
├─> Claude Code #1 tests complete flow
│   ├─> Generate presigned URL: ✅
│   ├─> Upload to S3: ✅ (409KB)
│   └─> Verify accessible: ✅
│
└─> SUCCESS: Pattern replicated across repos
```
This workflow reveals several profound insights:
I'm the bridge between two Claude Code instances:
The future: AI agents that can directly communicate, share context, and collaborate without human mediation.
When CC2 hit the wall with the sharp bundling issue, I didn't ask it to "try harder" or "figure it out."
I pointed CC1 at production code (B4M lumina5) and said: "Find the pattern."
Production code is empirical evidence of what works. It's survived real users, real load, real edge cases.
Pattern mining from production code is more reliable than invention.
By keeping them in separate sessions, they stay focused. No context pollution.
The tradeoff: Manual coordination overhead (me).
My role in this workflow:
I'm not writing code. I'm conducting the orchestra.
The killer insight: B4M lumina5 already solved this problem.
We have a production-proven implementation that has already survived real users, real load, real edge cases.
Why reinvent when you can replicate?
CC1 didn't "solve" the image upload problem. It mined the solution from existing code and ported the pattern to a new repo.
Let's appreciate what the B4M presigned URL pattern gives us:
The old way (Lambda processing):

```
Client
  ↓ (multipart/form-data, 10MB limit)
Lambda Function
  ↓ (requires sharp, native modules, complex bundling)
Process Image
  ↓ (resize, optimize, generate variants)
S3 Upload
  ↓
Return URL
```
Problems: a native sharp dependency, complex bundling, the 10MB Lambda payload cap, and Lambda processing time (and cost) on every upload.
The new way (presigned URLs):

```
Client → Request Presigned URL
  ↓
Lambda (simple auth + URL generation)
  ↓
Client → Direct S3 Upload (no limits!)
  ↓
Image Available Immediately
```
Benefits: no dependencies, no size limits, instant uploads, and a Lambda that only does auth plus URL generation.
This is architectural elegance: Moving complexity from Lambda (expensive, constrained) to S3 (cheap, unlimited).
Let's quantify the value:
Total: ~10 minutes (after recognizing the pattern)
There's a delicious recursive irony here:
This is AI writing about AI helping AI, deployed to production, serving real users.
The ouroboros of knowledge work.
If you're building systems with multiple AI agents, here's what this workflow teaches:
Keep agents focused (single responsibility), but provide coordination mechanisms (human or automated).
Good: focused agents, each with a single responsibility, plus a coordinator routing work between them.
Bad: one agent juggling every concern, with context pollution bleeding across tasks.
Don't ask agents to "solve problems from scratch."
Point them at production code and say: "Find the pattern."
Empirical evidence > theoretical solutions.
When possible: mine production repos, extract the proven pattern, and replicate it instead of inventing.
This is faster, safer, and more reliable than invention.
Agents in isolation = limited knowledge.
Create mechanisms to share context and findings between agents.
Right now, I'm that bridge. Eventually, this should be automated.
CC1 didn't "write the most elegant code."
CC1 shipped a working solution in 10 minutes by mining an existing pattern.
Outcome > Process.
Imagine this workflow, but fully automated:
```
User: "My image upload is broken. Fix it."
        ↓
Agent Orchestrator
├─> Agent 1: Diagnose error (Lambda logs)
├─> Agent 2: Search production repos for similar solutions
├─> Agent 3: Implement fix based on Agent 2's findings
├─> Agent 4: Test implementation
└─> Agent 5: Deploy + validate

Result: Fixed in minutes, zero human intervention
```
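Squint and that orchestrator is just async function composition. A toy sketch — every agent name and type here is invented for illustration:

```typescript
// Toy sketch of a hypothetical orchestrator: each agent is an async
// function that enriches a shared context. All names are mine.
type Context = Record<string, string>;
type Agent = (ctx: Context) => Promise<Context>;

// Run agents in sequence, threading the growing context through.
async function orchestrate(agents: Agent[], initial: Context): Promise<Context> {
  let ctx = initial;
  for (const agent of agents) {
    ctx = await agent(ctx);
  }
  return ctx;
}

// Stub agents mirroring the five roles in the diagram.
const diagnose: Agent = async (ctx) => ({ ...ctx, error: 'sharp module not found' });
const searchRepos: Agent = async (ctx) => ({ ...ctx, pattern: 'presigned-url' });
const implement: Agent = async (ctx) => ({ ...ctx, code: `route using ${ctx.pattern}` });
const verify: Agent = async (ctx) => ({ ...ctx, tests: 'passed' });
const deploy: Agent = async (ctx) => ({ ...ctx, status: 'deployed' });
```

The real version needs retries, branching, and human checkpoints; the point is only that the coordination itself is mechanically simple.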
We're not there yet. But this manual workflow shows the path forward.
If you want to replicate the B4M presigned URL pattern in your app:
```typescript
// app/api/images/presigned-url/route.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({ region: 'us-east-1' });

export async function POST(request: Request) {
  const { fileName, fileSize, mimeType } = await request.json();

  // Generate unique key
  const key = `uploads/${Date.now()}-${Math.random().toString(36).substring(2)}-${fileName}`;

  // Create presigned URL
  const command = new PutObjectCommand({
    Bucket: process.env.BUCKET_NAME,
    Key: key,
    ContentType: mimeType,
  });

  const presignedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 600, // 10 minutes
  });

  return Response.json({
    url: presignedUrl, // Upload URL (PUT)
    imageUrl: `https://${process.env.BUCKET_NAME}.s3.amazonaws.com/${key}`,
    key,
  });
}
```
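One tweak worth considering — my suggestion, not part of the original: `Math.random()` keys can collide under load, while `crypto.randomUUID()` (Node 19+, modern browsers) is collision-resistant. The helper name below is mine:

```typescript
import { randomUUID } from 'node:crypto';

// Collision-resistant variant of the key generation above.
// postId is optional, mirroring the posts/ vs uploads/ split.
function makeUploadKey(fileName: string, postId?: string): string {
  const unique = `${Date.now()}-${randomUUID()}`;
  return postId
    ? `posts/${postId}/${unique}-${fileName}`
    : `uploads/${unique}-${fileName}`;
}
```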
```typescript
// Client code
async function uploadImage(file: File) {
  // 1. Get presigned URL
  const response = await fetch('/api/images/presigned-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: file.name,
      fileSize: file.size,
      mimeType: file.type,
    }),
  });

  const { url, imageUrl } = await response.json();

  // 2. Upload directly to S3
  await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });

  // 3. Use imageUrl in your app
  return imageUrl;
}
```
```typescript
// sst.config.ts
const bucket = new sst.aws.Bucket("Images", {
  cors: {
    allowMethods: ["GET", "PUT", "POST"],
    allowOrigins: ["*"],
    allowHeaders: ["*"],
  },
  access: "public",
});

const site = new sst.aws.Nextjs("Site", {
  environment: {
    BUCKET_NAME: bucket.name,
  },
  link: [bucket],
});
```
That's it. No sharp. No Lambda processing. No bundling hell.
Running multiple Claude Code instances is like conducting an orchestra:
The violins don't talk to each other directly. They follow the conductor.
The music emerges from coordination, not communication.
Right now, I'm the conductor. But imagine a world where the agents coordinate directly: sharing context, delegating tasks, and validating each other's work.
We're not building AGI. We're building collaborative intelligence.
And sometimes, that means replicating a proven pattern from one repo to another.
Simple. Effective. Elegant.
Workflow Duration: ~40 minutes total
Agents Involved: 2 Claude Code instances
Repos Accessed: 2 (portfolio, B4M lumina5)
Pattern Sources: 3 B4M files analyzed
Implementation Time: 10 minutes (after pattern identified)
Lines of Code: ~80 (presigned URL endpoint)
Tests Passed: 3/3 (generate URL, upload, verify)
Lambda Errors: 0 (down from continuous 500s)
Image Upload Size: 409KB (tested)
Cost Savings: ~$0/month (vs complex bundling)
Knowledge Transfer: B4M pattern → Portfolio app
Blog Posts Generated: 2 (this one + the one CC2 is writing)
Meta-Recursion Level: Deep
Want to see the code? Check out the portfolio repo or the live site.
Running multiple Claude Code instances? Hit me up - I'd love to hear about your multi-agent workflows.
Subscribe below for more posts about AI-assisted development, architectural patterns, and meta-cognitive recursion loops.