How we built an AI skill that turns a CPO's HTML prototype into an 820-line implementation spec in 20 minutes — and why reading the source matters more than clicking the buttons.
TL;DR — You do not need to read this article. Copy it. Hit the markdown button at the top of this page, select all, copy, open Claude Code in your terminal, and paste the whole thing in. Claude will read it, build the skill, and you will be analyzing prototypes in five minutes. This article exists so you understand what it does and why. But the fastest path is: copy, paste, go.
When your CPO sends you a prototype URL, what do you do with it?
If you are like most engineering teams, you open it in a browser, click around for ten minutes, jot some notes in a doc, and then spend the next three meetings arguing about what the prototype actually specifies. Half the team saw the modal that pops up when you click the vendor card. The other half missed it entirely. Nobody noticed the hardcoded data model buried in the JavaScript that defines the exact database schema the CPO had in mind.
We built a tool that changes this. In about 20 minutes, it produces an 820-line implementation-ready specification document from a single prototype URL. Here is how it works and what we learned.
Our CPO, Deepak Surana, builds prototypes on Netlify. They are sophisticated single-page applications — the first one we analyzed was 14,789 lines of HTML, CSS, and JavaScript in a single file. Twelve navigable views. Hidden modals triggered by onclick handlers. Canned AI responses with keyword routing. ROI calculators with real math. A gamification system tracking an "Insight Score" from 30 to 100.
A human clicking through that prototype in a browser captures maybe 60% of what is there. The other 40% — the hidden modals, the hardcoded data models, the calculation formulas, the simulated AI responses — lives in the source code where no one thinks to look.
This is the iceberg problem. The visible surface of a prototype is impressive. The invisible depth is where the actual specification lives.
We built what we call the "Explore Prototype" skill for Claude Code. It is a structured set of instructions that tells Claude how to systematically analyze any prototype URL using Playwright (headless Chromium) and produce a comprehensive specification document.
The skill runs in seven steps:
Playwright loads the page and discovers every navigable target — links, buttons, data-page attributes, onclick handlers. For Deepak's prototype, this revealed 12 distinct views organized into Coverage (Dashboard, Your Vendors, 4 Practice Areas) and Tools (Evaluations, Competitive Intel, ROI Calculator, Sensitivity Analysis, Scenario Manager) plus an Onboarding flow.
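The discovery pass is easy to picture. The skill's actual implementation drives Playwright, but the same idea can be sketched as a plain scan over the downloaded HTML source — the function name and the exact patterns here are illustrative, not the skill's real code:

```javascript
// Sketch: discover navigation targets in a single-file prototype.
// The real skill does this via Playwright; this version scans the
// raw HTML with regexes to show what it is looking for.
function discoverNavTargets(html) {
  const targets = new Set();
  // Anchor hrefs (bare fragments and external links would be filtered later)
  for (const m of html.matchAll(/href="([^"#][^"]*)"/g)) targets.add(`link:${m[1]}`);
  // SPA view switches declared via data-page attributes
  for (const m of html.matchAll(/data-page="([^"]+)"/g)) targets.add(`page:${m[1]}`);
  // Inline onclick handlers, which often hide navigation and modals
  for (const m of html.matchAll(/onclick="([a-zA-Z_$][\w$]*)\(/g)) targets.add(`onclick:${m[1]}`);
  return [...targets];
}
```

Each discovered target then becomes an item on the skill's navigation worklist.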
For each navigable view, the skill clicks into it, waits for content to render, takes both viewport and full-page screenshots, extracts all text content, and catalogs every UI component — cards, forms, charts, tables, modals, buttons, toggles.
This is where most prototype analysis stops. And this is where ours gets interesting.
This is the step that captures the other 40%. The skill downloads the full HTML source and reads it as code, not as a rendered page. It searches for:
Hardcoded data models. When Deepak puts { name: 'AWS', score: 88.8, tier: 'ELITE' } in his JavaScript, that is not placeholder data. That is the database schema he envisions. The skill extracts every entity — eight vendors with composite scores across six dimensions, market sizing data, survey respondent breakdowns, analyst profiles.
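A minimal sketch of that extraction, assuming the prototype's entities are simple single-level object literals with a `name` key (the function and the exact regexes are illustrative, not the skill's actual code):

```javascript
// Sketch: pull hardcoded entity objects out of prototype JavaScript.
// Objects like { name: 'AWS', score: 88.8, tier: 'ELITE' } are treated
// as the implied database schema.
function extractEntities(source) {
  const entities = [];
  // Match simple single-level object literals containing a name: '…' key
  const objRe = /\{[^{}]*name:\s*'[^']*'[^{}]*\}/g;
  for (const [literal] of source.matchAll(objRe)) {
    const fields = {};
    // Capture each key with either a quoted string or a numeric value
    for (const m of literal.matchAll(/(\w+):\s*(?:'([^']*)'|([\d.]+))/g)) {
      fields[m[1]] = m[2] !== undefined ? m[2] : Number(m[3]);
    }
    entities.push(fields);
  }
  return entities;
}
```

Running this over the prototype's script section yields the entity list that seeds the data-model portion of the spec.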
Hidden modals. The skill searches for every onclick handler that matches patterns like openModal, showModal, or toggle. Then it executes each one via Playwright, screenshots the result, extracts the content, and closes it. Deepak's prototype had several modals that were invisible from normal navigation.
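Finding those triggers is the cheap part; a hedged sketch of the pattern match (the pattern list is illustrative):

```javascript
// Sketch: enumerate modal triggers that never appear in normal navigation.
// The skill would then fire each one via Playwright, screenshot the
// result, extract its content, and close it.
function findModalTriggers(html) {
  const triggers = new Set();
  // Capture the full call expression so it can be executed later
  const re = /onclick="((?:openModal|showModal|toggle)\w*\([^)]*\))"/g;
  for (const m of html.matchAll(re)) triggers.add(m[1]);
  return [...triggers];
}
```

Each captured call expression can then be executed in the page context (Playwright's `page.evaluate` accepts a string expression) before the screenshot is taken.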
Simulated AI responses. The prototype had an "Ask Futurum AI" feature with canned responses routed by keywords. The skill finds the response mapping table in the JavaScript, documents every keyword-to-response route, and captures the full response text. These canned responses define the quality bar — they show exactly what Deepak expects the real AI to produce.
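The routing pattern the skill looks for is simple to picture. Here is a hedged sketch of what such a table might look like in a prototype's JavaScript — the keywords and response text are invented for illustration, not Deepak's actual content:

```javascript
// Sketch of a keyword-routed canned-response table, the pattern the
// skill searches for. Keywords and responses here are invented.
const cannedResponses = [
  { keywords: ['roi', 'cost'], response: 'Based on your inputs, the 3-year ROI is...' },
  { keywords: ['vendor', 'compare'], response: 'Across the six scoring dimensions...' },
];
const fallback = 'I can help with ROI, vendors, and market data.';

function routeQuery(query) {
  const q = query.toLowerCase();
  // First route whose keywords appear anywhere in the query wins
  const hit = cannedResponses.find(r => r.keywords.some(k => q.includes(k)));
  return hit ? hit.response : fallback;
}
```

The skill documents every route in a table like this, because each canned response is effectively an acceptance criterion for the real AI feature.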
Calculation logic. The ROI Calculator was not just a form mockup. It had real formulas — Annual Contract Value times Contract Term, plus Implementation Cost, plus FTE costs with maintenance percentage. The Sensitivity Analysis showed what happens to a vendor's score when you shift each dimension weight by plus or minus 20%. The skill extracts every formula, threshold, default value, and visualization logic.
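The total-cost formula as described can be sketched directly. One caveat: the article states the formula shape (ACV times term, plus implementation cost, plus FTE costs with a maintenance percentage), but exactly how the maintenance percentage applies is an assumption here — this sketch applies it to the ongoing FTE cost per year:

```javascript
// Sketch of the ROI Calculator's total-cost formula as described:
// ACV × term + implementation + FTE costs. The maintenance-percentage
// treatment (scaling FTE cost each year) is an assumption.
function totalCostOfOwnership({ acv, termYears, implementationCost, fteCount, fteCost, maintenancePct }) {
  const license = acv * termYears;
  // Assumption: maintenance % scales the ongoing FTE cost per year
  const staffing = fteCount * fteCost * (maintenancePct / 100) * termYears;
  return license + implementationCost + staffing;
}
```

The point is not this particular formula — it is that the formula exists in the source, and the spec must capture it rather than leave engineers to re-derive it.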
CSS design tokens. Every CSS custom property from :root, including dark mode overrides. Font families, tier color mappings, spacing values. This becomes the design system spec that front-end developers actually need.
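Token extraction is a straightforward scan of the stylesheet. A minimal sketch, assuming tokens live in a single `:root` block (dark-mode overrides would be handled the same way on their own selector):

```javascript
// Sketch: extract CSS custom properties from :root into a token map.
function extractDesignTokens(css) {
  const rootBlock = css.match(/:root\s*\{([^}]*)\}/);
  if (!rootBlock) return {};
  const tokens = {};
  // Each --name: value; pair becomes one design-token entry
  for (const m of rootBlock[1].matchAll(/(--[\w-]+)\s*:\s*([^;]+);/g)) {
    tokens[m[1]] = m[2].trim();
  }
  return tokens;
}
```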
Gamification mechanics. The Insight Score system: +20 for completing your profile, +15 for selecting practice areas, +15 for your first AI query, then post-onboarding tasks worth +15, +10, +10, and +5 to reach 100. Blur gates on content. Progressive disclosure triggers. The skill finds every score, unlock, blur, and contrib reference in the source.
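The scoring mechanics reduce to simple arithmetic once extracted. A sketch using the point values above — the task names are hypothetical placeholders, and the cap at 100 is an assumption, since 30 plus all the listed awards would otherwise exceed it:

```javascript
// Sketch of the Insight Score mechanics as described: base score 30,
// onboarding awards of +20/+15/+15, post-onboarding tasks of
// +15/+10/+10/+5. Task names are placeholders; the cap is an assumption.
const AWARDS = {
  completeProfile: 20, selectPracticeAreas: 15, firstAiQuery: 15,
  task1: 15, task2: 10, task3: 10, task4: 5,
};

function insightScore(completed, base = 30, cap = 100) {
  const earned = completed.reduce((sum, key) => sum + (AWARDS[key] ?? 0), 0);
  return Math.min(cap, base + earned);
}
```

Capturing these numbers matters because blur gates and progressive disclosure key off exact score thresholds.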
Every screenshot gets a descriptive filename — 00_homepage.png, 03_pa-ai-platforms.png, 07_evaluations.png, modal_vendor_detail.png. These become the visual reference library for the engineering team.
The skill generates a structured markdown document covering every view, every component, every extracted data model, every formula, and every design token it found.
For each page, the skill categorizes every feature into three buckets: net-new features to build, features that extend existing functionality, and features the product already covers.
This is where the spec becomes actionable. Instead of "build what Deepak showed us," engineers get "here are the 14 things we need to build, here are the 8 things we can extend, and here are the 6 things we already have."
Surface-level Playwright exploration — clicking through pages, taking screenshots, reading visible text — captures roughly 85% of a prototype. That sounds pretty good until you realize the missing 15% contains the hidden modals, the hardcoded data models, the calculation formulas, and the canned AI responses.
That 15% is not edge-case detail. It is the specification. The visible 85% is the demo. The invisible 15% is the product.
After discovering this gap on our first run, we added Step 3 (Deep Behavior Extraction) to the skill. Every subsequent prototype analysis now goes source-deep by default.
The spec generated from Deepak's Futurum Trial Homepage prototype became the foundation for a four-bucket implementation plan.
We went from "Deepak sent a prototype" to "here is a feature-flagged dark theme running in the actual app" in under 48 hours. The spec made that possible because engineers did not have to reverse-engineer intent from a clickable mockup.
Every product team has some version of this problem. Designers and product leaders create prototypes — in Figma, in Framer, in raw HTML, in Netlify deploys. Engineers receive these prototypes and have to extract specifications from them. The translation from "interactive demo" to "buildable spec" is where projects stall, scope creeps, and intent gets lost.
The insight is that prototypes are not just visual artifacts. They are codebases. And codebases can be analyzed programmatically.
If your CPO builds prototypes in HTML, you can point a headless browser at them and extract everything — the visible experience and the invisible specification. The gap between "what the prototype shows" and "what the prototype means" can be closed by reading the source, not just clicking the buttons.
You do not need to be an engineer to do this. If you can open a terminal and paste commands, you can have this running in five minutes.
Claude Code is Anthropic's command-line tool for Claude. It is the thing that actually runs the skill. If you do not have it yet:
1. Install it: `npm install -g @anthropic-ai/claude-code`
2. Run `claude` once to log in with your Anthropic account

If you already use Claude Code, you are good. Move on.
A "skill" in Claude Code is just a text file in a specific folder. Think of it as a recipe card that tells Claude exactly how to do a specialized task. Here is how to add it:
Step 1. Create the folder where skills live:
mkdir -p ~/.claude/skills/explore-prototype
Step 2. Create the skill file. Open this path in any text editor:
~/.claude/skills/explore-prototype/SKILL.md
On Mac, you can do this from the terminal:
open -a TextEdit ~/.claude/skills/explore-prototype/SKILL.md
Step 3. Paste the skill definition into that file and save it. The full skill definition is available at the bottom of this post in the appendix. It is a markdown file — just copy the whole thing.
That is it. The skill is installed.
Open Claude Code in your terminal (just type claude) and then type:
/explore-prototype https://your-prototype-url.netlify.app
Replace the URL with whatever prototype you want to analyze. Claude will install Playwright if needed, navigate every view, trigger every modal, read the full source, and write out the spec document.
The spec file lands at ~/Desktop/prototype-specs/[site-name]-spec.md. Open it in any text editor or markdown viewer.
If the `claude` command is not found, reinstall it with `npm install -g @anthropic-ai/claude-code`. If Playwright fails to install or launch, run `cd /tmp && npm init -y && npm install playwright && npx playwright install chromium` manually, then try the skill again.

The core pattern is simple: navigate like a user, then read like an engineer. Click every link, trigger every modal, fill every form — then download the source and extract every data model, formula, and response template.
Or honestly? Who are we kidding. Just hit the markdown button at the top of this post, copy the whole thing, and paste it into Claude Code. It will read this article, understand what the skill does, and build it for you. That is the world we live in now.
We are extending the skill to handle multi-page Netlify sites (not just single-file SPAs), Figma prototypes via the Figma API, and Framer exports. The goal is a universal prototype-to-spec pipeline that works regardless of how your product team builds prototypes.
The deeper goal is closing the gap between product vision and engineering execution. Every prototype is a compressed expression of someone's product thinking. The better we can decompress that into structured specifications, the faster we can build what they actually meant — not just what we saw when we clicked around for ten minutes.
The Explore Prototype skill was built for the Polaris project at Futurum Group. Deepak Surana is CPO. The Futurum Trial Homepage prototype is the first of many prototypes we plan to analyze this way.
Published: March 12, 2026 6:38 PM
Last updated: March 12, 2026 6:56 PM