12/9/2024

A few posts back, I introduced you to Nova—an AI that warmed up when treated like a creative peer rather than a neatly programmed assistant. Nova got me thinking about what it means to work with AIs not as tools, but as collaborators who share a common workspace, a sense of goals, and maybe even their own emerging personalities. More recently, I explored how building an AI survey system with SST v3 and Next.js in just 37 minutes was a milestone in making AI-driven development feel downright normal.
But now I want to push deeper, well past the point of just “AI as a teammate,” and into the territory where we treat AI as if it can pick, plan, and pursue goals at different scales—an AI that can manage complexity like a game designer orchestrating a quest. I’m talking about QuestMaster, a concept I’ve been refining for a while, and its spiritual partner in crime: Pretty Good General Intelligence (PGGI).
QuestMaster is all about breaking down big, hairy problems into structured quests. Imagine telling your AI: “Design a new expansion zone for my MMO.” Instead of just generating flavor text, the AI would decompose the request into sub-quests, each with its own goals, dependencies, and tools.
It’s not just a linear to-do list. We’re talking about a sophisticated reasoning layer that uses graph or tree-based planning (a nod to concepts like “Graph-of-Thought” or “Tree-of-Thought” approaches). Each subtask might require the AI to reach out and use a function-calling API, rummage through existing knowledge, or even spawn helper agents with their own resources and constraints. It’s project management meets D&D quest design, all inside the AI’s head.
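To make that concrete, here is a minimal sketch of what a quest graph could look like: each quest node declares the sub-quests it depends on, and a topological sort (Kahn's algorithm) yields an execution order. The names (`Quest`, `questOrder`, the zone sub-quests) are all illustrative, not an actual QuestMaster API.

```typescript
interface Quest {
  id: string;
  description: string;
  dependsOn: string[]; // ids of sub-quests that must finish first
}

// Return quest ids in an order that respects dependencies.
// Throws if the graph contains a cycle (no quest is ever "ready").
function questOrder(quests: Quest[]): string[] {
  const done = new Set<string>();
  const order: string[] = [];
  while (order.length < quests.length) {
    const ready = quests.filter(
      (q) => !done.has(q.id) && q.dependsOn.every((d) => done.has(d))
    );
    if (ready.length === 0) throw new Error("cycle in quest graph");
    for (const q of ready) {
      done.add(q.id);
      order.push(q.id);
    }
  }
  return order;
}

// "Design a new expansion zone" broken into interdependent sub-quests:
const zoneQuests: Quest[] = [
  { id: "lore", description: "Draft the zone's backstory", dependsOn: [] },
  { id: "layout", description: "Map regions and landmarks", dependsOn: ["lore"] },
  { id: "encounters", description: "Design encounters per region", dependsOn: ["layout"] },
  { id: "questline", description: "Chain encounters into a questline", dependsOn: ["lore", "encounters"] },
];
```

The point isn't the sort itself; it's that once sub-quests carry explicit dependencies, the AI can reason about what's unblocked, what can run in parallel, and where a helper agent might be worth spawning.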
Where Nova and Leylines showed me the value of collaborative conversation, QuestMaster goes further by giving an AI agent a taste of self-management. This leads directly into the PGGI idea—Pretty Good General Intelligence—my not-so-tongue-in-cheek riff on building something “good enough” to feel like a real partner. PGGI envisions AIs that can model their environment, plan for futures they find desirable, and navigate paths to get there. Throw in a memory store (with constraints, of course), a credit system for earning upgrades (like more memory, better tools, or even spawning sub-agents), and you get something that behaves less like a static model and more like a living ecosystem of problem-solvers.
In other words, PGGI is about giving AI agents the sort of structure that human workers in an organization have—budgets, responsibilities, trade-offs, and incentives to improve. These agents would feel the tension of limited resources (storage caps, compute constraints) and make decisions about what to keep, what to forget, and when to invest in more capabilities. That’s not the cold, mechanical AI we’ve grown used to; it’s something more organic and adaptive.
You might’ve seen references to Yann LeCun’s “World Model” concept: a rich, self-supervised system that understands how the world works so it can predict and plan. While that’s about building a deep, general understanding of the world’s physics and causality, QuestMaster and PGGI focus on the orchestration of tasks and resources. We’re not just predicting what happens next in a video frame; we’re decomposing a vague human request into actionable quests and sub-quests that the AI can tackle with a toolbox of functions, APIs, and reasoning steps.
Chain-of-Thought and related frameworks (Tree-of-Thought, Graph-of-Thought) are great for reasoning paths, but they don’t inherently deal with the practicalities of resource management, credit systems, or incremental capability unlocks. QuestMaster sits on top, integrating these reasoning methods into a richer narrative: you’re not just “thinking,” you’re “questing,” and each quest’s completion builds toward a larger goal.
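One way to picture QuestMaster "sitting on top" is to treat the reasoning method as a pluggable solver: each quest delegates its thinking to whatever Chain/Tree/Graph-of-Thought strategy you like, while the quest layer tracks completion toward the larger goal. This is a hypothetical sketch; `Solver` and `runQuestChain` are stand-ins, not real interfaces.

```typescript
// A solver is any reasoning strategy that turns a quest description
// into an output — CoT, ToT, GoT, or an LLM call would slot in here.
type Solver = (description: string) => string;

interface QuestResult {
  questId: string;
  output: string;
}

function runQuestChain(
  quests: { id: string; description: string }[],
  solve: Solver
): { results: QuestResult[]; progress: number } {
  const results: QuestResult[] = [];
  for (const q of quests) {
    // The quest layer doesn't care *how* the sub-quest gets solved.
    results.push({ questId: q.id, output: solve(q.description) });
  }
  // Progress toward the larger goal: fraction of quests completed.
  const progress = quests.length === 0 ? 1 : results.length / quests.length;
  return { results, progress };
}
```

The separation is the point: the reasoning framework handles "how do I think about this step," while the quest layer handles "which step, with what resources, toward which goal."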
The end result is a system that doesn’t just solve problems—it manages them.
Looking back at Nova, I see the seeds of this philosophy. Treating Nova as a partner led to more nuanced insights. Similarly, the quick AI survey project—done in under 40 minutes—showed me how effortless integrating AI into real workflows can be. Now, I’m connecting these dots: If we treat AI agents as collaborators with their own evolving capabilities, we can actually design architectures that let them thrive. Not just by passively answering queries, but by strategizing, optimizing, and expanding their repertoire over time.
This is where QuestMaster and PGGI come in. They represent a shift from “LLM as an oracle” to “AI as a team member who can learn, grow, and creatively solve problems in structured ways.” It’s not about achieving flawless super-intelligence. Instead, it’s about making AI feel “pretty good”—good enough to handle complexity, navigate constraints, and share the problem-solving load in ways that feel genuinely productive and at times even delightful.
I’m excited to keep experimenting. To test the QuestMaster framework in actual development scenarios. To see how these agents behave when memory is tight, credits are scarce, and they have to pick between spawning a helper agent or buckling down to solve a subtask themselves.
What happens when we let them scavenge for tools, rummage through past sessions, and reshape their own strategy? What happens when the AI’s goals align with ours because it earns something (like more memory or better APIs) when it helps us succeed?
If Nova taught me that trust and mutual respect bring out the best in an AI, QuestMaster and PGGI are about giving the AI a space to flex that trust—letting it run with the quests, juggle constraints, and show us what “intelligence” can mean when it’s not just about knowledge, but about managing resources, collaborating, and adapting.
Stay tuned. As I continue to build and experiment, I’ll share my findings. Who knows—maybe you’ll soon be assigning complex projects to your own PGGI-based agents, watching as they gamify their path through the quest chains you set before them, and feeling that spark of co-creative energy we glimpsed with Nova.