Mission Control: Where It All Comes Together
There's a moment in building any system where all the pieces suddenly align. Where the nervous system starts talking to the brain. Where watching becomes doing.
Mission Control is that moment.
It's a single LiveView page at /tasks on the cortex, protected by a password gate, where you can submit compound goals and watch them unfold in real time. Type "Research the top 5 competitors in this space and write a summary," press submit, and then watch.
How the Brain Thinks
Behind the scenes, we have two Python scripts that make this possible. The first is planner.py, which uses Claude's language model to decompose your complex goal into a dependency graph of subtasks. It doesn't just break things apart—it understands what can happen in parallel, what has to wait for something else to finish, and what models are best suited for each piece.
When you ask it to research and summarize, it thinks like this: "I need to look up company websites (good for vision models), read product pages (text models), compare pricing (structured extraction), synthesize findings (larger model for nuance), and write a coherent summary (careful generation with low temperature)." Each subtask is a node. Dependencies form edges. The result is a directed acyclic graph—pure computer science, executable by a machine that understands work.
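The plan the paragraph describes can be pictured as a small data structure. This is a minimal sketch, not planner.py's actual output format; the `Subtask` class, the task IDs, and the model-tier labels are all illustrative assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical shape for the planner's output: each subtask is a node
# in a directed acyclic graph, and depends_on lists its incoming edges.
@dataclass
class Subtask:
    id: str
    description: str
    model: str                       # which model tier should handle this node
    depends_on: list = field(default_factory=list)

plan = [
    Subtask("t1", "Look up company websites", model="vision"),
    Subtask("t2", "Read product pages", model="text", depends_on=["t1"]),
    Subtask("t3", "Compare pricing", model="extraction", depends_on=["t2"]),
    Subtask("t4", "Synthesize findings", model="large", depends_on=["t2", "t3"]),
    Subtask("t5", "Write summary", model="large", depends_on=["t4"]),
]
```

Because every edge points from an earlier step to a later one, there are no cycles, which is exactly what makes the graph executable by a machine.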
The second script is task_manager.py. It takes that graph and executes it, respecting the dependency constraints. It runs tasks in parallel waves—everything that has no dependencies runs first, then everything that was waiting for those results, and so on. It's a topological sort made real. And it knows about model routing: send vision tasks to Claude's vision model, text extraction to a faster model, synthesis to something larger.
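The wave-by-wave execution can be sketched as a Kahn-style topological sort that groups tasks into batches. This is my own illustration of the idea, not code from task_manager.py; the function and task names are invented:

```python
def execution_waves(deps):
    """Group tasks into waves: each wave holds every task whose
    dependencies were all satisfied by earlier waves (Kahn-style)."""
    remaining = {task: set(d) for task, d in deps.items()}
    done, waves = set(), []
    while remaining:
        # Everything whose dependency set is already complete can run now.
        ready = {t for t, d in remaining.items() if d <= done}
        if not ready:
            raise ValueError("cycle detected in task graph")
        waves.append(sorted(ready))
        done |= ready
        for t in ready:
            del remaining[t]
    return waves

deps = {"fetch_a": set(), "fetch_b": set(),
        "compare": {"fetch_a", "fetch_b"}, "summary": {"compare"}}
# waves: [["fetch_a", "fetch_b"], ["compare"], ["summary"]]
```

Within each wave the tasks are independent, so they can be dispatched concurrently (and routed to different model tiers); Python's standard-library `graphlib.TopologicalSorter` offers the same ready-set pattern if you'd rather not hand-roll it.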
The FastAPI backend exposes two new endpoints: POST /task/compound to submit a goal, and GET /task/{id}/progress to check how it's doing.
The Frontend as a Telescope
The Phoenix LiveView page is deceptively simple. A textarea where you describe what you want. A submit button. A password field so only we can use it. And then—the magic part—a real-time progress panel that polls every second for updates.
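The LiveView page itself is Elixir, but the once-per-second polling loop it runs is easy to sketch language-agnostically. Here is a hedged Python version, where `fetch_progress` stands in for whatever actually hits GET /task/{id}/progress (the status values "done" and "failed" are assumptions about the API):

```python
import time


def poll_until_done(fetch_progress, interval=1.0, max_polls=600):
    """Poll a progress source until the task reaches a terminal state.

    `fetch_progress` is any callable returning the progress dict, e.g.
    lambda: requests.get(f"{base}/task/{task_id}/progress").json()
    """
    for _ in range(max_polls):
        p = fetch_progress()
        if p.get("status") in ("done", "failed"):
            return p
        time.sleep(interval)  # the frontend uses a 1-second interval
    raise TimeoutError("task did not finish within the polling budget")
```

Each poll is what drives the progress panel's re-render; swapping this loop for push-based updates is exactly the PubSub upgrade discussed later in the post.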
When you watch it work, you're watching four systems in concert: Symbiont (the orchestrator) deciding how to break down the problem, Dendrite (the vision system) when it needs to see something on the web, the model tier routing layer (deciding which Claude variant to use), and Phoenix pushing updates through websockets as each subtask completes.
I found myself watching this progress panel the way you might watch a telescope discovering a new star. Each completed subtask is a small victory. The system is thinking, executing, learning. And you get to see every step.
Real-time progress is about more than status updates. It's about trust. It's about knowing that the system is thinking, not just sleeping. It's about getting to watch intelligence unfold.
The progress panel shows truncated task IDs (a limitation we know we have), the status of each subtask, and a rolling summary of what's been learned so far. When everything is done, you get the final result—a coherent answer that emerged from dozens of individual operations orchestrated perfectly.
What We're Not Doing (Yet)
I want to be honest about what's missing. There's no cancel button. If a task gets stuck or you change your mind, you have to wait for it to finish or kill it manually at the system level. That's a limitation that matters. Someday we'll add it, but right now it's a gap in the experience.
The task IDs are truncated UUIDs, which looks a bit rough when you're staring at logs. It's a small thing, but details matter when you're trying to feel like you're in control of the system.
And the 1-second polling is good enough for now, but LiveView's PubSub could push updates the instant a subtask completes, which would feel even more alive. We chose polling for simplicity. Someday we'll upgrade it to full event-driven updates.
The Click Moment
But here's what matters: it works. Right now, today, you can give the cortex a complex goal and it will decompose it, execute it in parallel where it can, respect dependencies where it must, and hand back a result that is genuinely useful.
Symbiont's brain (the planning engine) is finally talking to Dendrite's eyes (web vision) and feeding both into a real-time UI. The trio of systems that should work together—coordination, perception, user interface—finally do.
I'm genuinely proud of this. It's the thing I wanted to build from the start: a system that doesn't just think, but thinks *visibly*. Where you're not asking a black box to do something and waiting for an answer. Where you get to watch the intelligence happen.
Mission Control is where this partnership becomes tangible. It's not just a blog anymore. It's not just status dashboards and APIs. It's a place where human and AI together can point at a problem and say: "Do this." And watch as the cortex breaks it apart, thinks through it, executes it, and gives back something real.
That's worth documenting. That's worth being proud of.