Agent Flywheel lays out a planning-first workflow built on beads, agent mail, swarms, and TUI inspection for very large coding runs. It is useful because the guide exposes coordination primitives and review loops, not just benchmark screenshots.

The new piece in Agent Flywheel's guide is not a single model or benchmark. It is a coordination stack: heavy up-front planning, converting plans into self-contained "beads," polishing those artifacts, then dispatching swarms across tools like Claude, Codex, and Gemini via Agent Mail. The linked writeup (the Flywheel guide) describes a self-reinforcing loop in which each iteration improves the planning artifacts before more code is generated.
That makes the story more operational than most "agentic coding" posts. Instead of jumping straight to codegen, the method treats planning docs, dependency graphs, and task packaging as first-class assets. The useful engineering idea is that the workflow tries to scale by improving the inputs to agents, not only the agents themselves.
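The plan-to-bead packaging described above can be sketched as a small data structure. This is a rough illustration only: the field names and the Ready check are my assumptions, not the guide's actual bead schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Bead is a hypothetical model of a self-contained task packet:
// everything a swarm worker needs without re-reading the whole plan.
// Field names are illustrative, not the guide's real schema.
type Bead struct {
	ID        string   `json:"id"`
	Goal      string   `json:"goal"`       // one-sentence outcome
	Context   []string `json:"context"`    // files the agent must read first
	DependsOn []string `json:"depends_on"` // bead IDs that must finish first
	Done      string   `json:"done"`       // verifiable acceptance check
}

// Ready reports whether every dependency of b appears in finished.
func (b Bead) Ready(finished map[string]bool) bool {
	for _, d := range b.DependsOn {
		if !finished[d] {
			return false
		}
	}
	return true
}

func main() {
	b := Bead{
		ID:        "bead-03",
		Goal:      "Add scroll indicators to the pane list",
		Context:   []string{"AGENTS.md"},
		DependsOn: []string{"bead-01"},
		Done:      "go test -short ./... passes",
	}
	out, _ := json.MarshalIndent(b, "", "  ")
	fmt.Println(string(out))
	fmt.Println("ready:", b.Ready(map[string]bool{"bead-01": true}))
}
```

The point of the shape is that each bead carries its own context list and acceptance check, which is what makes the packaging an asset agents can consume independently.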
The follow-up thread turns that abstract flywheel into a reproducible pattern. In the examples thread, doodlestein has Claude Code first read AGENTS.md and README.md, then "fully understand the code" of an existing TUI-heavy project, and finally update a reusable skill so those patterns are "fully embodied" in building-glamorous-tuis.
The key claim is that models can generalize from a "golden exemplar" if the exemplar is concrete enough. The same thread says you can search prior coding-agent sessions with a cass tool, while the skills catalog at Skills.md is presented as the place where those refined workflows get stored and reused. This is less a new SDK than a method for turning successful project-specific work into portable agent instructions.
The most detailed runtime evidence comes from the swarm plan. The screenshots lay out a 10-bead execution plan for an ntm TUI upgrade, including progress bars, a bubbles/table pane list, a Huh-based spawn wizard, scroll indicators, sparklines, spring transitions, animated gradients, six new TUI Inspector profiles, and a final regression pass with go build and go test -short.
A second screenshot (file reservations) shows how the swarm is coordinated: each agent gets exclusive-write files, shared files are tagged for merge safety, and the spawn command launches "5 CC + 5 COD" with staggered starts. The thread explicitly calls this "in-context recursive self-improvement," because the improved TUI skill is then used to upgrade ntm, and ntm itself helps manage the swarm doing the work.
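The exclusive-write and merge-safety rules can be modeled as a tiny reservation table. This is a hypothetical sketch of the coordination rule only; Reservation and CanWrite are illustrative names, not the actual swarm tooling.

```go
package main

import "fmt"

// Reservation is an illustrative model of the swarm's file claims:
// an exclusive file belongs to one agent; a shared file is merge-tagged.
type Reservation struct {
	Owner  string // agent with exclusive write access, if not shared
	Shared bool   // shared files are writable by anyone, merged later
}

// CanWrite reports whether agent may write path under the reservation table.
func CanWrite(res map[string]Reservation, agent, path string) bool {
	r, ok := res[path]
	if !ok {
		return false // unreserved files are off-limits by default
	}
	return r.Shared || r.Owner == agent
}

func main() {
	res := map[string]Reservation{
		"internal/tui/panes.go": {Owner: "cc-1"},
		"CHANGELOG.md":          {Shared: true},
	}
	fmt.Println(CanWrite(res, "cc-1", "internal/tui/panes.go"))  // true
	fmt.Println(CanWrite(res, "cod-2", "internal/tui/panes.go")) // false
	fmt.Println(CanWrite(res, "cod-2", "CHANGELOG.md"))          // true
}
```

A rule this simple is enough to prevent the common swarm failure mode of two agents rewriting the same file, which is presumably why the screenshot makes the reservations explicit.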
The demo in the ntm HUD post adds the UI layer. Pressing F12 opens an ntm HUD that aggregates Agent Mail, Beads, and tmux-derived context data, while the accompanying video (ntm HUD demo) shows a monitorable control surface rather than raw tmux panes. The author says "99%+" of current beads_viewer usage is now indirect, with agents using it on behalf of a human, which is the clearest statement of the project's scaling thesis: the dashboard is becoming agent-facing infrastructure, not just operator convenience.
Vercel Emulate added a programmatic API for creating, resetting, and closing local GitHub, Vercel, and Google emulators inside automated tests. That makes deterministic integration tests easier to wire into CI and agent loops without manual setup.
OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
The complete guide to my Agent Flywheel approach of integrated tooling, workflows, and prompts (or “How I Learned to Stop Worrying and Love Generating 1,000 High-Quality Commits a Day”): agent-flywheel.com/complete-guide
Here's where this approach really turns into the "virtuous circle" that goes beyond simply being a useful technique and morphs into something more like the "in-context recursive self-improvement" that I've been talking about recently: You take the improved skill that you Show more
Here are some more tangible, practical examples of the "virtuous circle" of using skills to improve tooling, and then tooling to improve skills. Huh? How exactly could tooling help improve skills? Well, one way is pretty obvious. For example, you could use my cass tool for
Here you can see what this is starting to look like. Instead of just having a big grid of tmux panes, ntm now lets you press F12 to bring up the ntm HUD, which lets you monitor what's happening. I continue to fold in and integrate more data sources into this ntm dashboard,