Anthropic added /btw to Claude Code, a side-channel prompt that can inspect the current session without interrupting the main task. Use it to ask clarifying questions mid-run without polluting history or triggering extra tool work.

Anthropic added a /btw command to Claude Code that opens a side-channel prompt while the main coding task keeps running, according to the launch post. The announcement frames /btw as a way to ask "a quick question about the current session" without interrupting the main task, with a follow-up adding that it is read-only and does not persist in history. Claude Code now supports /btw for "side chain conversations while Claude is working," as Anthropic described in the announcement. The attached demo ("Side question demo") shows the main pane continuing to generate code while a separate side exchange stays visible, which makes this a workflow feature for in-flight tasks rather than a new model mode.
Anthropic's interactive mode docs place the feature inside Claude Code's existing terminal workflow. The practical change is that users can inspect or clarify the current session mid-run instead of waiting for the active task to finish or breaking the thread with a new top-level prompt.
The implementation is narrow by design. In the technical thread, Anthropic says /btw "cannot do any tool calls" and returns only "a single turn of output," but it still has access to the full session context. That means it is better understood as a read-only inspection path than as a second concurrent agent.
Early users are already describing the behavior in the same terms. One follow-up post says /btw is "read-only," has "no tool access," and "doesn't get added to conversation history"; dismissing it removes it entirely. That combination matters for engineers because it reduces two common side effects of mid-task prompting: spawning extra tool work and polluting the main transcript with transient questions.
The result is small but concrete workflow polish. As the practitioner summary put it, you can ask a quick question about the current session "without interrupting the main task," which is exactly the kind of interaction detail that makes coding agents easier to keep in flow during longer runs.
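The semantics described above can be modeled as a small sketch. This is purely illustrative: the `Session` class and its method names are invented here, not Claude Code internals. The point is the contract the posts describe: a side question sees the full transcript, produces a single turn, and leaves history untouched.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Toy model of a coding-agent session transcript (hypothetical)."""
    history: list = field(default_factory=list)

    def main_turn(self, prompt: str) -> str:
        # Normal turns are appended to the shared transcript.
        reply = f"[agent reply to: {prompt}]"
        self.history.append((prompt, reply))
        return reply

    def side_question(self, question: str) -> str:
        # Side-channel contract as described in the thread:
        # full read access to context, no tool calls, a single
        # turn of output, and nothing persisted to history.
        context = list(self.history)  # read-only snapshot
        return f"[one-shot answer over {len(context)} prior turns: {question}]"

s = Session()
s.main_turn("refactor the parser")
before = len(s.history)
s.side_question("which files have you touched so far?")
assert len(s.history) == before  # transcript unchanged
```

The design consequence is the one the posts emphasize: because the side path cannot call tools or write to the transcript, it cannot derail or contaminate the in-flight task.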
Release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
Release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
Breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
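The Instant Grep item above describes candidate retrieval with n-grams and inverted indexes. A toy sketch of that idea follows; it is not Cursor's implementation (the `TrigramIndex` name is invented, and the Bloom-filter compaction the item mentions is omitted). The core trick: a file can match a regex containing a literal only if it contains every trigram of that literal, so posting-list intersection prunes the candidate set before the expensive regex runs.

```python
import re
from collections import defaultdict

def trigrams(s: str) -> set:
    """All 3-character substrings of s."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

class TrigramIndex:
    """Toy inverted index: trigram -> set of document ids containing it."""
    def __init__(self, docs):
        self.docs = docs
        self.postings = defaultdict(set)
        for doc_id, text in enumerate(docs):
            for g in trigrams(text):
                self.postings[g].add(doc_id)

    def search(self, literal: str, pattern: str):
        # Prune: only docs containing every trigram of the literal
        # can possibly match, so intersect their posting lists...
        grams = trigrams(literal)
        if grams:
            candidates = set.intersection(*(self.postings[g] for g in grams))
        else:
            candidates = set(range(len(self.docs)))  # literal too short to prune
        # ...then run the full regex only on the surviving candidates.
        rx = re.compile(pattern)
        return sorted(d for d in candidates if rx.search(self.docs[d]))

idx = TrigramIndex(["fn parse_args()", "fn main()", "let args = env::args()"])
assert idx.search("args", r"parse_args\(") == [0]
```

In a real tool the regex engine would extract the literal automatically from the pattern, and the posting lists would be compressed; the speedup comes from touching only the pruned candidate set instead of scanning every file, which is where ripgrep-style full scans become the bottleneck on large repos.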
We just added /btw to Claude Code! Use it to have side chain conversations while Claude is working.
You can ask a quick question about the current session without interrupting the main task. A small feature, but exactly the kind of workflow polish that makes agentic tools feel better to use.
A bit more on the technical details: this cannot do any tool calls and is a single turn of output, but has the entire context of your conversation. S/o to @ErikSchluntz for building this as a side project. Read more in our docs here: code.claude.com/docs/en/intera…