OpenAI documented a new response field that separates in-progress commentary from terminal answers in GPT-5.4 turns, with guidance for replaying those messages in follow-up calls. Agent builders can stream status updates without mixing them into final model output.

OpenAI’s new GPT-5.4 integration note shows assistant outputs can now carry a phase value that distinguishes “preamble / still working” commentary from the terminal answer. The example in the developer thread returns one assistant message with phase: "commentary" and a later one with phase: "final_answer", giving agent builders a clean way to stream progress text such as “Checking the news now...” before the actual result lands.
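In an agent loop, that two-phase shape lets you route commentary to a status stream and treat only the final_answer message as the turn’s result. A minimal sketch, assuming response messages shaped like the thread’s example (the dict layout here is an assumption, not a documented SDK schema):

```python
# Hypothetical response payload mirroring the developer-thread example:
# assistant messages carry a "phase" of "commentary" or "final_answer".
response_messages = [
    {"role": "assistant", "phase": "commentary", "content": "Checking the news now..."},
    {"role": "assistant", "phase": "final_answer", "content": "Here is the summary: ..."},
]

def split_by_phase(messages):
    """Collect commentary for the status UI; return only the terminal answer."""
    status_updates, final = [], None
    for msg in messages:
        if msg.get("phase") == "commentary":
            status_updates.append(msg["content"])
        elif msg.get("phase") == "final_answer":
            final = msg["content"]
    return status_updates, final

updates, answer = split_by_phase(response_messages)
```

The point of the split is that commentary never leaks into whatever the agent reports as its answer, even though both arrive as assistant messages.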
The important implementation detail is state replay. In the same thread, OpenAI says that if you are building your own agent, “it’s important” to pass that parameter back on subsequent turns, and the linked integration notes show a follow-up request that reuses the prior assistant output before sending a new user instruction. That implies phase is part of the model-visible conversation record, not just UI metadata.
The new phase field fits a broader GPT-5.4 pattern: the model can expose more of its in-flight work without collapsing everything into a single final blob. In Daniel Mac’s demo, GPT-5.4 accepts a new user message that changes its reasoning path mid-task, which he describes as “a brand new AI model capability.” His theory of a “stateful orchestration loop” is still interpretation, but the observed behavior matches OpenAI’s decision to formalize non-final assistant messages.
That matters for agent UX and control surfaces. A system can now show commentary updates while tools are running, keep those messages structurally distinct from the answer, and continue the thread with the same phased messages preserved. OpenAI’s ChatGPT model doc also separates GPT-5.4 Thinking from faster modes and notes model-specific limits, reinforcing that GPT-5.4 is being exposed as a longer-horizon reasoning path rather than just another stateless chat variant.
Claude can now drive macOS apps, browser tabs, the keyboard, and the mouse from Claude Cowork and Claude Code, with permission prompts when it needs direct screen access. That makes legacy desktop workflows automatable, and Anthropic is pairing the push with more background-task support for longer agent loops.
release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
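Cursor has not published Instant Grep’s internals, but the classic version of this technique is trigram pre-filtering: only files whose n-gram index covers every trigram of the query literal are handed to the regex engine. A minimal sketch using plain sets where a production system would use inverted indexes and Bloom filters:

```python
import re

def trigrams(text):
    """All 3-character substrings of the text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

def build_index(files):
    """Map each file path to its trigram set (stand-in for a real index)."""
    return {path: trigrams(body) for path, body in files.items()}

def candidates(index, literal):
    """Files that could contain `literal`: their trigrams must cover the
    query's trigrams. Everything else is skipped without being scanned."""
    needed = trigrams(literal)
    return [path for path, grams in index.items() if needed <= grams]

files = {
    "a.py": "def instant_grep(): pass",
    "b.py": "print('hello world')",
}
index = build_index(files)

# Only candidate files are scanned with the actual regex.
hits = [p for p in candidates(index, "instant") if re.search(r"instant", files[p])]
```

The speedup comes from the candidate step: set membership (or a Bloom-filter probe) is far cheaper than running a regex over every file, so the expensive scan touches only a handful of plausible matches.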
breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
GPT-5.4 can communicate back to the user while it's working on longer tasks! We introduced a new "phase" parameter for this to help you identify whether this message is a final response to the user or a "commentary". People have enjoyed these updates in Codex.
GPT-5.4 allows you to send a message to change the reasoning trajectory mid-reasoning. Afaik, this is a brand new AI model capability that no previous model provides. Theory for how it works: stateful orchestration loop, where the model does work in chunks and holds state.