OpenAI published runtime details for the Responses API computer environment, including shell loops, capped output, automatic compaction, proxied outbound traffic, and reusable skills folders. Use it as a reference architecture for hosted agents that need state, safety controls, and tool execution patterns.

This is less a new model feature than a reference architecture for hosted agents. OpenAI's writeup describes the Responses API runtime as a managed computer environment where the model operates in a loop: propose a command, run it, inspect the result, and decide the next action. Rohan Paul's summary thread says the interface is built around a shell tool, which gives the model access to standard command-line utilities inside the hosted workspace.
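The loop the writeup describes can be sketched in a few lines. Everything here is illustrative: `run_shell` and the `model.next_action` interface are assumptions standing in for the hosted shell tool and the model's decision step, not OpenAI's actual API.

```python
import subprocess

def run_shell(command: str, timeout: int = 30) -> str:
    """Run one shell command and return its combined output.
    (Hypothetical helper standing in for the hosted shell tool.)"""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def agent_loop(model, task: str, max_steps: int = 10) -> str:
    """Propose a command, run it, inspect the result, decide the next action."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # `model.next_action` is an assumed interface: it returns either
        # a shell command to run next or a final answer.
        action = model.next_action(history)
        if action.kind == "final":
            return action.text
        output = run_shell(action.command)
        # Feed the observed output back so the model can decide what to do next.
        history.append(f"$ {action.command}\n{output}")
    return "step budget exhausted"
```

The essential property is that each iteration appends the command and its observed output to the history, so the model's next decision is grounded in what actually happened rather than in what it predicted would happen.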
The practical point is state and execution. Instead of forcing everything through one prompt, the runtime container can store intermediate files and work with structured data stores such as SQLite, which the thread frames as a better fit than making the model read “massive raw spreadsheets.”
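As a rough illustration of why a structured store is a better fit than raw spreadsheets in the prompt, the sketch below loads a small made-up CSV into SQLite so a targeted query replaces reading every row into context:

```python
import csv
import io
import sqlite3

# Hypothetical CSV that would otherwise be pasted raw into the prompt.
raw_csv = "region,revenue\nEMEA,120\nAPAC,340\nEMEA,80\n"

conn = sqlite3.connect(":memory:")  # the runtime would use a file in the workspace
conn.execute("CREATE TABLE sales (region TEXT, revenue INTEGER)")
rows = list(csv.DictReader(io.StringIO(raw_csv)))
conn.executemany("INSERT INTO sales VALUES (:region, :revenue)", rows)

# The model issues a targeted query instead of reading every row.
total = conn.execute(
    "SELECT region, SUM(revenue) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(total)  # [('APAC', 340), ('EMEA', 200)]
```

Only the aggregated answer re-enters the model's context; the full dataset stays on disk in the workspace, which is the point the thread makes about state.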
OpenAI's design notes focus on two operational problems: context bloat and risky execution. According to the report summary, terminal output is capped so the system keeps only the start and end of very long logs, and older conversation history is automatically compacted into a smaller summary that preserves key details. The thread calls this "compaction" a way to keep long-running jobs from exhausting the model's memory budget.
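A minimal sketch of both behaviours, with made-up limits (the report does not publish its thresholds) and a `summarize` callback standing in for a model call:

```python
def cap_output(text: str, head: int = 2000, tail: int = 2000) -> str:
    """Keep only the start and end of very long tool output.
    Limits are illustrative, not the runtime's actual values."""
    if len(text) <= head + tail:
        return text
    omitted = len(text) - head - tail
    return f"{text[:head]}\n... [{omitted} chars omitted] ...\n{text[-tail:]}"

def compact_history(history: list[str], keep_recent: int, summarize) -> list[str]:
    """Replace older turns with a single summary entry that preserves key
    details; `summarize` stands in for the model call that writes the summary."""
    if len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [f"[compacted] {summarize(older)}"] + recent
```

Capping bounds the cost of any single noisy command, while compaction bounds the cumulative cost of a long-running job; the two together are what keep the loop's context from growing without limit.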
For safety, outbound network access is proxied rather than left open-ended. The same thread summary says the proxy masks real credentials and substitutes placeholder secrets, which matters for agents touching external services. OpenAI also describes reusable "skills" folders for repetitive workflows, so common procedures can be bundled once instead of being relearned in every run.
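One way such a proxy can work, shown with a hypothetical placeholder name and key; this is a sketch of the masking idea, not OpenAI's implementation:

```python
# Real values live only on the proxy side and are never shown to the model.
REAL_SECRETS = {"SECRET_API_KEY": "sk-real-abc123"}  # hypothetical example

def rewrite_outbound(headers: dict) -> dict:
    """Substitute placeholder tokens with real credentials just before the
    request leaves the proxy, so the model only ever sees placeholders."""
    out = {}
    for name, value in headers.items():
        for placeholder, real in REAL_SECRETS.items():
            value = value.replace(placeholder, real)
        out[name] = value
    return out

# The model composes requests using only the placeholder:
model_headers = {"Authorization": "Bearer SECRET_API_KEY"}
print(rewrite_outbound(model_headers))  # {'Authorization': 'Bearer sk-real-abc123'}
```

The design choice worth noting: even if the model leaks its entire context, the only thing leaked is the placeholder, which is useless outside the proxy.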
Claude can now drive macOS apps, browser tabs, the keyboard, and the mouse from Claude Cowork and Claude Code, with permission prompts when it needs direct screen access. That makes legacy desktop workflows automatable, and Anthropic is pairing the push with more background-task support for longer agent loops.
Release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
Release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
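The core trick behind n-gram indexing can be illustrated with a toy trigram inverted index; Cursor's version adds Bloom filters and other compression that this sketch omits:

```python
from collections import defaultdict

def trigrams(text: str) -> set[str]:
    """All overlapping 3-character substrings of the text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Map each trigram to the set of files containing it. A query's trigrams
    intersect to a small candidate set, which an exact scan then confirms."""

    def __init__(self):
        self.postings = defaultdict(set)
        self.files = {}

    def add(self, path: str, text: str):
        self.files[path] = text
        for g in trigrams(text):
            self.postings[g].add(path)

    def search(self, needle: str) -> list[str]:
        grams = trigrams(needle)
        if not grams:  # query too short to filter; fall back to scanning everything
            candidates = set(self.files)
        else:
            # Every file containing the needle must contain all of its trigrams,
            # so intersecting posting lists can only over-approximate, never miss.
            sets = [self.postings.get(g, set()) for g in grams]
            candidates = set.intersection(*sets)
        # Exact verification pass runs only over the surviving candidates.
        return sorted(p for p in candidates if needle in self.files[p])
```

The speedup comes from the verification pass touching a handful of candidate files instead of every file in the repo, which is exactly where ripgrep-style full scans lose time on large trees.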
Breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.