OpenAI acknowledged a Codex session hang that left some requests unresponsive, later said the issue had been stable for hours, and promised a rate-limit reset. Teams relying on Codex should re-check long runs and confirm quota restoration after the incident.

OpenAI’s incident post described the failure mode narrowly: Codex could hang after a request was sent, leaving the session unresponsive while the team investigated. That points to a runtime or session-state problem rather than a model deprecation, pricing change, or planned product update.
A supporting user report gives one operator-level signal about impact. The user said they had seen “one last evening,” but that Codex had “been running smoothly today,” which suggests at least some sessions recovered before the public all-clear and that the incident may have hit users unevenly across time windows.
OpenAI’s resolution update said the issue had been “fully resolved” and had remained stable for “the last couple of hours” before the announcement. That is the key operational change: the company moved from active investigation to a stability claim, with no indication in the provided evidence of a remaining degraded state.
The same resolution update also promised a rate-limit reset “in a bit.” For engineers using Codex in long interactive sessions or repeated retries, that matters as much as the fix itself, because quota consumed during hangs can block follow-on work even after service recovers. The evidence here does not specify whether resets were global, automatic, or already completed at posting time; it only confirms the reset was planned after stability had been restored.
PlayerZero launched an AI production engineer and claims its world model can simulate failures before release, trace incidents to exact PRs, and beat existing tools on real production test cases. If those claims hold up, the interesting shift is from code generation to debugging, testing, and observability after code ships.
Release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
Release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
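To make the technique concrete, here is a minimal sketch of trigram-plus-Bloom-filter candidate retrieval. This is not Cursor's implementation; the class names (`BloomFilter`, `TrigramIndex`) and parameters are illustrative assumptions. The idea is the general one: index each file's character trigrams in a compact probabilistic set, use it to skip files that cannot contain a literal query, and verify the survivors with a real scan.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter (hypothetical sizes): a bitset plus k hash probes.
    might_contain() can false-positive but never false-negatives."""
    def __init__(self, size=4096, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all((self.bits >> p) & 1 for p in self._positions(item))

def trigrams(text):
    """All length-3 character substrings of text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """One Bloom filter per file. A file is a candidate only if it
    might contain every trigram of the literal query; candidates are
    then verified with an exact substring scan."""
    def __init__(self):
        self.files = {}

    def add_file(self, path, content):
        bf = BloomFilter()
        for g in trigrams(content):
            bf.add(g)
        self.files[path] = (bf, content)

    def search(self, literal):
        grams = trigrams(literal)
        hits = []
        for path, (bf, content) in self.files.items():
            if all(bf.might_contain(g) for g in grams):
                if literal in content:  # verify: filter can false-positive
                    hits.append(path)
        return hits
```

The payoff comes from the first `all(...)` check: most files fail on some trigram and are skipped without ever being read in full, which is why this style of index turns a repo-wide scan into a handful of verifications.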
Breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
Happy Monday. There are reports of Codex hanging for some users, where it is not responsive after sending a request, and the team is investigating.
This Codex issue is now fully resolved and stable for the last couple of hours. You have come to expect it, but yes, that means we will be resetting rate limits in a bit. Enjoy.