AI Primer

Explore what's new in AI

Where people deep in AI come to stay current.

Release

Codex opens mobile preview in ChatGPT for iOS and Android remote control

OpenAI rolled out Codex in the ChatGPT mobile app, letting users start work, review outputs, approve steps, and steer remote sessions from iPhone or Android. The preview keeps execution on a laptop, Mac mini, devbox, or SSH target while syncing screenshots, diffs, and terminal state back to mobile.

Codex·14th May·7 min read
Breaking

Zyphra releases ZAYA1-8B-Diffusion-Preview on AMD with 4.6x-7.7x faster decoding

Zyphra released ZAYA1-8B-Diffusion-Preview, its first diffusion language model trained on AMD hardware, and said 16-token block generation delivers 4.6x-7.7x faster decoding with limited quality loss. The design targets autoregressive KV-cache bottlenecks while keeping post-training and test-time compute viable.

Inference Optimization·14th May·3 min read
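Why block generation decodes faster: a diffusion model refines a whole block of tokens over a handful of denoising passes instead of paying one forward pass (and one KV-cache extension) per token. A toy sketch of the two loop shapes, using random stand-ins rather than Zyphra's model or API; only the structure is meant to be illustrative:

```python
import random

# Toy illustration of autoregressive vs. block diffusion decoding.
# The "models" are random stand-ins, not Zyphra's code.

VOCAB = list(range(100))

def sample_next(context):
    # Stand-in for one autoregressive forward pass
    # (cost: 1 pass per token, plus a KV cache that grows every step).
    return random.choice(VOCAB)

def denoise(context, block):
    # Stand-in for one diffusion pass that refines a whole block.
    return [random.choice(VOCAB) for _ in block]

def decode_ar(prompt, n_tokens):
    tokens = list(prompt)
    for _ in range(n_tokens):              # n_tokens passes total
        tokens.append(sample_next(tokens))
    return tokens

def decode_block(prompt, n_tokens, block_size=16, steps=4):
    tokens = list(prompt)
    for _ in range(0, n_tokens, block_size):
        block = [None] * block_size        # start each block from noise
        for _ in range(steps):             # `steps` passes per 16 tokens
            block = denoise(tokens, block)
        tokens.extend(block)               # commit the whole block at once
    return tokens

print(len(decode_ar([1, 2, 3], 32)))       # 32 forward passes
print(len(decode_block([1, 2, 3], 32)))    # 8 passes (4 per 16-token block)
```

At 4 denoising passes per 16-token block, the sketch spends 0.25 passes per token versus 1.0 autoregressively; the real trade-off between per-pass cost and output quality is where the cited 4.6x-7.7x range and "limited quality loss" come in.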
Release

Claude Code 2.1.142 adds `claude agents` flags and fixes macOS sleep reconnects

Claude Code 2.1.142 added new background-session flags covering directories, permissions, model, effort, and MCP or plugin config, and switched Grep to ripgrep by default. The release also fixes remote MCP timeouts and daemon reconnect failures after macOS sleep.

Claude Code·14th May·3 min read
See all stories →
🤖 Agentic Engineering (24)
🧩 Agent Development (5)
Inference & Infrastructure (8)
🔒 Security & Reliability (3)
💰 Cost & Operations (1)
🔬 Research & Benchmarks (3)
📊 Business & Policy (2)
📌 Other (1)

Top stories this week

Breaking

Nous Research releases TST with 2-3x pretraining speedup at matched FLOPs

Nous Research introduced Token Superposition Training (TST), which bags tokens early in pretraining before returning to standard next-token prediction. The team says TST cuts wall-clock training time by 2-3x at matched FLOPs while leaving the deployed model unchanged.

Benchmarks·13th May·4 min read
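The blurb doesn't spell out the mechanics, so here is a minimal sketch of one plausible reading, assuming "bagging" means training against an unordered bag of the next few tokens and annealing back to a single target; the schedule, loss, and every name below are assumptions, not Nous Research's published recipe:

```python
import math

# Hypothetical sketch of Token Superposition Training: early in
# pretraining the target is a uniform "bag" over the next k tokens;
# k anneals to 1, recovering ordinary next-token prediction, so the
# deployed model is unchanged. Not Nous Research's actual formulation.

def bag_size(step, total_steps, k_max=4):
    # Assumed linear schedule from k_max down to 1.
    return max(1, round(k_max * (1 - step / total_steps)))

def tst_loss(log_probs, future_tokens, k):
    # Cross-entropy against a uniform target over the next k tokens,
    # i.e. an unordered bag instead of a single next token.
    bag = future_tokens[:k]
    return -sum(log_probs[t] for t in bag) / len(bag)

# Toy usage: one position, model log-probs over a 4-token vocabulary.
log_probs = {0: math.log(0.4), 1: math.log(0.3),
             2: math.log(0.2), 3: math.log(0.1)}
future = [0, 2, 1, 3]
print(tst_loss(log_probs, future, k=bag_size(step=0, total_steps=100)))    # bagged
print(tst_loss(log_probs, future, k=bag_size(step=100, total_steps=100)))  # standard CE
```

Under this reading, the claimed 2-3x wall-clock saving would come from the easier bagged objective early on, with the anneal ensuring the final checkpoint is trained on the standard objective.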
See all stories →
AI Primer

Your daily guide to AI tools, workflows, and creative inspiration.

© 2026 AI Primer. All rights reserved.