Fresh stories
OpenClaw 2026.4.26 adds Google Live Talk, openclaw migrate, and Matrix E2EE
OpenClaw 2026.4.26 shipped Google Live Talk, local-model fixes, openclaw migrate imports for Claude and Hermes, and one-command Matrix E2EE. It also hardens plugins, Docker, and transcript compaction for self-hosted agent runs.

Symphony launches Codex orchestration for Linear and GitHub issue queues
OpenAI released Symphony, an orchestration layer that turns issue trackers into Codex agent queues for PR generation and review. Early users say it can move many tickets in parallel, but token burn rises quickly when agents fan out.
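The fan-out cost claim is easy to sanity-check with a back-of-envelope model: total token burn grows roughly linearly with the number of parallel agents, even when wall-clock time stays flat. The numbers below are illustrative placeholders, not Symphony's actual consumption or pricing.

```python
# Illustrative fan-out cost model: one wave of parallel agents,
# each consuming a similar token budget per ticket.

def fanout_tokens(agents: int, tokens_per_ticket: int, retries: float = 1.0) -> int:
    """Estimate total tokens for one wave of parallel agents.

    `retries` is a hypothetical multiplier for re-runs and review passes.
    """
    return int(agents * tokens_per_ticket * retries)

# One agent vs. twenty agents at a (made-up) 200K tokens per ticket:
single = fanout_tokens(1, 200_000)              # 200_000 tokens
wave = fanout_tokens(20, 200_000, retries=1.5)  # 6_000_000 tokens
print(single, wave)
```

The takeaway matches the early-user reports: parallelism buys throughput, not efficiency, so per-ticket budgets and retry multipliers are the levers to watch.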

mattpocock/skills ranks #1 on GitHub at 28K stars with `/grill-me` and `/tdd` packs
mattpocock/skills hit the top of GitHub Trending with reusable `SKILL.md` packs for grilling specs, writing PRDs, and enforcing TDD, now spreading across coding-agent workflows. The format is starting to look like a distribution layer for agent behavior, with faster install tooling and third-party skills shipping around the same pattern.


Bedrock adds OpenAI models and a stateful runtime in the coming weeks
AWS says OpenAI models will land on Bedrock in the coming weeks alongside a new stateful runtime. OpenAI also said its Microsoft partnership is now non-exclusive, which opens a multi-cloud path for deployment and procurement.

Claude Code 2.1.121 adds MCP alwaysLoad, plugin prune, and fixes multi-GB leaks
Claude Code 2.1.121 shipped MCP alwaysLoad, plugin prune, skills search, and multiple multi-GB memory leak fixes. It also changes Bash and system-prompt behavior, which can alter existing harness and tool assumptions.

MiMo-V2.5 opens under MIT with 1M context and SGLang vLLM support
vLLM 0.20.0 releases TurboQuant 2-bit KV cache, CUDA 13 baseline, and DeepSeek V4 upgrades
Codex raises paid-plan limits after GPT-5.5 shipping week

Portless v0.11 adds myapp.localhost and parallel worktrees

Devin launches Devin for Terminal with `/handoff` cloud sessions and frontier-model switching

Droids launches Automated QA with `/install-qa`, browser flows, and PR reports

Browser Use launches Browser Use Box with persistent logins and Telegram control

GitHub Copilot introduces usage-based billing on June 1, 2026
Top stories this week
DeepSeek cuts input cache-hit price 90% to $0.003625 per 1M tokens
DeepSeek said cache-hit pricing across its API series is now one-tenth of launch levels, on top of the temporary V4-Pro discount through May 5. The cut lowers costs for cache-heavy long-context and agent workloads, so teams should recheck spend assumptions.
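The arithmetic behind the headline is simple to verify: if $0.003625 per 1M cache-hit tokens is one-tenth of launch levels, the launch rate was $0.03625 per 1M. A minimal sketch for rechecking spend on a cache-heavy workload; the cache-hit rate and cache-miss price below are hypothetical placeholders, so substitute your own provider numbers.

```python
# Recompute monthly input spend under the new cache-hit price.
# CACHE_MISS_PER_M is a placeholder set to the implied old cache-hit level.

CACHE_HIT_PER_M = 0.003625   # new cache-hit price, USD per 1M input tokens
CACHE_MISS_PER_M = 0.03625   # hypothetical miss price (10x the hit price)

def monthly_input_cost(tokens_m: float, hit_rate: float) -> float:
    """Cost in USD for `tokens_m` million input tokens at a given cache-hit rate."""
    hits = tokens_m * hit_rate
    misses = tokens_m * (1 - hit_rate)
    return hits * CACHE_HIT_PER_M + misses * CACHE_MISS_PER_M

# Example: 10B input tokens/month at an 80% cache-hit rate:
print(round(monthly_input_cost(10_000, 0.8), 2))  # → 101.5 USD
```

Note how the hit rate dominates: at these rates the 20% of tokens that miss the cache account for over two-thirds of the bill, which is why cache-heavy agent loops benefit most from the cut.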


Anthropic fixes Claude Code harness bug tied to `HERMES.md` and `git status`
Anthropic said a third-party harness detection bug pulled `git status` into Claude Code prompts, and it is refunding affected users with extra credits. Watch for hidden client logic that can change spend and behavior in real agent workflows.

Users report GPT-5.5 speeds up coding and cuts over-editing in low-reasoning runs
New evals and day-three user tests show GPT-5.5 performing well at low or medium reasoning, with benchmark gains over GPT-5.4 in coding-heavy use. That matters because stronger results no longer require xhigh runs, though some users still flag sycophancy.

DeepSeek V4 supports Anthropic-compatible routing into Claude Code and Cowork for ~90% lower cost
Independent guides showed DeepSeek V4 running inside Claude Cowork and Claude Code via Anthropic-compatible endpoints, and Ollama added launch commands for Claude-style wrappers. The workflow matters because teams can keep Claude-centered agent UX while sharply lowering model spend, with provider compatibility and setup still the main caveats.
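The guides in question generally work by pointing Anthropic-style tooling at a compatible backend. A minimal sketch using the environment variables Claude Code reads for endpoint and auth overrides; the DeepSeek URL below is a placeholder assumption, not a verified endpoint, so check the provider's docs for the real values.

```python
# Route an Anthropic-compatible client to a substitute backend by
# overriding the base URL and auth token via environment variables.
import os

# Placeholder URL -- replace with the provider's documented Anthropic-
# compatible endpoint.
os.environ["ANTHROPIC_BASE_URL"] = "https://api.deepseek.example/anthropic"
os.environ["ANTHROPIC_AUTH_TOKEN"] = os.environ.get("DEEPSEEK_API_KEY", "sk-placeholder")

# Tools that honor these variables (Claude Code reads ANTHROPIC_BASE_URL)
# will now send requests to the substitute backend while keeping the
# Claude-centered agent UX described above.
print(os.environ["ANTHROPIC_BASE_URL"])
```

This is also where the cited caveats bite: the substitute backend must faithfully implement the Anthropic Messages API surface the harness expects, or tool calls and streaming will break in subtle ways.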

Pi ecosystem ships computer use, `/parallel-review`, and Chrome extension templates
Independent builders shipped Pi-GUI computer use, pi-subagents parallel review, and starter templates for extensions, Docker workers, and voice add-ons. The releases add reusable computer-use, subagent, and local-runtime building blocks around the base Pi harness.
