An amicus brief from more than 30 OpenAI and Google workers now backs Anthropic's challenge to the Pentagon blacklist. Track the case if you sell into government, because it could affect federal AI procurement policy beyond one vendor dispute.

Anthropic has now turned its Pentagon blacklist fight into a broader industry test. According to the BBC summary, the company filed lawsuits in two separate courts to get the “supply chain risk” label removed and to stop agencies from cutting off access to its products.
The public summaries describe Anthropic’s argument in two parts. First, the case summary says Anthropic claims the designation is a “gross overreach” because the label is normally used for hostile foreign actors, not a domestic AI vendor. Second, the filing recap says Anthropic is arguing the government used the designation to punish the company over its speech and policy positions on AI safety for weapons.
The new detail is not just that Anthropic is suing; it is that rivals’ employees are lining up behind the challenge. The amicus report says more than 30 experts from OpenAI and Google joined the brief, and names Jeff Dean among the signers. The same recap says those supporters argue the blacklisting could damage U.S. AI leadership, which turns the case from a single-vendor dispute into a precedent fight over how federal agencies can exclude model providers.
The operational consequence appears immediate. One widely shared quote from the order says the government was told to “rip out” Anthropic AI from its operations, according to the quoted order. A separate post also shows outside groups are already organizing additional amicus support around the case via a support call, suggesting this will be watched as a procurement and platform-access dispute, not just a speech fight.
LLM Debate Benchmark ran 1,162 side-swapped debates across 21 models and ranked Sonnet 4.6 first, ahead of GPT-5.4 high. It adds a stronger adversarial eval pattern for judge or debate systems, but you should still inspect content-block rates and judge selection when reading the leaderboard.
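Side-swapping is the key control in a debate eval: run each matchup twice with the speaking positions reversed, so a judge that merely favors the first (or "pro") slot contradicts itself across the pair. A minimal sketch of that pattern, with a hypothetical `judge` callable standing in for whatever judge model the benchmark actually uses:

```python
def side_swapped_result(judge, model_a, model_b, topic):
    """Run one matchup twice with sides swapped to control for position bias.

    `judge(topic, pro=..., con=...)` is assumed to return "pro" or "con".
    Returns the consistent winner, or None when the verdicts disagree
    (a sign of position bias; such rounds are typically scored as ties
    or discarded before building a leaderboard).
    """
    first = judge(topic, pro=model_a, con=model_b)
    second = judge(topic, pro=model_b, con=model_a)

    a_wins_first = first == "pro"    # model_a argued "pro" in run 1
    a_wins_second = second == "con"  # model_a argued "con" in run 2

    if a_wins_first and a_wins_second:
        return model_a
    if not a_wins_first and not a_wins_second:
        return model_b
    return None  # inconsistent across sides: judge preferred a position, not a model
```

A biased judge that always picks the "pro" side yields `None` for every pair, which is exactly why inspecting judge selection matters when reading the leaderboard.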
Release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.

Release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
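The core idea behind n-gram indexing is that a regex or literal match can only occur in files containing all of the query's n-grams, so an inverted index over trigrams shrinks the candidate set before any real scan runs. A toy sketch of that candidate-then-confirm pattern (not Cursor's actual implementation, and omitting the Bloom-filter layer real systems add per file):

```python
from collections import defaultdict

def trigrams(text):
    """All overlapping 3-character substrings of `text`."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Toy inverted index mapping trigrams to the files that contain them."""

    def __init__(self):
        self.postings = defaultdict(set)  # trigram -> {file_id}
        self.files = {}                   # file_id -> contents

    def add(self, file_id, text):
        self.files[file_id] = text
        for gram in trigrams(text):
            self.postings[gram].add(file_id)

    def search_literal(self, needle):
        # Step 1: intersect posting lists for the needle's trigrams.
        # The index can only over-approximate, never miss a true match.
        grams = trigrams(needle)
        if not grams:  # needle shorter than 3 chars: no pruning possible
            candidates = set(self.files)
        else:
            candidates = set.intersection(
                *(self.postings.get(g, set()) for g in grams)
            )
        # Step 2: confirm candidates with a real scan.
        return sorted(f for f in candidates if needle in self.files[f])
```

The same two-phase shape extends to regexes by extracting required trigrams from the pattern, which is why candidate retrieval, not the final scan, dominates large-repo search time.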
Breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
In a rare moment of unity, rivals are becoming allies. More than 30 experts from OpenAI and Google, including Google’s Jeff Dean, just filed a brief supporting Anthropic in its massive lawsuit against the U.S. government.
Anthropic is taking the U.S. government to federal court over its controversial decision to blacklist the AI giant. Anthropic argues that being labeled a "supply chain risk", a designation normally reserved for hostile foreign adversaries, is a gross overreach.
Anthropic takes the U.S. government to court. Anthropic filed lawsuits in two separate courts trying to get the blacklist label removed and stop agencies from cutting the company off. They say the supply chain risk tag is meant for foreign bad actors, not for punishing an American company.
Anthropic CEO Dario Amodei: Human soldiers follow established military norms and can refuse illegal orders. But "what if you have an army of 10 mn drones instead of 10 mn human soldiers?" The drones lack the intrinsic moral agency of human troops.
"The order formally instructs the federal government to "rip out" Anthropic's AI from its operations"
BREAKING: The Trump Administration is preparing an Executive Order to "weed out" Anthropic, per Axios. The order formally instructs the federal government to "rip out" Anthropic's AI from its operations.