Meta agreed to buy up to $27 billion of AI infrastructure from Nebius over five years: $12 billion of dedicated capacity tied to early Vera Rubin deployments, plus up to $15 billion of optional overflow compute. Plan for tighter next-generation GPU supply as hyperscalers lock in capacity years ahead of spot demand.

Meta's purchase is a capacity reservation, not a model launch. In Nebius's announcement, the companies describe a five-year agreement under which Nebius will provide $12 billion of dedicated capacity “across multiple locations,” with the infrastructure tied to early large-scale deployment of NVIDIA Vera Rubin.
A CNBC summary adds the commercial structure: Meta will spend up to $27 billion in total, split between the $12 billion committed block and as much as $15 billion of additional compute it can draw on later. That matters operationally because it gives Meta a guaranteed baseline supply plus overflow headroom, a pattern closer to utility capacity planning than to opportunistic GPU buying.
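The baseline-plus-overflow pattern can be made concrete with a toy allocation function. The dollar figures below come from the reported deal terms; the allocation logic itself is an illustrative sketch of utility-style capacity planning, not anything Meta or Nebius has described:

```python
COMMITTED = 12.0     # $B of dedicated, guaranteed capacity
OVERFLOW_CAP = 15.0  # $B of optional additional compute

def allocate(demand_billions: float) -> tuple[float, float, float]:
    """Split a demand figure ($B over the deal term) into the portion
    served by committed capacity, the portion drawn from overflow,
    and any remainder that would need to be sourced elsewhere."""
    base = min(demand_billions, COMMITTED)
    overflow = min(max(demand_billions - COMMITTED, 0.0), OVERFLOW_CAP)
    unmet = max(demand_billions - COMMITTED - OVERFLOW_CAP, 0.0)
    return base, overflow, unmet
```

Under this framing, demand below $12B never touches the optional block, and anything past $27B falls back to the open market, which is exactly the exposure the reservation is designed to cap.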
The infrastructure angle is the real story. Nebius is tying the deal to Vera Rubin, and the GTC slide shows NVIDIA positioning that platform as a next-generation system architecture rather than a routine refresh. Even without full public deployment details here, “one of the first large-scale deployments” in Nebius's wording implies hyperscalers are reserving upcoming capacity before broad market availability.
That lines up with NVIDIA's own demand framing. In remarks from GTC, Jensen Huang pointed to "$1T+" of AI infrastructure growth through 2027, with the accompanying slide calling out inference as a major driver. The immediate takeaway for engineers is that future serving and training economics will be shaped not just by chip specs, but by who locked in supply earliest. Meta's Nebius agreement is a concrete example of that shift.
Miles added ROCm support for AMD Instinct clusters and reported GRPO post-training gains on Qwen3-30B-A3B, including AIME rising from 0.665 to 0.729. It matters if you are evaluating rollout-heavy RL jobs off NVIDIA and want concrete throughput and step-time numbers before porting.
release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
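Cursor hasn't published the exact data structures beyond that description, but the general n-gram-index idea is straightforward: index every trigram in each file, then answer a query by intersecting posting lists before running the real regex on the few surviving candidates. The sketch below is a toy Python version under those assumptions (class and method names are hypothetical, and it uses exact posting sets where a production system might use Bloom filters to cut memory):

```python
import re
from collections import defaultdict

def trigrams(text: str) -> set[str]:
    """All length-3 substrings of text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Toy inverted index mapping trigram -> set of file ids."""

    def __init__(self):
        self.postings: dict[str, set[str]] = defaultdict(set)
        self.files: dict[str, str] = {}

    def add(self, file_id: str, text: str) -> None:
        self.files[file_id] = text
        for gram in trigrams(text):
            self.postings[gram].add(file_id)

    def search(self, literal: str) -> list[str]:
        """Find files containing a literal string: prune with the
        trigram index, then confirm candidates with a real scan."""
        grams = trigrams(literal)
        if not grams:  # query too short to prune; scan everything
            candidates = set(self.files)
        else:
            candidates = set.intersection(
                *(self.postings.get(g, set()) for g in grams)
            )
        rx = re.compile(re.escape(literal))
        return sorted(f for f in candidates if rx.search(self.files[f]))
```

The speedup comes from the intersection step: a query whose trigrams appear in only a handful of files never touches the rest of the repo, which is why this beats a ripgrep-style full scan as repos grow.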
breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
Nebius and Meta signed a multi-year AI infrastructure agreement. “Under the five-year agreement, Nebius will provide $12 billion of dedicated capacity across multiple locations, based on one of the first large-scale deployments of the NVIDIA Vera Rubin platform.”
“The new Vera Rubin platform”
Meta is spending up to $27B over the next 5 years to secure massive amounts of AI computing power from the Dutch cloud provider Nebius. This agreement ensures Meta has access to the specialized hardware needed to run its massive AI projects and stay ahead of competitors.
Breaking: $1 trillion revenue for NVIDIA in 2027. Jensen Huang: “One year after last GTC, right here where I stand... I see, going down so much, through 2027. At least... one trillion dollars, you know? Now, does it make any sense? I'm certain computer demand will be much...”