Thinking Machines and NVIDIA announced a multi-year plan to deploy at least 1 gigawatt of Vera Rubin systems for training and customizable AI platforms. Watch it as a marker of how frontier training capacity is concentrating into a few very large infrastructure bets.

Thinking Machines and NVIDIA announced a long-term partnership centered on deploying at least 1 gigawatt of Vera Rubin systems, with Thinking Machines saying the goal is to support “frontier model training” and platforms delivering customizable AI. The company’s announcement page adds two details missing from the short social post: the plan is multi-year, and NVIDIA has also made a “substantial investment” in the startup.
The hardware piece is only part of the deal. According to the announcement, the companies will design training and serving systems optimized for NVIDIA architectures, which makes this closer to a full-stack infrastructure partnership than a standard GPU supply agreement. That same post says the partnership is meant to support access to frontier and open models for enterprises, researchers, and the scientific community.
Thinking Machines also highlighted NVIDIA’s side of the announcement through NVIDIA’s repost, which repeated the “at least 1 gigawatt” figure and tied the deployment directly to frontier AI models. Separately, a supporting post says deployment on the Vera Rubin platform is targeted for early next year.
The headline number matters because it signals a very different planning horizon from ordinary cluster announcements. In one detailed reaction, the project is described as a “1-gigawatt AI supercomputing cluster” built around upcoming Vera Rubin chips, with the argument that this scale forces changes in data-center design, power delivery, and thermal management rather than just server procurement.
That scale also changes the story for model delivery, not just training. A technical summary argues the partnership is about “custom training + inference pipelines,” and says Thinking Machines and NVIDIA are “co-building training and serving systems tuned specifically” to NVIDIA’s stack. That post is an interpretive reading rather than a primary source, but it tracks the core claim in Thinking Machines’ own language about optimizing both training and serving.
For engineers, the near-term takeaway is not a new API or SDK but a clearer map of where future frontier capacity is being assembled. Vera Rubin deployment is slated for early next year in the rollout note, and Thinking Machines is explicitly pairing that reserved capacity with customizable AI platforms in its launch statement.
NVIDIA introduced a coalition of labs and platform vendors to co-develop open frontier models, including Mistral, LangChain, Perplexity, Cursor, Reflection, Sarvam, and Black Forest Labs. Watch it if you want open-model efforts tied to DGX Cloud, NIM, and production tooling instead of weights alone.
Release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
Release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
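The general technique behind such indexes can be sketched briefly: build an inverted index from character trigrams to files, intersect posting lists for the trigrams of a query literal to get a small candidate set, then run the full regex only on those candidates. The sketch below is a minimal illustration of that idea (as popularized by Google Code Search), not Cursor's actual implementation; class and method names are invented for the example, and Cursor's version additionally layers Bloom filters for compact membership checks, which this sketch omits.

```python
import re
from collections import defaultdict

def trigrams(text):
    """Return the set of 3-character substrings of text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Toy trigram inverted index for pre-filtering regex search candidates."""

    def __init__(self):
        self.postings = defaultdict(set)  # trigram -> set of file ids
        self.files = {}                   # file id -> contents

    def add(self, file_id, contents):
        self.files[file_id] = contents
        for g in trigrams(contents):
            self.postings[g].add(file_id)

    def candidates(self, literal):
        """Files that MAY contain `literal`.

        Any file containing the literal must contain every trigram of it,
        so intersecting posting lists yields false positives but never
        false negatives.
        """
        grams = trigrams(literal)
        if not grams:
            return set(self.files)  # query too short to prune anything
        return set.intersection(*(self.postings.get(g, set()) for g in grams))

    def search(self, pattern, literal):
        """Run the regex only over the pre-filtered candidate files."""
        rx = re.compile(pattern)
        return sorted(f for f in self.candidates(literal)
                      if rx.search(self.files[f]))

# Index two files, then search with a regex plus a required literal.
idx = TrigramIndex()
idx.add("a.py", "def handle_request(req): pass")
idx.add("b.py", "print('hello world')")
print(idx.search(r"handle_\w+", "handle_"))  # only a.py is even scanned
```

The win comes from the candidate step: the regex engine never touches files whose trigram sets rule them out, which is where the seconds-to-milliseconds gap on large repos comes from.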
Breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
We are partnering with @nvidia to power our frontier model training and platforms delivering customizable AI. thinkingmachines.ai/news/nvidia-pa…
Thinking Machines Lab and NVIDIA just announced a massive partnership to build a 1-gigawatt AI supercomputing cluster using the upcoming Vera Rubin chips. This project focuses on training the next generation of giant AI models while making them easier for regular people and…
Grateful to Jensen and @nvidia team for their support. Together, we’re working to deploy at least 1GW of Vera Rubin systems, bringing adaptable collaborative AI to everyone. thinkingmachines.ai/nvidia-partner…