Cursor and Kimi have confirmed that Composer 2 starts from Kimi K2.5, with continued pretraining and RL layered on top, after developers spotted Kimi model IDs in Cursor's API traffic. Teams should benchmark it as a productized open-base stack, not a from-scratch model.

The immediate trigger was a developer-circulated traffic capture whose request dump showed the model field accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast inside Cursor's /chat/completions call. The same capture also showed Cursor's coding-assistant system prompt and tool use, which made the claim testable rather than pure speculation.
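The capture-based attribution ultimately comes down to reading one JSON field out of an OpenAI-style request body. A minimal sketch of that check; the request body below is a hypothetical reconstruction, and only the model string itself comes from the reported capture:

```python
import json

# Hypothetical reconstruction of a captured /chat/completions request body.
# Only the "model" value is taken from the reported traffic capture.
captured_request = json.dumps({
    "model": "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast",
    "messages": [{"role": "system", "content": "You are a coding assistant."}],
    "tools": [{"type": "function", "function": {"name": "read_file"}}],
})

def extract_model_id(raw_body: str) -> str:
    """Pull the model field out of a captured chat-completions body."""
    return json.loads(raw_body).get("model", "")

model_id = extract_model_id(captured_request)
# Fireworks-style IDs are account-scoped: "accounts/<org>/models/<model>",
# so the hosting account and base-model naming are both visible.
account = model_id.split("/")[1] if model_id.startswith("accounts/") else None
print(account, "kimi" in model_id)
```

This is why the claim was checkable before any official statement: the account segment and the base-model prefix sit in plain sight in every request.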
Kimi then confirmed that Composer 2 uses Kimi K2.5 as its foundation. In Kimi's words, Cursor added “continued pretraining & high-compute RL training” and accessed the model through Fireworks “as part of an authorized commercial partnership” (Kimi statement). Cursor's own follow-up matched that account, saying it was “a miss” not to name the Kimi base model in the original blog and that the team would “fix that for the next model” (Cursor's reply).
Cursor says the stack started with base-model selection using “perplexity-based evals,” in which Kimi K2.5 “proved to be the strongest” (Cursor training details). From there, the team says it ran continued pretraining and then “high-compute RL,” described as “a 4x scale-up,” with Fireworks providing both inference and “RL samplers” (Cursor training details).
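Cursor has not published its eval harness, but the metric behind a perplexity-based base-model sweep is simple. A minimal sketch, assuming you already have per-token log-probabilities for a held-out code corpus from each candidate base model; the numbers below are illustrative, not real eval data:

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp of the negative mean token log-likelihood.
    Lower is better: the model is less 'surprised' by held-out code."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs from two candidate base models
# scored on the same held-out snippet (purely illustrative values).
model_a = [-0.9, -1.2, -0.4, -1.0]   # less confident on this corpus
model_b = [-0.3, -0.5, -0.2, -0.4]   # more confident on this corpus

ppl_a, ppl_b = perplexity(model_a), perplexity(model_b)
print(f"A: {ppl_a:.2f}  B: {ppl_b:.2f}")
```

A base-model sweep of the kind Cursor describes would score every candidate this way on the same corpus and pick the lowest-perplexity model as the starting point for continued pretraining.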
That framing is important for engineers because it narrows what Composer 2 actually represents: not a net-new frontier pretraining run, but an aggressively adapted coding model built from an existing open-weight base plus post-training and product integration. Fireworks also appears in the rollout path beyond plain hosting; a Fireworks-linked post said “it's not just inference but also RL” on the platform (Fireworks launch RT). Separately, Cursor increased capacity right after launch, with team members posting “2x more usage all weekend” and “We're giving everyone 2x usage,” which suggests heavy demand during the release window (weekend capacity boost, 2x usage post).
For teams benchmarking coding agents, the practical takeaway is attribution and comparability. If Composer 2 is Kimi K2.5 plus continued pretraining, RL, and Cursor's agent product layer, then comparisons against other coding models should separate base-model quality from post-training, serving, and tool orchestration. Cursor itself now describes the result as “the strong base, CPT and RL, and Fireworks' inference and RL samplers” rather than a from-scratch model effort (Cursor's reply).
The controversy was mostly about disclosure, not licensing. Practitioners' criticism focused on the launch blog omitting a “direct reference to Kimi K2” (transparency reaction), while broader reaction argued the issue only got addressed after community uproar (trust criticism). That distinction matters operationally: the technical story is a credible open-base-to-product pipeline, while the process story is that model provenance became visible first through traffic inspection and only then through official confirmation (API sniff, Kimi statement).
Vercel Emulate added a programmatic API for creating, resetting, and closing local GitHub, Vercel, and Google emulators inside automated tests. That makes deterministic integration tests easier to wire into CI and agent loops without manual setup.
release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
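Cursor has named the ingredients but not published the implementation; a toy sketch of the same candidate-then-verify pattern, using a trigram inverted index with plain Python sets standing in where the real index also packs postings behind Bloom filters:

```python
import re
from collections import defaultdict

def trigrams(text: str) -> set[str]:
    """All overlapping 3-character substrings of the text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Toy trigram inverted index in the spirit of Instant Grep."""
    def __init__(self):
        self.postings = defaultdict(set)   # trigram -> set of file paths
        self.files = {}                    # path -> content

    def add(self, path: str, content: str):
        self.files[path] = content
        for gram in trigrams(content):
            self.postings[gram].add(path)

    def search(self, literal: str) -> list[str]:
        grams = trigrams(literal)
        if not grams:
            # Query too short to filter; fall back to scanning everything.
            candidates = set(self.files)
        else:
            # A file can only match if it contains every trigram of the
            # literal, so intersecting posting lists prunes candidates fast.
            candidates = set.intersection(*(self.postings[g] for g in grams))
        # Verification: run the actual pattern only over the survivors.
        pat = re.compile(re.escape(literal))
        return sorted(p for p in candidates if pat.search(self.files[p]))

idx = TrigramIndex()
idx.add("a.rs", "fn parse_config() {}")
idx.add("b.rs", "fn render() {}")
print(idx.search("parse_config"))   # trigram filter leaves only a.rs
```

The speedup comes from the intersection step: most files are eliminated without ever being opened, so the expensive regex scan touches only a handful of candidates instead of the whole repo.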
breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
Composer 2 is just Kimi K2.5 with reinforcement learning. Someone sniffed the API calls. The model ID is "kimi-k2p5-rl-0317-s515-fast" hosted under Anysphere's account. Cursor isn't training their own model from scratch. They're fine-tuning Kimi K2.5 with RL and calling it…
Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support.
🔥 Cursor Composer2 launched on Fireworks 🔥 This time it's not just inference but also RL powered by @FireworksAI_HQ. So much hard work and sleepless nights to get this gift out. Congrats @cursor_ai team on launching this SOTA model beating Opus 4.6 on terminal bench! 🚀
For transparency reasons, I believe it would have been better to include a direct reference to Kimi K2 in the blog post about Composer 2. It also demonstrates how good Chinese open-source models have become.
Since people really want me to say this: "KIMI K2.5" ‼️ Yes, that is the base we started from. And we are following the license through inference partner terms (e.g. Fireworks) I'm thankful for OSS models personally, good for the ecosystem.
We've evaluated a lot of base models on perplexity-based evals and Kimi k2.5 proved to be the strongest! After that, we do continued pre-training and high-compute RL (a 4x scale-up). The combination of the strong base, CPT and RL, and Fireworks' inference and RL samplers make…