
Vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM), as part of the Cursor 2.0 platform update.
Composer is designed to quickly and accurately execute coding tasks in a production-level environment, representing a new step in AI-assisted programming. It is already being used in day-to-day development by Cursor’s own engineering staff – indicating maturity and stability.
According to Cursor, Composer completes most interactions in less than 30 seconds while maintaining a high level of reasoning ability in large, complex codebases.
The model is said to be four times faster than similarly intelligent systems and is trained for agentic workflows – in which autonomous coding agents plan, write, test, and review code collaboratively.
Previously, Cursor supported "vibe coding" – using AI to write or complete code from a user's natural-language instructions, even for someone untrained in development – on top of leading proprietary LLMs from OpenAI, Anthropic, Google, and xAI. Those options remain available to users.
Benchmark results
Composer's capabilities are benchmarked with "Cursor Bench," an internal evaluation suite derived from real developer agent requests. The benchmark measures not only correctness but also the model's adherence to existing abstractions, style conventions, and engineering practices.
On this benchmark, Composer achieves frontier-level coding intelligence while generating 250 tokens per second – nearly twice as fast as leading fast-inference models and four times faster than comparable frontier systems.
Cursor's published comparisons group models into several categories: "Best Open" (for example, Qwen Coder, GLM 4.6), "Fast Frontier" (Haiku 4.5, Gemini 2.5 Flash), "Frontier 7/2025" (the strongest model available at midyear), and "Best Frontier" (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-range frontier systems while delivering the highest recorded generation speed of any tested class.
A model built with reinforcement learning and a mixture-of-experts architecture
Cursor research scientist Sasha Rush shared details about the model's development in a post on X, describing Composer as a reinforcement-learning-trained (RL) mixture-of-experts (MoE) model:
“We used RL to train a big MoE model so that it got really good at real-world coding, and was also very fast.”
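The mixture-of-experts idea behind Composer can be illustrated with a minimal sketch. In an MoE model, a gating function scores the experts for each input and routes the token to only the top-scoring expert(s), so most of the model's parameters stay idle per token – one reason such models can be both large and fast. Everything below (the toy gate, the scaling "experts") is a hypothetical illustration, not Cursor's architecture:

```python
# Minimal top-1 mixture-of-experts routing sketch (illustrative only).
# Each "expert" stands in for a sub-network; a gate picks one expert
# per input, so the other experts do no work for that token.

def make_expert(scale):
    # A stand-in for an expert sub-network: here, simple scaling.
    return lambda x: [scale * v for v in x]

def gate(x, num_experts):
    # Toy gating rule based on the input's feature sum. Real MoE gates
    # are learned linear layers followed by top-k selection.
    return int(sum(x)) % num_experts

experts = [make_expert(s) for s in (1, 2, 3, 4)]

def moe_forward(x):
    chosen = gate(x, len(experts))   # top-1 routing: one expert runs
    return chosen, experts[chosen](x)

idx, out = moe_forward([1.0, 2.0])   # sum = 3.0 -> routes to expert 3
```

Only one of the four experts executes per call, which is the property that lets MoE models scale parameter count without a proportional increase in per-token compute.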
Rush explained that the team co-designed Composer and the Cursor environment so the model could operate efficiently at production scale:
"Unlike other ML systems, you can't do much different from the full-scale system. We designed this project together with Cursor to allow the agent to run at the required scale."
Composer was trained on real software engineering tasks rather than static datasets. During training, the model worked inside full codebases using a suite of production tools – including file editing, semantic search, and terminal commands – to solve complex engineering problems. Each training iteration involved solving a concrete challenge, such as producing a code edit, drafting a plan, or creating a targeted explanation.
The reinforcement loop optimized for both accuracy and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or hallucinated responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches on its own.
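A reward that balances correctness against efficiency, as described above, can be sketched as follows. The function names, weights, and penalty terms are hypothetical illustrations of the idea, not Cursor's actual reward:

```python
# Conceptual sketch of an RL reward balancing correctness and efficiency.
# All weights and terms here are illustrative assumptions.

def episode_reward(tests_passed, tests_total, tool_calls, tokens_used,
                   tool_cost=0.01, token_cost=0.0001):
    # Correctness term: fraction of unit tests the agent's edit passes.
    correctness = tests_passed / tests_total
    # Efficiency penalty: discourage wasted tool calls and verbose output.
    efficiency_penalty = tool_cost * tool_calls + token_cost * tokens_used
    return correctness - efficiency_penalty

# An agent that passes all tests with few tool calls scores higher than
# one that passes the same tests but wastes steps and tokens.
good = episode_reward(tests_passed=10, tests_total=10,
                      tool_calls=3, tokens_used=500)       # 0.92
wasteful = episode_reward(tests_passed=10, tests_total=10,
                          tool_calls=40, tokens_used=5000)  # 0.10
```

Under a reward of this shape, the optimizer is pushed toward exactly the behaviors the article describes: fewer, better-chosen tool calls and no unnecessary output.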
This design enables Composer to work in the same runtime context as the end-user, making it more aligned with real-world coding situations—handling version control, dependency management, and iterative testing.
From prototype to production
Composer evolved from an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency agentic coding.
"Cheetah was v0 of this model, primarily to test speed," Rush said on X. "Our metrics say it [Composer] has the same speed, but is much smarter."
Cheetah’s success in reducing latency helped Cursor identify speed as a key factor in developer trust and usability.
Composer maintains that responsiveness while making significant improvements in reasoning and task generalization.
Developers using Cheetah during early testing noted that its speed changed the way they worked. One user commented that it is “so fast that I can stay in the loop while working with it.”
Composer retains that speed but increases the capability for multi-step coding, refactoring, and testing tasks.
Integration with Cursor 2.0
Composer is fully integrated into Cursor 2.0, a major update to the company’s agentic development environment.
The platform offers a multi-agent interface, allowing up to eight agents to run in parallel, each in a separate workspace backed by git worktrees or remote machines.
Within this system, Composer can act as one or more of those agents, operating independently or collaboratively. Developers can compare the results of concurrent agent runs and select the best output.
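Git worktrees are what make this kind of parallel isolation cheap: each agent gets its own checkout and branch of the same repository. A minimal sketch of how such workspaces could be provisioned (paths and branch names are illustrative; the real orchestration is handled inside Cursor):

```python
# Sketch: provisioning isolated agent workspaces with git worktrees,
# in the spirit of Cursor 2.0's multi-agent interface. Paths and branch
# names are hypothetical examples.

def worktree_command(repo_dir, agent_id, base_branch="main"):
    # Each agent gets its own worktree and branch, so several agents can
    # edit the same repository in parallel without clobbering each other.
    branch = f"agent-{agent_id}"
    path = f"{repo_dir}-wt-{agent_id}"
    return ["git", "-C", repo_dir, "worktree", "add",
            "-b", branch, path, base_branch]

# Build the setup commands for eight parallel agents; each would be
# executed with subprocess.run(cmd, check=True).
cmds = [worktree_command("/tmp/myrepo", i) for i in range(8)]
```

`git worktree add -b <branch> <path> <base>` creates a new linked working tree on a fresh branch, which is far lighter than cloning the repository eight times.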
Cursor 2.0 also includes helpful features that increase Composer’s effectiveness:
- In-Editor Browser (GA) – enables agents to run and test their code directly inside the IDE, forwarding DOM information to the model.
- Improved code review – aggregates diffs across multiple files for fast inspection of model-generated changes.
- Sandboxed Terminal (GA) – isolates agent-run shell commands for safe local execution.
- Voice Mode – adds speech-to-text controls for starting and managing agent sessions.
While these platform updates broaden the overall Cursor experience, Composer is positioned as the technical core enabling fast, reliable agentic coding.
Infrastructure and training systems
To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs.
The team developed custom MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead.
This configuration allows Cursor to train models at low precision without post-training quantization, improving both inference speed and efficiency.
Composer's training relies on hundreds of thousands of concurrent sandboxed environments running in the cloud – each a self-contained coding workspace. The company optimized its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.
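The fan-out pattern behind such runs can be illustrated with a small scheduler sketch. Here the "sandboxes" are just local function calls rather than cloud VMs, and every name is an illustrative assumption; the point is the bounded-parallelism shape of the workload:

```python
# Illustrative scheduler fanning out many sandboxed coding episodes,
# loosely mirroring the cloud setup described above. Real sandboxes are
# virtual machines; here they are stand-in function calls.
from concurrent.futures import ThreadPoolExecutor

def run_sandbox(task_id):
    # Placeholder for: spin up an isolated workspace, run the agent on
    # one training task, and collect its result.
    return f"task-{task_id}:done"

def run_batch(num_tasks, max_parallel=32):
    # Large RL runs launch many episodes at once, bounded by capacity.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_sandbox, range(num_tasks)))

results = run_batch(100)
```

The cap on workers is the essential knob: the same structure scales from a local test to hundreds of thousands of cloud sandboxes by raising the bound and swapping the placeholder for real VM provisioning.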
Enterprise use
Composer’s performance improvements are supported by infrastructure-level changes to Cursor’s code intelligence stack.
The company has optimized its Language Server Protocol (LSP) for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or produces multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor also supports pooled model usage, SAML/OIDC authentication, and analytics to monitor agent performance across Teams and Enterprise tier organizations.
Pricing for individual users ranges from Free (hobby) to Ultra ($200/month), with extended usage limits for Pro+ and Ultra customers.
Business pricing for Teams starts at $40 per user per month, with enterprise contracts offering custom usage and compliance options.
Composer's role in the evolving AI coding landscape
Composer’s focus on speed, reinforcement learning, and integration with live coding workflows sets it apart from other AI development assistants like GitHub Copilot or Replit’s Agent.
Rather than serving as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project’s codebase.
This model-level expertise—training the AI to function in the real environment in which it will operate—represents an important step toward practical, autonomous software development. Composer is not trained on just text data or static code, but within a dynamic IDE that mirrors production conditions.
Rush described this approach as essential to achieving real-world reliability: the model learns not only to generate code, but also how to integrate, test, and improve it in context.
What this means for enterprise developers and vibe coding
With Composer, Cursor is introducing more than a faster model – it’s deploying an AI system optimized for real-world use, built to operate inside the same tools that developers already rely on.
The combination of reinforcement learning, expert mix design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.
While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the key innovation that makes those workflows viable.
It’s the first coding model built specifically for agentic, production-level coding – and an early glimpse of what everyday programming could look like when human developers and autonomous models share the same workspace.

