What is Cursor?
Cursor is an AI-native code editor built by Anysphere that forked VS Code to gain architectural control impossible through extensions. Unlike GitHub Copilot (which operates as a plugin), Cursor can rewrite the editor itself—enabling features like Shadow Workspace verification, inline diffs, and multi-file edits that extensions cannot replicate. It reached $1B ARR in 24 months at a $29B valuation, making it the fastest-growing SaaS company in history.
In November 2025, Cursor announced $1 billion in Annual Recurring Revenue. They reached this milestone in approximately 24 months—faster than any SaaS company in history.
Wiz took 18 months to hit $100M ARR. Deel took 20 months. Cursor hit $100M in about 12 months, then added another $900M in the next year.
The company behind Cursor, Anysphere, raised a $2.3 billion Series D that valued them at $29.3 billion. Investors included Accel, Coatue, Google, and Nvidia.
The question isn't whether Cursor is successful. It's how a VS Code fork beat GitHub and Microsoft at their own game.
The answer lies in a controversial decision made in 2022: instead of building another plugin, four MIT students decided to fork VS Code entirely—and in doing so, gained "root access" to the developer workflow.
The Extension Trap
To understand why Cursor exists, you need to understand the limitations of the VS Code Extension API.
GitHub Copilot is an extension. It operates within a sandbox defined by Microsoft. This sandbox provides stability—a crashing plugin won't kill the editor—but it imposes fundamental constraints:
Limited UI Control. Extensions can contribute to the sidebar or show simple widgets. They cannot redesign the editor interface or change how text is rendered. Cursor's inline diff view—green for additions, red for deletions, rendered directly in the active file—requires modifying the editor's rendering logic. Extensions can't do this.
Process Isolation. Extensions run in an "Extension Host" process, separate from the Renderer (UI) and Main (Kernel) processes. This isolation is fatal for high-bandwidth AI interaction. Every piece of data must be serialized and passed between processes.
Context Blindness. An extension cannot cheaply access the full state of the window, terminal history, or file system. It has to "ask" the editor for data through official APIs. This adds latency and limits what's possible.
Copilot lives within these constraints. It's a sophisticated autocomplete that helps developers write code faster. But it can't fundamentally change how development works.
The Fork Decision
Anysphere was founded in 2022 by four MIT friends: Michael Truell (CEO), Sualeh Asif (CPO), Aman Sanger (COO), and Arvid Lunnemark (CTO, who later departed in late 2025 to found Integrous Research).
They didn't start with code editors. Their first attempt was applying LLMs to Computer-Aided Design—AI for hardware engineers. The friction was immediate: no structured training data, complex 3D geometry, and they weren't their own users.
The pivot came when they realized two things. First, they were arguably the world's most demanding users of AI coding tools—competitive programmers who spent their lives in editors. Second, software development is the perfect AI application domain: text is the interface, and GitHub provides unlimited training data.
Their thesis: the feedback loop between developer intent and code execution was too slow. Copilot acted as autocomplete, but the developer still had to drive. They wanted AI as a pair programmer with agency—capable of navigating files, understanding project-wide context, and executing terminal commands.
To achieve this, they couldn't be constrained by the extension API. So they made the controversial decision to fork VS Code.
VS Code is open-source under the MIT license. By forking it, Anysphere gained access to the C++ and TypeScript internals of the editor. This "root access" allowed them to implement features that are physically impossible for a plugin:
The Shadow Workspace. They could spawn hidden, parallel instances of the editor engine to validate code changes in the background.
Native Diff Rendering. Instead of clunky side-by-side diffs, Cursor renders AI suggestions as inline color-coded overlays directly in the active file.
Terminal Interception. Cursor can read terminal output natively and inject commands, allowing the AI to fix compilation errors by seeing the exact error and running the fix automatically.
Tab "Teleportation." Cursor's predictive model anticipates not just the next text, but the next cursor position. If you type a line and it predicts you need to edit a corresponding line ten rows down, it animates your cursor there. An extension can't move the user's cursor like this without disorienting them.
The downside: VS Code updates monthly. Anysphere must constantly merge upstream changes to maintain compatibility with the VS Code extension ecosystem. If they drift too far, extensions break. A dedicated team exists just for "keeping the lights on" with upstream merges.
It's a tax. But it's a tax that bought them architectural freedom worth $29 billion.
The Funding Velocity
The capital trajectory mirrors the trajectory of Generative AI itself:
| Round | Date | Amount | Valuation |
|---|---|---|---|
| Seed | 2023 | $8M | — |
| Series A | Aug 2024 | $60M | $400M |
| Series B | Dec 2024 | $105M | $2.5B |
| Series C | May 2025 | $900M | $9.9B |
| Series D | Nov 2025 | $2.3B | $29.3B |
The seed round was led by the OpenAI Startup Fund, with participation from Nat Friedman (former GitHub CEO) and Arash Ferdowsi (Dropbox co-founder). This early OpenAI relationship granted access to frontier models before they were widely available.
Andreessen Horowitz and Thrive Capital led the Series A, then doubled down in Series B. The Series D brought in Accel and Coatue as co-leads, with Google and Nvidia as strategic investors.
The valuation step-ups are staggering: $400M to $2.5B in four months. $2.5B to $9.9B in five months. $9.9B to $29.3B in six months.
The Shadow Workspace
The Shadow Workspace is the architectural innovation that separates Cursor from every other AI coding tool.
The problem it solves: AI generates code that looks correct but fails to compile. This is the "hallucination loop": you accept a suggestion, try to run it, discover it references a non-existent variable, ask the AI to fix it, and repeat. The loop is slow and frustrating, a variant of the Context Starvation failure mode that plagues most AI coding tools.
The Shadow Workspace eliminates it by validating code before you ever see it.
The Mechanism
When you ask Cursor's Agent mode to "refactor this API endpoint," it doesn't simply stream text into your buffer. It initiates a background process:
1. Spawn Hidden Window. The main Electron process spawns a secondary, invisible window (show: false). This window loads the same project folder.
2. State Replication. It replicates your unsaved changes (dirty buffers) to the shadow instance so the AI works on current state, not just what's on disk.
3. AI Modification. The AI agent applies proposed edits to files inside the shadow window.
4. LSP Interrogation. Because the shadow window is a fully functioning VS Code environment, standard Language Server Protocol plugins (TypeScript Server, Rust Analyzer, Pylance) automatically run on the modified code.
5. Feedback Loop. The shadow window captures diagnostics—type errors, linter warnings. If errors are found ("Property 'email' does not exist on type 'User'"), they are fed back to the AI as a new prompt: "Your code caused the following TypeScript error: ... Fix it."
6. Self-Correction. The AI iterates within the shadow workspace, applying fixes until the linter is silent.
7. Presentation. Only after the code passes validation is it presented to you in the main window.
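The shape of this loop can be sketched as a small simulation. Everything here is illustrative, not Cursor's actual code: a stub stands in for the LSP diagnostics, and a two-pass lookup stands in for the model.

```typescript
// Simplified simulation of the shadow-workspace feedback loop.
// getDiagnostics() stands in for the LSP; proposeEdit() stands in
// for the model. Both are toy stubs, not Cursor internals.

interface Diagnostic {
  message: string;
}

// Stand-in for the language server: flags a known-bad identifier.
function getDiagnostics(code: string): Diagnostic[] {
  return code.includes("user.emial")
    ? [{ message: "Property 'emial' does not exist on type 'User'" }]
    : [];
}

// Stand-in for the model: first pass hallucinates a field name,
// the retry (given its previous attempt) corrects it.
function proposeEdit(prompt: string, previous?: string): string {
  if (previous === undefined) return "return user.emial;";
  return previous.replace("emial", "email");
}

// The loop: apply an edit in the shadow copy, lint it, feed any
// errors back as a new prompt, repeat until the linter is silent.
function shadowLoop(prompt: string, maxIters = 3): string {
  let code = proposeEdit(prompt);
  for (let i = 0; i < maxIters; i++) {
    const errors = getDiagnostics(code);
    if (errors.length === 0) return code; // validated: show the user
    const feedback = `Your code caused: ${errors[0].message}. Fix it.`;
    code = proposeEdit(feedback, code);
  }
  throw new Error("could not converge");
}

const result = shadowLoop("add an email getter");
```

The user only ever sees `result`, the post-validation code; the failed first attempt lives and dies inside the hidden instance.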
Why This Matters
The user experience is "code that just works." You describe what you want, Cursor shows you valid code. The iteration happened invisibly.
An extension cannot do this. It can't spawn hidden editor instances. It can't intercept the Language Server Protocol at the kernel level. It can't smooth-animate cursor teleportation. These require root access to the editor internals.
Technical Challenges
Running two VS Code instances is resource-intensive. Electron is notorious for RAM usage. Cursor optimizes by sharing the extension host process where possible and killing the shadow instance aggressively when idle.
Speed matters—users can't wait 30 seconds for validated code. Cursor uses a proprietary "Fast Apply" model, likely a fine-tuned 70B parameter model hosted on Fireworks AI, trained specifically to apply diffs and fix linter errors at ~1000 tokens/second using speculative decoding.
The rust-analyzer language server initially struggled with the shadow workspace's virtual file system because it relied on on-disk file watching. Anysphere engineered workarounds to force the language server to accept in-memory file events.
For the broader pattern of shadow loops in agent architectures, see Vertical Agents Are Winning.
The Semantic Index
GitHub Copilot historically relied on a "neighboring tabs" heuristic—looking at files you have open to guess context. Cursor built a dedicated Retrieval-Augmented Generation pipeline designed specifically for code.
Architecture
Cursor doesn't treat code as plain text. It understands the Abstract Syntax Tree.
Semantic Chunking. Instead of cutting text every 500 tokens, Cursor parses the code and chunks by logical boundaries: a complete function, a class definition, an interface. When a chunk is retrieved, it contains a complete thought.
Embeddings. Chunks are converted to vectors using code-optimized embedding models. Metadata (file path, start line, end line) is stored alongside the vector.
Vector Database. The index is a local vector database (likely Turbopuffer or equivalent) that stays in sync with file changes.
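A toy version of boundary-aware chunking is sketched below, using brace depth as a crude stand-in for a real parser. Cursor's actual pipeline is not public; this only illustrates the idea of cutting at declaration boundaries rather than every N tokens.

```typescript
// Toy semantic chunker: split source into chunks at top-level
// declaration boundaries by tracking brace depth. A real pipeline
// would use a proper AST parser, not character counting.

interface Chunk {
  text: string;
  startLine: number; // 1-based, stored as retrieval metadata
  endLine: number;
}

function chunkByDeclaration(source: string): Chunk[] {
  const lines = source.split("\n");
  const chunks: Chunk[] = [];
  let depth = 0;
  let start = 0;
  for (let i = 0; i < lines.length; i++) {
    for (const ch of lines[i]) {
      if (ch === "{") depth++;
      if (ch === "}") depth--;
    }
    // A chunk closes when brace depth returns to zero over a
    // non-empty region: one complete function or class.
    if (depth === 0 && lines.slice(start, i + 1).join("").trim() !== "") {
      chunks.push({
        text: lines.slice(start, i + 1).join("\n"),
        startLine: start + 1,
        endLine: i + 1,
      });
      start = i + 1;
    }
  }
  return chunks;
}

const src = [
  "function add(a: number, b: number) {",
  "  return a + b;",
  "}",
  "class User {",
  '  email = "";',
  "}",
].join("\n");

const chunks = chunkByDeclaration(src);
```

Each retrieved chunk is a complete declaration with its file-position metadata, which is what makes the retrieved context a "complete thought" rather than an arbitrary 500-token window.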
Retrieval
When you type @Codebase in the chat:
- Your question ("Where do we handle auth?") is converted to a vector
- Hybrid search: dense retrieval (vector similarity) + sparse search (BM25 keyword matching)
- Results are reranked—prioritizing files that are "central" to the repository or recently edited
- Relevant chunks are assembled into the LLM's context window
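The scoring step of that pipeline can be sketched with toy data. The cosine/keyword blend below and its 0.7/0.3 weights are illustrative assumptions, not Cursor's actual ranking function, and the keyword overlap is a crude stand-in for BM25.

```typescript
// Toy hybrid retrieval: blend a dense score (cosine similarity over
// embeddings) with a sparse score (keyword overlap, standing in for
// BM25), then sort. Weights are arbitrary for illustration.

interface Doc {
  path: string;
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function keywordScore(query: string, text: string): number {
  const terms = query.toLowerCase().split(/\s+/);
  const words = new Set(text.toLowerCase().split(/\W+/));
  const hits = terms.filter((t) => words.has(t)).length;
  return hits / terms.length;
}

function hybridRank(query: string, queryVec: number[], docs: Doc[]): Doc[] {
  const score = (d: Doc) =>
    0.7 * cosine(queryVec, d.embedding) + 0.3 * keywordScore(query, d.text);
  return [...docs].sort((x, y) => score(y) - score(x));
}

// Two fake indexed chunks with hand-written 2-d "embeddings".
const docs: Doc[] = [
  { path: "auth/session.ts", text: "// handle auth for sessions\nexport function login() {}", embedding: [0.9, 0.1] },
  { path: "ui/button.tsx", text: "export const Button = () => {}", embedding: [0.1, 0.9] },
];

const ranked = hybridRank("where do we handle auth", [0.8, 0.2], docs);
```

The auth file wins on both signals, so it surfaces first; a production system would then rerank by repository centrality and recency before assembling the context window.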
Because Cursor uses models with 200k+ token context windows (Gemini, Claude), it can include significant portions of your codebase in a single prompt.
Privacy
For enterprise security, Cursor respects .cursorignore (like .gitignore). In Privacy Mode, the index exists but code sent to the cloud for inference is ephemeral—Anysphere stores no logs. They've signed Zero Data Retention agreements with OpenAI and Anthropic.
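A minimal sketch of the exclusion check is below, assuming deliberately simplified glob semantics: bare names, directory prefixes, and single-segment `*` only, with none of real gitignore's negation or `**` handling.

```typescript
// Minimal ignore-file matcher: decide whether a path is excluded
// from indexing. Supports only bare names, directory prefixes, and
// single-segment "*" globs — a simplification of gitignore rules.

function globToRegExp(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metachars
    .replace(/\*/g, "[^/]*");             // "*" stays within one path segment
  return new RegExp(`(^|/)${escaped}($|/)`);
}

function isIgnored(path: string, patterns: string[]): boolean {
  // Trailing "/" (directory pattern) is treated like a bare name here.
  return patterns.some((p) => globToRegExp(p.replace(/\/$/, "")).test(path));
}

const patterns = ["node_modules", "*.env", "secrets/"];

const ignored = isIgnored("src/secrets/keys.ts", patterns); // excluded
```

Files matching any pattern simply never reach the chunker or the embedding model, which is the property enterprise buyers care about.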
The Claude Pivot
One of Cursor's most decisive victories was refusing to be tied to a single model provider.
The frontier of code generation models shifts rapidly. When Anthropic's Claude models demonstrated superior reasoning for complex refactoring and lower hallucination rates, Cursor could integrate them immediately. When OpenAI released new reasoning models, Cursor added support within days. When open-source alternatives like DeepSeek became viable, Cursor supported them.
Copilot is locked to OpenAI through Microsoft's investment. When competing models prove superior for specific coding tasks, Copilot users have no choice.
Cursor, controlling the router between user and model, can make any frontier model available instantly. This agility captured the "power user" segment that follows state-of-the-art closely and demands access to the best tool for each task.
Proprietary Models
Cursor isn't just an API wrapper. They use custom models for latency-sensitive tasks:
Copilot++ (Tab). A custom model trained to predict the next edit, not just the next text. It predicts deletions and cursor movements. This runs on Fireworks AI infrastructure optimized for extreme low latency.
Speculative Decoding. A small model "drafts" code while a large model "verifies" in parallel. This creates the illusion of instant generation.
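The draft/verify split can be simulated with lookup-table "models." Note one simplification: a real verifier scores all drafted positions in a single batched forward pass, while this sketch checks them token by token.

```typescript
// Toy speculative decoding: a cheap draft model proposes k tokens;
// the expensive verifier checks them and accepts the longest
// agreeing prefix (keeping its own correction at the first
// disagreement). Models here are lookup tables for illustration.

type Model = (context: string[]) => string;

function speculate(
  draft: Model,
  verify: Model,
  context: string[],
  k: number,
): { accepted: string[]; verifierChecks: number } {
  // 1. Draft k tokens autoregressively (cheap).
  const proposed: string[] = [];
  let ctx = [...context];
  for (let i = 0; i < k; i++) {
    const t = draft(ctx);
    proposed.push(t);
    ctx = [...ctx, t];
  }
  // 2. Verify the proposals; in production this is one batched pass.
  const accepted: string[] = [];
  let verifierChecks = 0;
  ctx = [...context];
  for (const t of proposed) {
    verifierChecks++;
    const truth = verify(ctx);
    if (truth !== t) {
      accepted.push(truth); // keep the verifier's token, stop here
      break;
    }
    accepted.push(t);
    ctx = [...ctx, t];
  }
  return { accepted, verifierChecks };
}

// Draft agrees with the verifier on the first two tokens only.
const draftModel: Model = (c) => ["const", "x", "==", ";"][c.length] ?? ";";
const verifyModel: Model = (c) => ["const", "x", "=", "1"][c.length] ?? ";";

const out = speculate(draftModel, verifyModel, [], 4);
```

When the draft model is right most of the time, several tokens land per expensive verification step, which is where the perceived instant generation comes from.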
The model is the commodity. The router and the custom models are the product.
Vibe Coding
The term "Vibe Coding" was coined by Andrej Karpathy in early 2025: "You fully give in to the vibes, embrace exponentials, and forget that the code even exists."
This describes a new programming paradigm where the developer focuses on high-level intent, entrusting implementation details to the AI.
The Workflow
Cmd+K (Inline Edit). Instead of deleting a function and rewriting it, highlight it and type "make this async and handle errors."
Cmd+L (Chat). Instead of Googling an error, paste the error log into chat. Cursor analyzes the stack trace against your actual code and suggests a fix. While chat works for this, the broader question is whether conversational interfaces are even optimal for development work—Beyond Chat Interfaces explores how ambient copilots and generative UI often outperform traditional chat for productivity tasks.
Tab (Prediction). Type the first character of a line, Cursor predicts the next three lines. Hit Tab to accept. Less writing, more navigating.
Composer. Write a high-level prompt: "Create a new settings page with dark mode toggle and save preference to database." Cursor searches the codebase, creates new files, modifies existing ones, and presents changes as a unified "Save All" operation.
The New User
This workflow birthed a new demographic: the Product Manager Developer. People who understand product logic but lack deep syntax knowledge. Using Cursor, they build production-grade applications by chaining AI commands: "Add a Stripe checkout button here," "Fix the alignment of this div."
This expands the Total Addressable Market beyond traditional developers. It's a key driver of the $1B revenue.
Cursor vs GitHub Copilot
| Feature | Cursor | GitHub Copilot |
|---|---|---|
| Architecture | VS Code Fork | VS Code Extension |
| Context | Full Codebase RAG | Neighboring tabs (improving) |
| Multi-File Edits | Composer (native) | Copilot Edits (recent) |
| Model Choice | Agnostic | OpenAI-centric (expanding) |
| Verification | Shadow Workspace | None |
| Pricing | $20/mo + usage | $10-19/mo |
Copilot's Response
GitHub is not ignoring the threat.
Copilot Workspace moves the environment to the cloud—a web-based "planning room" for brainstorming features, generating plans, and applying them. This sidesteps the local machine constraints Cursor exploits.
Copilot Edits adds multi-file editing, mimicking Composer. But it lacks the Shadow Workspace verification loop—Copilot's edits are more prone to breaking the build.
Multi-Model Support was recently announced for Claude and Gemini, eroding Cursor's model agnosticism advantage.
Copilot's Moat
Distribution is GitHub's moat. Copilot is bundled with GitHub Enterprise. For a CTO, enabling Copilot for 5,000 engineers is a checkbox. Deploying Cursor requires 5,000 engineers to uninstall their approved IDE and install a new binary.
This friction is Anysphere's biggest headwind. But individual developers are swiping corporate credit cards for the $20/month Pro plan. Once critical mass within an organization adopts Cursor, enterprise contracts follow. Product-led growth is bypassing the CTO.
Enterprise Features
To justify a $29B valuation, Cursor must win the Global 2000.
SOC 2 Type II. Non-negotiable badge for enterprise sales. Cursor has it.
Privacy Mode. Code sent to models is processed in memory and immediately discarded. Never written to disk, never used for training. Zero Data Retention agreements with OpenAI and Anthropic.
The Zscaler Problem. Corporate firewalls often perform SSL inspection, breaking Cursor's HTTP/2 streaming connections. Anysphere engineered HTTP/1.1 fallback modes and provides documentation for IT departments to whitelist their domains.
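The fallback behavior can be sketched with injected transports. The synchronous style and all names here are simplifications for illustration; this is not Cursor's code, only the shape of the logic.

```typescript
// Sketch of a transport fallback: try the streaming HTTP/2 channel
// first; if a middlebox kills it, retry over HTTP/1.1. Transports
// are injected functions so the logic runs without a real network.

type Transport = (url: string) => string;

function requestWithFallback(
  url: string,
  http2: Transport,
  http1: Transport,
): { body: string; protocol: "h2" | "http/1.1" } {
  try {
    return { body: http2(url), protocol: "h2" };
  } catch {
    // e.g. an SSL-inspecting proxy reset the HTTP/2 stream
    return { body: http1(url), protocol: "http/1.1" };
  }
}

// Simulated corporate network where the proxy breaks HTTP/2.
const brokenH2: Transport = () => {
  throw new Error("stream reset by middlebox");
};
const workingH1: Transport = () => "ok";

const result = requestWithFallback("https://example.invalid/chat", brokenH2, workingH1);
```

The request succeeds either way; the user behind the inspecting proxy just gets a less efficient, non-multiplexed connection.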
Admin Controls. SSO enforcement, usage governance (cap tokens per developer), remote indexing for massive monorepos that are too big to index on a laptop.
Criticisms and Limitations
Despite the hype, legitimate concerns exist.
The Pricing Controversy (June 2025)
Cursor faced backlash when it shifted from "unlimited" fast requests to a usage-based credit system. Heavy users—those engaged in extensive vibe coding—hit caps quickly and were forced into "slow pools" or asked to pay overages.
This shattered the illusion of "all-you-can-eat" AI and highlighted the brutal economics of wrapping expensive APIs. When your product is a thin layer over Claude, your margins are constrained by Anthropic's pricing.
For the broader implications of API wrapper economics, see Agent Economics.
The Fork Tax
Maintaining a fork is expensive. Every VS Code security patch and feature release must be merged. There have been instances where Cursor users were stuck on older VS Code versions, rendering newer extensions incompatible. This "lag" is persistent friction.
Resource Consumption
The Shadow Workspace doubles memory footprint for certain operations. On older laptops, Cursor feels sluggish compared to stock VS Code. This creates a barrier for developers without high-end hardware.
The Thesis
Cursor's rise validates the vertical agent thesis: the interface and the intelligence cannot be decoupled.
To realize the full potential of AI in complex knowledge work, you must control the entire environment—not just the model that generates text. Copilot is constrained by Microsoft's extension API. Cursor, by forking VS Code, paid a maintenance tax but gained architectural freedom.
The Shadow Workspace, semantic indexing, model agnosticism, speculative editing—none of these are possible as plugins. They require root access.
This pattern extends beyond code editors. Harvey controls the legal research environment. Abridge controls the clinical documentation environment. The winners in AI aren't building the best models; they're building the best environments for models to operate in.
GitHub has distribution. Cursor has architecture. In the short term, distribution wins deals. In the long term, architecture wins the category.
The future of software development isn't typing. It's decision-making. Cursor has built the first operating system for that future.
For the broader pattern of vertical agents outcompeting horizontal assistants, see Vertical Agents Are Winning. For Cognition's different bet on autonomous coding agents, see Devin Deep Dive.