Problem: Cursor Only Shows a Fixed Model List — You Want Your Own
Cursor ships with a curated model list (GPT-4o, Claude Sonnet, etc.), but you may need a model that isn't there — DeepSeek R1 for cost-efficient reasoning, Gemini 2.0 Flash for long context, or a specific Claude version you're already paying for via the Anthropic API.
The fix is Cursor's Custom Models feature. It lets you bring any OpenAI-compatible endpoint into the IDE.
You'll learn:
- How to add Claude, Gemini, and DeepSeek as custom models in Cursor
- The exact model strings and base URLs for each provider
- How to verify the models work and fix the most common errors
Time: 10 min | Difficulty: Beginner
Why Custom Models Exist in Cursor
Cursor routes all AI requests through its own proxy by default. When you add a custom model, you bypass the proxy for that model and hit the provider's API directly using your own key. That means:
- You control cost — charges go to your own API account, not Cursor's usage pool
- You get access to unreleased or niche models — anything the provider exposes via API
- Rate limits are yours — useful if you're on a high-tier API plan
The trade-off: custom models don't count toward Cursor's fast request quota, and some Cursor features (like auto-complete) only work with a subset of supported models.
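Concretely, "hitting the provider's API directly" means Cursor sends a standard OpenAI-style chat-completions request to the provider's base URL, authenticated with your key. A minimal sketch of that request shape, built offline with no network call (the DeepSeek URL and model string are the ones used later in this guide):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build the OpenAI-style chat-completions request that any compatible endpoint accepts."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # your key, billed to your account
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)

# Same request shape for every provider; only base URL, key, and model string change:
url, headers, body = build_chat_request(
    "https://api.deepseek.com/v1", "sk-...", "deepseek-chat", "hello"
)
print(url)  # https://api.deepseek.com/v1/chat/completions
```

This is why one "custom model" form in Cursor covers so many providers: the protocol is identical, only the three fields vary.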
Solution
Step 1: Open Cursor Model Settings
- Open Cursor
- Press `Cmd + ,` (macOS) or `Ctrl + ,` (Windows/Linux) to open Settings
- Navigate to Models in the left sidebar
You'll see the default model list at the top and a + Add Model button below it.
Step 2: Add Claude (Anthropic)
Anthropic's API is not natively OpenAI-compatible, but Cursor supports it directly — no base URL override needed.
Model strings for Claude (copy exactly):
| Model | String |
|---|---|
| Claude Opus 4.5 | claude-opus-4-5 |
| Claude Sonnet 4.5 | claude-sonnet-4-5 |
| Claude Haiku 4.5 | claude-haiku-4-5-20251001 |
In Cursor Settings → Models:
- Click + Add Model
- Enter the model string (e.g. `claude-sonnet-4-5`)
- Leave Base URL blank — Cursor uses the Anthropic endpoint automatically
- Go to API Keys section in the same Settings panel
- Paste your Anthropic API key into the Anthropic API Key field
- API Key field: `sk-ant-api03-...`
- Model string: `claude-sonnet-4-5`
- Base URL: (leave empty)
Click Verify to confirm the key works. You should see a green checkmark.
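If Verify fails and you want to rule out Cursor itself, you can check the key directly against Anthropic's Messages API. Note the auth style differs from OpenAI-compatible providers: Anthropic uses an `x-api-key` header plus a required `anthropic-version` header, not a Bearer token. A hedged sketch using only the standard library — it builds a 1-token request and only sends it if `ANTHROPIC_API_KEY` is set in your environment:

```python
import json
import os
import urllib.request

def build_anthropic_ping(api_key: str) -> urllib.request.Request:
    """Build a minimal Messages API request that confirms the key is accepted."""
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps({
            "model": "claude-sonnet-4-5",
            "max_tokens": 1,  # smallest possible response; near-zero cost
            "messages": [{"role": "user", "content": "ping"}],
        }).encode(),
        headers={
            "x-api-key": api_key,               # Anthropic auth header, not Bearer
            "anthropic-version": "2023-06-01",  # required on every request
            "content-type": "application/json",
        },
    )

key = os.environ.get("ANTHROPIC_API_KEY")
if key:
    with urllib.request.urlopen(build_anthropic_ping(key)) as resp:
        print("key ok" if resp.status == 200 else f"unexpected status {resp.status}")
else:
    print("no ANTHROPIC_API_KEY set; request built but not sent")
```

A 401 here (raised as an `HTTPError`) means the key itself is bad, independent of anything Cursor does.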
Step 3: Add Gemini (Google)
Google exposes Gemini via an OpenAI-compatible endpoint. Use this base URL:
https://generativelanguage.googleapis.com/v1beta/openai/
Model strings for Gemini:
| Model | String |
|---|---|
| Gemini 2.5 Pro | gemini-2.5-pro-preview-03-25 |
| Gemini 2.0 Flash | gemini-2.0-flash |
| Gemini 2.0 Flash Lite | gemini-2.0-flash-lite |
In Cursor Settings → Models:
- Click + Add Model
- Enter the model string (e.g. `gemini-2.0-flash`)
- Set Base URL to `https://generativelanguage.googleapis.com/v1beta/openai/`
- In the OpenAI API Key field, paste your Google AI Studio API key
- API Key field: `AIzaSy...`
- Model string: `gemini-2.0-flash`
- Base URL: `https://generativelanguage.googleapis.com/v1beta/openai/`
Get your Google AI Studio key at aistudio.google.com/apikey.
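Because Google's endpoint speaks the OpenAI protocol, you can sanity-check the key and model string with a plain chat-completions request before involving Cursor. A sketch using only the standard library — it builds the request offline and only sends it if `GEMINI_API_KEY` is set:

```python
import json
import os
import urllib.request

BASE = "https://generativelanguage.googleapis.com/v1beta/openai/"

def build_gemini_check(api_key: str, model: str = "gemini-2.0-flash") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request against Google's compat endpoint."""
    return urllib.request.Request(
        BASE.rstrip("/") + "/chat/completions",  # rstrip avoids a double slash
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": "Say OK"}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # AI Studio key goes in a Bearer header here
            "Content-Type": "application/json",
        },
    )

key = os.environ.get("GEMINI_API_KEY")
if key:
    with urllib.request.urlopen(build_gemini_check(key)) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print("endpoint OK:", reply)
else:
    print("no GEMINI_API_KEY set; request built but not sent")
```

If this call succeeds but Cursor still fails, the problem is in the Cursor configuration (base URL or toggle), not the key or model string.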
Step 4: Add DeepSeek
DeepSeek's API is OpenAI-compatible and among the cheapest options for reasoning-class models.
Base URL: https://api.deepseek.com/v1
Model strings for DeepSeek:
| Model | String | Use case |
|---|---|---|
| DeepSeek Chat | deepseek-chat | General coding, fast |
| DeepSeek Reasoner | deepseek-reasoner | Complex logic, slower |
In Cursor Settings → Models:
- Click + Add Model
- Enter `deepseek-chat` or `deepseek-reasoner`
- Set Base URL to `https://api.deepseek.com/v1`
- Paste your DeepSeek API key into the OpenAI API Key field
- API Key field: `sk-...`
- Model string: `deepseek-chat`
- Base URL: `https://api.deepseek.com/v1`
Get your DeepSeek key at platform.deepseek.com.
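Both DeepSeek models return standard OpenAI-format responses; per DeepSeek's API docs, `deepseek-reasoner` additionally puts its chain of thought in a separate `reasoning_content` field on the message, which plain OpenAI parsers simply ignore. A parsing sketch against a sample response (the payload below is illustrative, not captured API output):

```python
import json

# Illustrative OpenAI-format response; deepseek-reasoner adds "reasoning_content".
sample = json.loads("""
{
  "model": "deepseek-reasoner",
  "choices": [{
    "message": {
      "role": "assistant",
      "reasoning_content": "First compare the two branches of the loop...",
      "content": "The bug is in the loop bound."
    }
  }],
  "usage": {"prompt_tokens": 21, "completion_tokens": 84}
}
""")

msg = sample["choices"][0]["message"]
answer = msg["content"]                  # the final answer, OpenAI-standard field
thoughts = msg.get("reasoning_content")  # reasoner-only; None for deepseek-chat
print(answer)
```

This is why `deepseek-reasoner` works in Cursor's chat without special handling: the answer lives in the standard `content` field either way.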
Step 5: Select the Model in Chat or Composer
Once added, the model appears in the model picker dropdown in:
- Chat panel — click the model name at the bottom of the chat input
- Composer — same dropdown in the Composer toolbar
- Inline edit (`Cmd+K`) — model selector in the floating bar
Custom models are marked with a small person icon (indicating they use your API key).
Verification
Open the Chat panel, select your new model, and send a test prompt:
What model are you and who made you?
You should see: The model correctly identifying itself — e.g. "I'm Claude, made by Anthropic" or "I'm Gemini, a large language model from Google."
If the model selector shows the name but responses fail, check the troubleshooting section below.
Troubleshooting
Error: Invalid API Key
- Double-check the key has no trailing spaces
- Confirm the key is active in the provider's dashboard
- For Gemini: make sure you're using an AI Studio key, not a Google Cloud Vertex key
Error: Model not found
- The model string is case-sensitive — copy it exactly from the tables above
- For Gemini: confirm the model is available in your region at ai.google.dev/gemini-api/docs/models
Responses are very slow
- DeepSeek Reasoner and Claude Opus are inherently slower — switch to Flash or Sonnet for everyday tasks
- Check provider status pages for ongoing incidents
Model appears in list but doesn't show in Composer
- Restart Cursor after adding models — the Composer model list sometimes needs a reload
- Confirm the model is toggled ON in the Models list (the toggle to the left of the model name)
What You Learned
- Cursor supports any OpenAI-compatible API endpoint via the custom model system
- Claude uses Cursor's native Anthropic integration — no base URL needed
- Gemini and DeepSeek both use OpenAI-compatible endpoints with different base URLs
- Custom models bill directly to your API account, not Cursor's usage quota
When to use each model in Cursor:
- Claude Sonnet — best overall for code generation and refactoring
- Gemini 2.0 Flash — fastest option, good for large file context (1M token window)
- DeepSeek Chat — most cost-effective for high-volume usage
- DeepSeek Reasoner — step-by-step debugging of tricky logic bugs
Tested on Cursor 0.48, macOS Sequoia 15.3 and Windows 11