## Problem: Cursor's AI Can't Touch Your Real Tools
By default, Cursor's AI only sees your code files. It can't query your database, call your internal API, read your Notion docs, or run a shell command unless you paste output manually.
MCP (Model Context Protocol) fixes this. It gives Cursor's Agent mode direct access to external tools — and the AI calls them automatically during your session.
You'll learn:
- How MCP works inside Cursor's Agent mode
- How to configure local and remote MCP servers in `.cursor/mcp.json`
- Three working integrations: filesystem, PostgreSQL, and a custom HTTP API
Time: 20 min | Difficulty: Intermediate
## Why MCP Changes How Cursor Works
Without MCP, Cursor's AI is read-only on your codebase. You describe what a function should do; the AI writes it blind.
With MCP, the AI can call a tool mid-conversation. Ask it to "write a migration that matches the current schema" — it queries your live database, reads the tables, then writes the migration. No copy-paste.
MCP is a JSON-RPC protocol. Each MCP server exposes a list of named tools. Cursor's Agent sends a `tools/call` request; the server executes it and returns results. The AI sees the output as context, then continues.
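Concretely, one tool invocation is one JSON-RPC request and one response. A sketch with illustrative values (the id, tool name, and arguments depend on your servers):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "postgres_query",
    "arguments": { "sql": "SELECT 1" }
  }
}
```

The server replies with content the model reads as ordinary context:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "1" }]
  }
}
```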
What this looks like in practice:
```
You: "Add an index for the orders table, check what columns exist first"

Cursor: [calls MCP tool: postgres_query("SELECT column_name FROM
         information_schema.columns WHERE table_name='orders'")]
        [receives column list]
        "Here's the migration based on your current schema..."
```
## How Cursor Loads MCP Servers
Cursor reads MCP config from two locations:
| File | Scope |
|---|---|
| `~/.cursor/mcp.json` | Global — all projects |
| `.cursor/mcp.json` | Project-local — checked into repo |
Project-local config takes precedence for overlapping server names. Both files use the same JSON schema.
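As a sketch of the precedence rule (server names and paths here are illustrative): if `~/.cursor/mcp.json` defines a server named `postgres` pointing at one database and the project's `.cursor/mcp.json` defines `postgres` pointing at another, Agent mode in that project uses the project-local entry:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/project_db"]
    }
  }
}
```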
The config format:
```json
{
  "mcpServers": {
    "server-name": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
      "env": {
        "OPTIONAL_VAR": "value"
      }
    }
  }
}
```
Each server entry needs either:

- `command` + `args` — for stdio servers (spawned as child processes)
- `url` — for SSE/HTTP servers (remote or locally hosted)
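A `url`-based entry, for comparison, is just an endpoint. A sketch assuming a hypothetical SSE server already running locally on port 8765:

```json
{
  "mcpServers": {
    "remote-tools": {
      "url": "http://localhost:8765/sse"
    }
  }
}
```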
Cursor auto-starts stdio servers when Agent mode activates. No manual `npm start` required.
## Solution

### Step 1: Create Your MCP Config File
For a project-scoped setup (recommended — keeps config in version control):
```shell
mkdir -p .cursor
touch .cursor/mcp.json
```
Start with an empty valid config:
```json
{
  "mcpServers": {}
}
```
Open Cursor, go to Settings → Features → MCP and confirm "MCP enabled" is toggled on. Cursor 0.43+ has this by default.
### Step 2: Add the Filesystem Server
The official `@modelcontextprotocol/server-filesystem` server gives the AI read/write access to a directory you specify.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/myapp"
      ]
    }
  }
}
```
Why specify the path explicitly: MCP filesystem servers sandbox access to the directory you pass. Don't use `/` — scope it to your project root.
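The filesystem server accepts more than one directory argument, so you can allow several roots while keeping the sandbox tight. A sketch with a second, hypothetical path added:

```json
"filesystem": {
  "command": "npx",
  "args": [
    "-y",
    "@modelcontextprotocol/server-filesystem",
    "/Users/you/projects/myapp",
    "/Users/you/projects/shared-libs"
  ]
}
```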
Save the file, then reload the Cursor window (Cmd+Shift+P → "Reload Window"). Open Agent mode (Cmd+I) and ask:

```
List all TypeScript files in the src/ directory
```

Expected: The AI calls `list_directory` and returns actual file names — not a guess.
If it fails:
- `spawn npx ENOENT` → Node.js isn't on your PATH. Run `which node` in a terminal. If missing, install via `nvm`.
- Server shows "failed" in MCP settings → Check that the exact path in `args` exists. Relative paths don't work here — use absolute paths.
### Step 3: Add a PostgreSQL Server
The `@modelcontextprotocol/server-postgres` server lets the AI run read-only queries against your database.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/myapp"
      ]
    },
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```
This server exposes two tools: `query` (runs SQL) and `list_tables`.
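Under the hood, asking the Agent about your schema becomes a `tools/call` request like this sketch (assuming the query tool takes a single `sql` argument; the SQL itself is just an example):

```json
{
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": {
      "sql": "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
    }
  }
}
```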
Security note: The connection string is in plaintext. For local dev this is fine. For shared repos, use environment variable substitution:

```json
"postgres": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-postgres"],
  "env": {
    "DATABASE_URL": "${DATABASE_URL}"
  }
}
```
Cursor interpolates `${VAR}` from your shell environment when spawning the server.
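A quick sanity check that the variable will actually reach a spawned server: export it in the shell you launch Cursor from, then read it back from a child process (the connection string here is an example):

```shell
# Export in the shell that launches Cursor; spawned servers inherit it.
export DATABASE_URL="postgresql://localhost:5432/mydb"

# A child process (standing in for the MCP server) sees the same value.
sh -c 'echo "$DATABASE_URL"'   # prints postgresql://localhost:5432/mydb
```

Note that a GUI-launched Cursor may not inherit your shell profile; launching from a terminal (e.g. with the `cursor` shell command, if installed) guarantees the variable is visible.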
Test it in Agent mode:

```
What tables are in my database and what are their row counts?
```

Expected output:

```
Calling list_tables...
Found: users (1,204 rows), orders (8,432 rows), products (342 rows)
```
If it fails:
- `connection refused` → Your Postgres isn't running locally. Start it with `brew services start postgresql` or `pg_ctl start`.
- `role "you" does not exist` → The connection string's user doesn't match your DB user. Check with `psql -l`.
### Step 4: Add a Custom HTTP API Server
For internal APIs or tools without a published MCP server, you have two options: write a stdio server in Node/Python, or run a local SSE server.
Here's a minimal Node.js stdio MCP server that wraps a REST API. Note that the SDK's `setRequestHandler` takes a request schema object (imported from the SDK's `types.js`), not a raw method string:

```javascript
// mcp-servers/internal-api.js
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "internal-api", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server exposes
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_feature_flags",
      description: "Fetch current feature flags from the internal config API",
      inputSchema: {
        type: "object",
        properties: {
          environment: { type: "string", enum: ["dev", "staging", "prod"] }
        },
        required: ["environment"]
      }
    }
  ]
}));

// Execute a tool call and return its result as text content
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_feature_flags") {
    const env = request.params.arguments.environment;
    // Replace with your actual internal API call
    const response = await fetch(`http://localhost:3001/flags?env=${env}`);
    const flags = await response.json();
    return {
      content: [{ type: "text", text: JSON.stringify(flags, null, 2) }]
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
Install the SDK:

```shell
npm install @modelcontextprotocol/sdk
```
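One gotcha: the server script uses ES module `import` syntax, so the package.json in that directory needs `"type": "module"` (or rename the file to `.mjs`). A minimal sketch, with an illustrative SDK version:

```json
{
  "type": "module",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0"
  }
}
```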
Register it in `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "filesystem": { "...": "..." },
    "postgres": { "...": "..." },
    "internal-api": {
      "command": "node",
      "args": ["./mcp-servers/internal-api.js"]
    }
  }
}
```
Now the AI can call your internal API mid-conversation without you copy-pasting JSON responses.
### Step 5: Verify All Servers Are Active
In Cursor, open Settings → Features → MCP. Each server should show a green dot and list its available tools.
If a server shows red:
- Click the server name to see the error log
- Common fix: the command path is wrong — test it directly in terminal first
- Check your Node version: the MCP SDK requires Node 18+

```shell
node --version  # must be >= 18.0.0
```
## Verification
Open Agent mode and run this end-to-end test:

```
Check what columns exist in the users table, then write a TypeScript
interface that matches the schema exactly.
```
You should see:
- Cursor calls `postgres > query` with a `SELECT column_name, data_type` statement
- Receives column data
- Generates a TypeScript interface with exact field names and types
The whole flow completes without you manually providing the schema.
## Production Considerations
Don't expose write tools carelessly. The postgres server is read-only by default — keep it that way for dev. If you need write access, scope it to a test database only.

Log what the AI calls. Add a `console.error` line to your custom server's `tools/call` handler. Logs go to stderr, which Cursor captures and shows in the MCP panel.
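A minimal sketch of such a logging helper (the function name is hypothetical; the constraint that matters is writing to stderr, since stdout carries the JSON-RPC stream):

```javascript
// Hypothetical helper: one log line per tool invocation.
// stdout is reserved for JSON-RPC messages, so log to stderr only.
function formatToolCallLog(name, args) {
  return `[mcp] tool=${name} args=${JSON.stringify(args)}`;
}

// Inside your tools/call handler:
console.error(formatToolCallLog("get_feature_flags", { environment: "dev" }));
// stderr: [mcp] tool=get_feature_flags args={"environment":"dev"}
```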
Commit `.cursor/mcp.json` to your repo. Team members get the same tool setup automatically. Use `${ENV_VAR}` for secrets so nobody commits credentials.

Check server startup time. Cursor spawns stdio servers once per session. If your custom server takes more than 5 seconds to start (cold npm install, slow API auth), the first Agent call will feel laggy. Pre-install dependencies and keep startup synchronous.
## What You Learned
- MCP gives Cursor's Agent real tool access — database, filesystem, APIs — not just code context
- `.cursor/mcp.json` is the config file; project-level config is version-controllable
- Stdio servers are child processes Cursor manages automatically; `npx -y` handles install
- Custom servers follow a simple request-handler pattern with the official MCP SDK
- Read-only tool access is safe for dev; scope write access carefully
*Tested on Cursor 0.43, Node 20.11, macOS 15 and Ubuntu 24.04*