MCP with Cursor and Windsurf: IDE Integration Guide 2026

Connect Model Context Protocol servers to Cursor and Windsurf IDEs. Configure MCP tools, debug connections, and boost AI coding workflows in 20 min.

Problem: Your AI IDE Can't See Your Tools

Cursor and Windsurf are powerful AI coding environments — but out of the box, their AI agents only know what's in your codebase. They can't query your database, read Notion docs, call your internal APIs, or check GitHub issues without leaving the editor.

Model Context Protocol (MCP) fixes this. It lets you wire external tools directly into the AI's context so it can act on real data without you copy-pasting between tabs.

You'll learn:

  • How to configure MCP servers in both Cursor and Windsurf
  • How to connect real tools: filesystem, GitHub, PostgreSQL, and custom servers
  • How to debug broken MCP connections when the AI silently ignores your tools

Time: 20 min | Difficulty: Intermediate


Why MCP Changes How AI Coding Works

Without MCP, your AI assistant answers questions using only what you paste in. With MCP, it can call tools — read a file, run a query, fetch a GitHub issue — and reason over the result in the same turn.

MCP uses a client-server model: your IDE is the client, and each external tool runs as an MCP server. The IDE discovers servers from a config file, connects over stdio or HTTP/SSE, and exposes their tools to the AI at inference time.
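Under the hood this is JSON-RPC 2.0. A rough sketch of the client-side messages in one session (the method names come from the MCP spec; the protocolVersion string and clientInfo values here are illustrative):

```json
// IDE → server: open the session and negotiate capabilities
{"jsonrpc": "2.0", "id": 1, "method": "initialize",
 "params": {"protocolVersion": "2024-11-05", "capabilities": {},
            "clientInfo": {"name": "example-ide", "version": "1.0"}}}

// IDE → server (after the initialize reply and an initialized notification): discovery
{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

// IDE → server: the model decided to invoke a discovered tool
{"jsonrpc": "2.0", "id": 3, "method": "tools/call",
 "params": {"name": "read_file", "arguments": {"path": "schema.sql"}}}
```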

The practical result: instead of "here's my schema, write a migration," you say "check the current schema and write a migration" — and the AI actually reads it.


Step 1: Understand the MCP Config Format

Both Cursor and Windsurf use a JSON config file to declare MCP servers. The structure is the same across both IDEs — only the file location differs.

{
  "mcpServers": {
    "server-name": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/root"],
      "env": {
        "OPTIONAL_ENV_VAR": "value"
      }
    }
  }
}

Each key under mcpServers is an arbitrary name you choose. The command and args together define the child process the IDE spawns. The server communicates over stdio by default.

Three transport types you'll encounter:

  • stdio: Local servers (filesystem, local DB) — most common
  • http+sse: Remote or containerized servers
  • websocket: Less common; some custom servers

For this guide, all examples use stdio unless noted.


Step 2: Configure MCP in Cursor

Cursor reads MCP config from two locations. Project-level config overrides global config.

Global config (applies to all projects):

~/.cursor/mcp.json

Project-level config (checked into your repo):

.cursor/mcp.json

Create the global config first:

mkdir -p ~/.cursor
touch ~/.cursor/mcp.json

Add your first server — the official filesystem server is the easiest to test with:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/projects"
      ]
    }
  }
}

# Verify npx can resolve the package before Cursor tries to spawn it
npx -y @modelcontextprotocol/server-filesystem --help

Expected output: usage or startup text from the server. Anything other than a module-resolution error means npx can find and run the package; once connected, the IDE will expose tools like read_file, write_file, and list_directory.

Now reload Cursor: Cmd+Shift+P → "Reload Window". Open the Composer (Cmd+I) and check the tool icon — connected MCP servers appear as available tools in the sidebar.

If tools don't appear:

  • No tool icon in Composer → MCP is disabled; go to Settings → Features → Enable MCP
  • Icon shows but server missing → JSON syntax error; validate with cat ~/.cursor/mcp.json | python3 -m json.tool
  • Server listed but disconnected → Check the Output panel: View → Output → "MCP Logs"
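Beyond eyeballing the JSON, you can script the sanity check. A sketch that validates the file and prints each declared server's spawn command; it runs against a throwaway sample config here (point CONFIG at ~/.cursor/mcp.json to check the real one):

```shell
# Sample config in a temp file so this is safe to run as-is
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{"mcpServers": {"filesystem": {"command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]}}}
EOF

# json.load raises on a syntax error; otherwise list the servers
python3 - "$CONFIG" <<'EOF'
import json, sys
cfg = json.load(open(sys.argv[1]))
for name, spec in cfg["mcpServers"].items():
    print(name + ": " + spec["command"] + " " + " ".join(spec.get("args", [])))
EOF
# prints: filesystem: npx -y @modelcontextprotocol/server-filesystem /tmp
```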

Step 3: Configure MCP in Windsurf

Windsurf stores its MCP config here:

~/.codeium/windsurf/mcp_config.json

The format is identical to Cursor. Create it if it doesn't exist:

mkdir -p ~/.codeium/windsurf
touch ~/.codeium/windsurf/mcp_config.json

Add the filesystem server to test:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/projects"
      ]
    }
  }
}

Reload Windsurf and open a Cascade chat. MCP tools appear as a plug icon in the Cascade toolbar. Click it to see connected servers and their available tools.

If a server shows as "failed":

  • Open the Windsurf Output panel → "Cascade MCP"
  • The raw stderr from the server process appears here — this is where most debug info lives

Step 4: Add a GitHub MCP Server

The filesystem server proves the plumbing works. GitHub is where MCP gets genuinely useful — the AI can read issues, PRs, and repo state without you copying anything.

Install the GitHub MCP server globally (optional: the npx -y invocation in the config below fetches it on demand, so a global install only skips the first-run download):

npm install -g @modelcontextprotocol/server-github

Create a GitHub Personal Access Token with repo and read:org scopes at github.com/settings/tokens. Fine-grained tokens work; set permissions to "Contents: Read" and "Issues: Read/Write" at minimum.

Add to your MCP config (same file for Cursor or Windsurf):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    }
  }
}

Reload the IDE. Now test it in the AI chat:

List the open issues in myorg/myrepo, sorted by most recently updated

The AI calls list_issues via MCP and returns real data — no copy-paste, no context switching.


Step 5: Add a PostgreSQL MCP Server

Database access is the highest-leverage MCP integration for most backend developers. The AI can inspect your schema, check data, and write accurate migrations.

npm install -g @modelcontextprotocol/server-postgres  # optional; npx -y fetches it on demand

Add to config:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}

For databases that require a password, include the credentials in the connection URL itself (the server takes the URL as a positional argument):

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:password@localhost:5432/mydb"
      ]
    }
  }
}

Security note: Never commit connection strings to a project-level .cursor/mcp.json. Use ~/.cursor/mcp.json for secrets, or reference environment variables that your shell already exports.
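One way to keep the secret out of every JSON file is a small launcher script (the ~/bin/mcp-postgres.sh path is a hypothetical choice) that reads the connection string from the environment and hands it to the server as its argument:

```shell
mkdir -p ~/bin
cat > ~/bin/mcp-postgres.sh <<'EOF'
#!/bin/sh
# POSTGRES_CONNECTION_STRING must be set in the environment the IDE is
# launched from; a GUI-launched IDE may not read your shell profile.
exec npx -y @modelcontextprotocol/server-postgres "$POSTGRES_CONNECTION_STRING"
EOF
chmod +x ~/bin/mcp-postgres.sh
```

Then point the config's "command" at the script with an absolute path and no args; nothing sensitive ever touches a checked-in file.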

Test it:

What tables exist in the database and what are the column types for the users table?

The AI calls list_tables and describe_table and gives you a schema summary it actually read — not hallucinated.


Step 6: Use a Remote MCP Server (HTTP/SSE)

Some MCP servers run as persistent HTTP services rather than spawned processes. This is common for team-shared servers or containerized tools.

{
  "mcpServers": {
    "my-internal-api": {
      "url": "http://localhost:3100/sse",
      "transport": "sse"
    }
  }
}

For authenticated remote servers:

{
  "mcpServers": {
    "my-internal-api": {
      "url": "https://mcp.internal.company.com/sse",
      "transport": "sse",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}
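Before pointing the config at a remote endpoint, it's worth confirming the URL actually speaks SSE. A sketch using curl, run here against a throwaway local stub so the example is self-contained; substitute your real URL and Authorization header:

```shell
# Stand-in SSE endpoint on port 3100: serves one request with the
# right Content-Type, then exits (a real MCP server keeps streaming).
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer
class Stub(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.end_headers()
        self.wfile.write(b"event: endpoint\ndata: /messages\n\n")
    def log_message(self, *args): pass
HTTPServer(("127.0.0.1", 3100), Stub).handle_request()
EOF
sleep 1

# The actual check: the response headers should advertise an event stream
resp=$(curl -si http://127.0.0.1:3100/sse)
printf '%s\n' "$resp" | grep -i '^content-type'
```

If the header is text/html or the request hangs with no headers at all, the URL is not an SSE endpoint and the IDE will show the server as failed.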

Windsurf note: As of Windsurf 1.x, SSE transport is supported but sometimes requires a manual reconnect after the server restarts. If tools disappear, toggle the MCP connection off and on in the Cascade toolbar.


Step 7: Build a Minimal Custom MCP Server

When no existing server covers your tool, build one. The MCP SDK makes this straightforward.

mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk
npm install -D typescript @types/node   # needed for the tsc build below

// index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-tool-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Declare available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_deployment_status",
      description: "Returns current deployment status for a service",
      inputSchema: {
        type: "object",
        properties: {
          service: { type: "string", description: "Service name" },
        },
        required: ["service"],
      },
    },
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_deployment_status") {
    const service = request.params.arguments?.service as string;

    // Replace with your real logic: fetch from CI/CD API, k8s, etc.
    const status = await fetchDeploymentStatus(service);

    return {
      content: [{ type: "text", text: JSON.stringify(status) }],
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

async function fetchDeploymentStatus(service: string) {
  // Your actual API call here
  return { service, status: "healthy", lastDeploy: new Date().toISOString() };
}

// Start server (top-level await requires ESM: set "type": "module" in package.json)
const transport = new StdioServerTransport();
await server.connect(transport);

Build and register it:

npx tsc && node dist/index.js  # smoke test: the process should start and sit waiting on stdin

Then add it to your mcp.json:

{
  "mcpServers": {
    "my-tool-server": {
      "command": "node",
      "args": ["/absolute/path/to/my-mcp-server/dist/index.js"]
    }
  }
}

Use absolute paths — relative paths fail silently because the IDE spawns the process from a different working directory.
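You can also drive the built server by hand, speaking the same newline-delimited JSON-RPC the IDE does. The frames below follow the MCP stdio transport (the protocolVersion string is an assumption; match it to your SDK revision). The last printf just echoes the frames so you can inspect them; the commented line is the real invocation against your build:

```shell
init='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke","version":"0.0.1"}}}'
ready='{"jsonrpc":"2.0","method":"notifications/initialized"}'
list='{"jsonrpc":"2.0","id":2,"method":"tools/list"}'

# Real smoke test (adjust the path); look for a reply naming get_deployment_status:
#   printf '%s\n' "$init" "$ready" "$list" | node /absolute/path/to/my-mcp-server/dist/index.js
printf '%s\n' "$init" "$ready" "$list"
```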


Verification

After configuring all servers, verify the full setup:

In Cursor:

Open Composer → click the Tools icon → confirm each server shows "Connected" with its tool count

In Windsurf:

Open Cascade → click the plug icon → each server should show green with tool names listed

Test all three server types with a single prompt:

List the files in my projects directory, check GitHub for open PRs in myorg/myrepo,
and tell me what tables exist in my database

You should see the AI make three separate tool calls and synthesize the results into one response.

Debug checklist if something is broken:

  • Server listed as "connecting" forever → check whether node / npx is in PATH — IDEs sometimes use a different PATH than your shell
  • Tools appear but the AI never calls them → rephrase the prompt to explicitly mention the tool name; some models are conservative about tool use
  • SSE server disconnects after idle → add a keep-alive ping to your server, or configure reconnect: true in the config
  • Works in Cursor, not Windsurf → the Windsurf config path is different — confirm you edited ~/.codeium/windsurf/mcp_config.json

What You Learned

  • MCP config format is identical across Cursor and Windsurf — only the file path differs
  • stdio servers are spawned per-session; SSE servers are persistent and shared
  • Never store secrets in project-level MCP config files
  • Use absolute paths for locally-built MCP servers — relative paths fail silently
  • The MCP Logs output panel is the fastest way to diagnose connection failures

When not to use MCP: For one-off lookups, copy-pasting is faster than setting up a server. MCP pays off when you're querying the same external system repeatedly across a session or a team.

Tested on Cursor 0.47, Windsurf 1.9, @modelcontextprotocol/sdk 1.8, Node.js 22, macOS Sequoia and Ubuntu 24.04