Problem: You're Repeating the Same Instructions Across Every AI Request
MCP Prompts let you define reusable instruction templates once, store them server-side, and invoke them by name — with dynamic arguments. If you're copy-pasting the same system instructions into Cursor, Claude, or your own MCP client every session, you're doing it the hard way.
You'll learn:
- What MCP Prompts are and how they differ from Tools and Resources
- How to define prompt templates with typed arguments in TypeScript
- How to expose multi-turn conversation templates via the MCP protocol
Time: 20 min | Difficulty: Intermediate
Why This Happens
The Model Context Protocol separates concerns into three primitives: Tools (actions the model can invoke), Resources (data the model can read), and Prompts (reusable instruction templates). Most developers wire up Tools first and never touch Prompts — then wonder why they're re-describing their workflows from scratch every session.
MCP Prompts solve this by letting a server advertise a catalog of named prompt templates. The client (Cursor, Claude Desktop, your app) lists them, picks one, fills in arguments, and gets back a fully-formed messages array ready to pass to the model.
When you need MCP Prompts:
- You have system instructions shared across multiple agents or sessions
- Your prompts have dynamic slots — filenames, languages, task types
- You want non-technical teammates to trigger complex workflows without writing prompts
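The list-then-fill flow can be sketched at the wire level. These are plain-object sketches of the JSON-RPC shapes the MCP spec defines for prompts/get; the argument values are invented, and the prompt name matches the example defined later in this guide:

```typescript
// Client -> server: ask for the "code-review" prompt with arguments filled.
const getPromptRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "prompts/get",
  params: {
    name: "code-review",
    arguments: { language: "python", focus: "security" }, // example values
  },
};

// Server -> client: a fully rendered messages array, ready to hand to
// the model. (Prompt text abbreviated here.)
const getPromptResult = {
  description: "Code review prompt",
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: "You are a senior python engineer. Review the following code for security. ...",
      },
    },
  ],
};

console.log(getPromptResult.messages[0].content.type); // prints "text"
```

The client never sees your template internals; it only sends the name plus arguments and receives rendered messages.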
Solution
Step 1: Scaffold an MCP Server with the SDK
# Requires Node >= 20
npx @modelcontextprotocol/create-server my-prompt-server
cd my-prompt-server
npm install
Expected output:
✔ Created MCP server in ./my-prompt-server
✔ Dependencies installed
If it fails:
- npx: command not found → Install Node 20+ via nvm install 20
- EACCES permission error → Run npm config set prefix ~/.npm-global then retry
Step 2: Register a Basic Prompt
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
ListPromptsRequestSchema,
GetPromptRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
const server = new Server(
{ name: "prompt-server", version: "1.0.0" },
{ capabilities: { prompts: {} } } // declare prompts capability
);
server.setRequestHandler(ListPromptsRequestSchema, async () => {
return {
prompts: [
{
name: "code-review",
description: "Review code for bugs, style, and security issues",
arguments: [
{ name: "language", description: "Programming language", required: true },
{ name: "focus", description: "bugs | security | performance | style", required: false },
],
},
],
};
});
Step 3: Implement the GetPrompt Handler
server.setRequestHandler(GetPromptRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
if (name === "code-review") {
const language = args?.language ?? "unknown";
const focus = args?.focus ?? "bugs, security, and style";
return {
description: "Code review prompt",
messages: [
{
role: "user",
content: {
type: "text",
text: `You are a senior ${language} engineer. Review the following code for ${focus}.
For each issue found:
1. Quote the problematic line
2. Explain why it is a problem
3. Provide a corrected version`,
},
},
],
};
}
throw new Error(`Prompt not found: ${name}`);
});
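One detail the snippets above leave implicit: nothing runs until the server is connected to a transport. A minimal sketch of the entry point, assuming the server object from Step 2 and the SDK's stdio transport (top-level await assumes an ESM build):

```typescript
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Connect over stdio so a client (Inspector, Claude Desktop, Cursor)
// can spawn this server as a subprocess and exchange JSON-RPC messages
// on stdin/stdout.
const transport = new StdioServerTransport();
await server.connect(transport);
```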
Step 4: Add a Multi-Turn Prompt Template
Multi-turn templates let you bake few-shot examples directly into the template before the real task.
// In ListPrompts:
{
name: "sql-optimizer",
description: "Optimize a SQL query with step-by-step explanation",
arguments: [
{ name: "dialect", description: "postgres | mysql | sqlite", required: true },
{ name: "query", description: "The SQL query to optimize", required: true },
],
}
// In GetPrompt:
if (name === "sql-optimizer") {
const dialect = args?.dialect ?? "postgres";
const query = args?.query ?? "";
return {
messages: [
// Few-shot: show expected output format before the real query
{
role: "user",
content: { type: "text", text: "Optimize:\nSELECT * FROM orders WHERE customer_id = 123" },
},
{
role: "assistant",
content: { type: "text", text: "**Issue:** SELECT * fetches all columns.\n**Fix:** SELECT id, total, created_at FROM orders WHERE customer_id = 123\n**Index:** CREATE INDEX ON orders(customer_id)" },
},
// Real task
{
role: "user",
content: { type: "text", text: `Optimize this ${dialect} query:\n${query}` },
},
],
};
}
Step 5: Test with the MCP Inspector
npm run build
npx @modelcontextprotocol/inspector node dist/index.js
In the browser UI: Prompts → code-review → fill args → Get Prompt. Verify the rendered messages array.
If it fails:
- Cannot find module → Run npm run build first
- Blank inspector page → Port conflict on 5173; kill other dev servers
Step 6: Register in Claude Desktop or Cursor
{
"mcpServers": {
"prompt-server": {
"command": "node",
"args": ["/absolute/path/to/my-prompt-server/dist/index.js"]
}
}
}
Save to ~/Library/Application Support/Claude/claude_desktop_config.json (Claude Desktop) or .cursor/mcp.json (Cursor). Restart the client.
Verification
printf '%s\n' \
  '{"jsonrpc":"2.0","id":0,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":1,"method":"prompts/list","params":{}}' | node dist/index.js
After the initialize reply, you should see a JSON response listing code-review and sql-optimizer. The handshake comes first because MCP servers expect initialize before serving other requests.
What You Learned
- MCP Prompts are server-advertised, argument-parameterized message templates — not tool calls
- ListPrompts builds the catalog; GetPrompt renders a filled messages array
- Multi-turn templates embed few-shot examples inside a reusable prompt
- required: true arguments cause compliant clients to validate before calling GetPrompt
Limitation: Prompts are pull-based — the client must invoke them. They don't auto-inject like a system prompt. For always-on instructions, pair a Prompt with a Resource your client reads at startup.
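To illustrate that pairing in shapes only: the object below mirrors an MCP resources/read result a client could load at session start. The URI scheme and text are invented for illustration.

```typescript
// Hypothetical always-on instructions exposed as a Resource. A client
// that reads this at startup gets persistent context, while prompts
// like code-review stay on-demand.
const readResourceResult = {
  contents: [
    {
      uri: "instructions://house-style", // hypothetical URI scheme
      mimeType: "text/plain",
      text: "Follow the team style guide. Prefer explicit types.",
    },
  ],
};

console.log(readResourceResult.contents[0].mimeType); // prints "text/plain"
```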
Tested on @modelcontextprotocol/sdk 1.8.0, Node 22.x, Claude Desktop 0.9.x, Cursor 0.48.x
FAQ
Q: What is the difference between MCP Prompts and MCP Tools?
A: Tools are invoked by the model during inference to take actions. Prompts are invoked by the client before inference to structure the conversation. Prompts set up the request; Tools act while it runs.
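The contrast shows up in the two declaration shapes. These are illustrative objects: the tool name and schema are made up, while the prompt matches Step 2.

```typescript
// A Tool: the MODEL fills the inputSchema fields when it decides to call it.
const tool = {
  name: "run-tests", // hypothetical tool
  description: "Run the project test suite",
  inputSchema: {
    type: "object",
    properties: { path: { type: "string" } },
  },
};

// A Prompt: the CLIENT (or the user via a picker UI) fills the
// arguments before the conversation is sent to the model.
const prompt = {
  name: "code-review",
  description: "Review code for bugs, style, and security issues",
  arguments: [{ name: "language", required: true }],
};

console.log(prompt.arguments[0].name); // prints "language"
```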
Q: Can MCP Prompts include images or file content?
A: Yes. The content field supports type: "image" and type: "resource" alongside type: "text". For resources, the server embeds the contents (a URI plus text or a base64 blob) in the message, and the client forwards them to the model.
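A sketch of a GetPrompt result mixing text with an embedded resource. The file URI and contents are invented; the content shapes follow the MCP PromptMessage spec ("text" and "resource"):

```typescript
const promptResult = {
  messages: [
    {
      role: "user",
      content: { type: "text", text: "Summarize this config file:" },
    },
    {
      role: "user",
      content: {
        type: "resource",
        resource: {
          uri: "file:///app/config.yaml", // hypothetical path
          mimeType: "text/yaml",
          text: "port: 8080\nlog_level: info",
        },
      },
    },
  ],
};

console.log(promptResult.messages.length); // prints 2
```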
Q: Do I need to rebuild every time I update a prompt?
A: Only with compiled TypeScript. Use tsx watch src/index.ts for rapid iteration — no compile step during development.
Q: How many prompts can one server expose?
A: No protocol limit. In practice, keep one server per domain (code, SQL, docs) to keep the prompts/list UI manageable.
Q: Do MCP Prompts work with Claude Desktop and Cursor at the same time?
A: Yes. Register the same server config in both claude_desktop_config.json and .cursor/mcp.json — they connect independently. Changes to the server are reflected in both clients after restart.