Problem: Repeating the Same AI Prompts Every Day
You keep typing "review this code for security issues" or "add JSDoc comments" into Cursor's chat - wasting 5 minutes per request on prompts you've refined over weeks.
You'll learn:
- How to create custom slash-commands in Cursor
- Where to store reusable prompt templates
- How to build commands that accept file context
Time: 15 min | Level: Beginner
Why This Matters
Cursor's built-in commands (/edit, /fix) are generic. Custom commands let you encode your team's standards, your specific tech stack, and battle-tested prompts.
Common use cases:
- Code review with your company's checklist
- Generate tests using your preferred framework
- Refactor following your style guide
- Documentation that matches your template
Solution
Step 1: Find Your Cursor Rules File
Cursor reads custom commands from .cursorrules in your project root.
```shell
# Create the file if it doesn't exist
touch .cursorrules
```
Expected: File appears in your project root (same level as .git/)
If it fails:
- No file created: Check you're in the project root with pwd
- Cursor doesn't recognize it: Restart Cursor IDE
Step 2: Write Your First Slash-Command
Open .cursorrules and add this structure:
# .cursorrules
# Code Review Command
/review:
description: "Review code for security, performance, and best practices"
prompt: |
Review the selected code for:
1. **Security issues:**
- SQL injection vulnerabilities
- XSS risks in user input handling
- Hardcoded secrets or API keys
2. **Performance:**
- Unnecessary re-renders (React)
- N+1 queries (database)
- Memory leaks in event listeners
3. **Code quality:**
- Follow Airbnb style guide
- Proper error handling
- TypeScript types (no `any`)
Format as:
- ✅ Looks good / ⚠️ Issue found
- Specific line numbers
- Suggested fix in code block
Why this works: The YAML structure tells Cursor to register /review as a command. The prompt field is what gets sent to Claude when you invoke it.
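Assuming the same structure holds for every command, additional commands are just more top-level keys in the same file (the /explain command below is an invented example, not a built-in):

```yaml
# .cursorrules - each top-level key registers another command
/explain:
  description: "Explain selected code in plain English"
  prompt: |
    Explain what this code does, step by step,
    for a developer who is new to the codebase.
```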
Step 3: Test Your Command
- Select some code in Cursor
- Open Composer (Cmd+K / Ctrl+K)
- Type /review and press Enter
Expected: Cursor runs your custom prompt against the selected code and returns structured feedback.
If it fails:
- Command not showing: Save .cursorrules and restart Cursor
- Syntax error: Check YAML indentation (use 2 spaces, not tabs)
- No output: Verify you selected code before running command
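The tabs-vs-spaces rule is the most common cause of silent failures. If you want a quick check, this small shell helper (an illustration for this guide, not a Cursor feature) flags literal tab characters in a file:

```shell
# check_tabs: report whether a file contains literal tab characters,
# which YAML forbids for indentation.
check_tabs() {
  if grep -q "$(printf '\t')" "$1"; then
    echo "tabs found in $1 - replace with 2-space indentation"
    return 1
  fi
  echo "indentation OK"
}
```

Run `check_tabs .cursorrules` from the project root before restarting Cursor.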
Step 4: Add Context-Aware Commands
Create commands that reference file types or frameworks:
# Generate React Tests
/test-react:
description: "Generate React Testing Library tests"
prompt: |
Generate comprehensive tests for this React component using:
- React Testing Library (not Enzyme)
- Vitest (our test runner)
- user-event for interactions
Include tests for:
1. Component renders without crashing
2. Props are handled correctly
3. User interactions (clicks, typing)
4. Conditional rendering logic
5. Accessibility (ARIA labels)
Follow our pattern:
```typescript
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
describe('ComponentName', () => {
it('should...', async () => {
// Arrange
// Act
// Assert
});
});
```
# Add TypeScript Types
/add-types:
description: "Convert JavaScript to TypeScript with proper types"
prompt: |
Convert this JavaScript code to TypeScript:
1. Add explicit types (no `any`)
2. Use interfaces for objects with 3+ properties
3. Use `type` for unions/primitives
4. Add JSDoc for complex functions
5. Use strict mode compatible types
Don't change logic - only add types.
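As a concrete illustration of what /add-types asks for, here is a hypothetical before/after (the function and its object shape are invented for this example):

```typescript
// Before (JavaScript) - implicit `any` everywhere:
// function getTotal(items) {
//   return items.reduce((sum, item) => sum + item.price * item.qty, 0);
// }

// After - interface for the 3-property object shape, explicit types, no `any`:
interface LineItem {
  name: string;
  price: number;
  qty: number;
}

function getTotal(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}
```

The logic is untouched; only type information was added, which is exactly the constraint the prompt encodes.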
# Documentation Generator
/docs:
description: "Generate JSDoc/TSDoc documentation"
prompt: |
Add comprehensive documentation:
**For functions:**
```typescript
/**
* Brief description of what it does
*
* @param paramName - What this parameter is
* @returns What gets returned
* @throws {ErrorType} When this error occurs
* @example
* ```typescript
* functionName(example);
* // => expected output
* ```
*/
```
**Include:**
- Purpose and behavior
- All parameters with types
- Return value
- Side effects
- Example usage
Use present tense ("Fetches data" not "Fetch data")
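Applied to a small invented function, documentation in that style might come back like this:

```typescript
/**
 * Clamps a value to the inclusive range [min, max].
 *
 * @param value - The number to constrain
 * @param min - Lower bound of the range
 * @param max - Upper bound of the range
 * @returns The value, limited to the range
 * @throws {RangeError} When `min` is greater than `max`
 * @example
 * clamp(15, 0, 10); // => 10
 */
function clamp(value: number, min: number, max: number): number {
  if (min > max) throw new RangeError("min must not exceed max");
  return Math.min(Math.max(value, min), max);
}
```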
Step 5: Create Team-Specific Commands
Add your org's standards:
# Our API Response Format
/api-format:
description: "Format API responses per company standard"
prompt: |
Refactor this API handler to match our standard response:
```typescript
// Success response
return {
success: true,
data: {...},
meta: {
timestamp: new Date().toISOString(),
requestId: req.id
}
};
// Error response
return {
success: false,
error: {
code: "ERROR_CODE",
message: "User-friendly message",
details: {...} // Only in dev
},
meta: {
timestamp: new Date().toISOString(),
requestId: req.id
}
};
```
Ensure:
- HTTP status codes match (200, 400, 500)
- Errors are logged with context
- No sensitive data in error messages
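For comparison, the envelope above can be captured in a pair of helpers (a sketch only; the names ok/fail and the requestId parameter are assumptions for this example, not part of any standard library):

```typescript
interface Meta {
  timestamp: string;
  requestId: string;
}

// Build the standard success envelope.
function ok<T>(data: T, requestId: string) {
  return {
    success: true as const,
    data,
    meta: { timestamp: new Date().toISOString(), requestId } satisfies Meta extends never ? never : Meta,
  };
}

// Build the standard error envelope; `details` is attached only when
// provided (e.g. only in dev builds).
function fail(code: string, message: string, requestId: string, details?: unknown) {
  return {
    success: false as const,
    error: { code, message, ...(details !== undefined ? { details } : {}) },
    meta: { timestamp: new Date().toISOString(), requestId },
  };
}
```

Centralizing the envelope in helpers like these is what makes the /api-format prompt cheap to enforce in review.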
# Database Query Optimizer
/optimize-query:
description: "Optimize SQL/Prisma queries for performance"
prompt: |
Analyze this database query for:
1. **N+1 problems:**
- Add `include` for relations
- Use `select` to limit fields
2. **Missing indexes:**
- Suggest indexes for WHERE clauses
- Composite indexes for multi-column filters
3. **Performance:**
- Pagination for large datasets
- Avoid SELECT *
- Use proper JOINs
Show before/after with estimated performance gain.
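To make the N+1 point concrete, here is a before/after in Prisma-style query arguments (plain objects here, so no database is involved; the post/author model is invented):

```typescript
// Before: fetch posts, then load each post's author separately - N+1 queries.
const before = {
  where: { published: true },
};

// After: one query that includes the relation, limits fields, and paginates.
const after = {
  where: { published: true },
  include: { author: { select: { id: true, name: true } } }, // avoids N+1
  take: 20, // paginate large result sets
  skip: 0,
};
```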
Advanced: Variables and File Context
Cursor can inject context into your prompts:
# Using file context
/refactor-with-tests:
description: "Refactor code and update tests"
prompt: |
You have access to:
- The selected code (to refactor)
- Test file (if open in editor)
1. Refactor the selected code to improve readability
2. Update corresponding tests to match
3. Explain what changed and why
Maintain 100% test coverage.
# Reference multiple files
/full-feature:
description: "Implement feature across component + API + test"
prompt: |
Implement this feature by modifying:
1. **Component file** (currently selected)
2. **API route** (mention filename in response)
3. **Test file** (create if missing)
Follow:
- Component: React hooks, TypeScript
- API: tRPC procedures, Zod validation
- Tests: Vitest + Testing Library
Show all three files in separate code blocks.
Verification
Test each command with real code:
```shell
# 1. Open a component file
# 2. Select a function
# 3. Run: /review

# Expected output format:
# ✅ Security: No issues found
# ⚠️ Performance: Line 23 - useMemo needed for expensive calc
# ✅ Code quality: Follows style guide
```
You should see: Structured feedback matching your prompt template.
What You Learned
- .cursorrules stores custom slash-commands
- Commands are YAML with description and prompt
- Prompts can reference your tech stack, style guides, and patterns
- Commands work on selected code or whole files
Limitations:
- Commands don't persist across projects (you need a .cursorrules per repo)
- Can't access external APIs or run shell commands
- Very large files can exceed Cursor's context window
Bonus: Starter Template Library
Copy this into .cursorrules for instant productivity:
# === CODE QUALITY ===
/review:
description: "Comprehensive code review"
prompt: |
Review for security, performance, and best practices.
Use ✅/⚠️ format with line numbers.
/types:
description: "Add TypeScript types (no any)"
prompt: |
Convert to TypeScript with strict types.
Use interfaces for objects, types for unions.
/docs:
description: "Generate TSDoc/JSDoc"
prompt: |
Add documentation with @param, @returns, @example.
Include edge cases and error conditions.
# === TESTING ===
/test:
description: "Generate unit tests"
prompt: |
Create tests using our stack (specify: Vitest/Jest/etc).
Cover happy path + edge cases + errors.
/test-e2e:
description: "Generate E2E test (Playwright)"
prompt: |
Write Playwright test for this user flow.
Include: setup, actions, assertions, cleanup.
# === REFACTORING ===
/clean:
description: "Refactor for readability"
prompt: |
Improve code clarity without changing behavior.
Extract magic numbers, improve naming, reduce nesting.
/perf:
description: "Optimize performance"
prompt: |
Identify bottlenecks and optimize.
Focus on: algorithms, caching, lazy loading.
# === GENERATION ===
/component:
description: "Generate React component"
prompt: |
Create component with:
- TypeScript props interface
- Proper hooks usage
- Accessibility attributes
- Example usage in JSDoc
/api:
description: "Generate API endpoint"
prompt: |
Create API route with:
- Input validation (Zod)
- Error handling
- Response format matching our standard
- Tests
# === DEBUGGING ===
/debug:
description: "Add debug logging"
prompt: |
Add strategic console.log statements to debug.
Include: input values, intermediate states, return values.
Use meaningful labels.
/fix:
description: "Fix bug with explanation"
prompt: |
Identify the bug, explain root cause, provide fix.
Show before/after code.
Pro Tips
- Start specific: "Review React hooks" beats generic "review code"
- Include examples: Show your preferred output format in the prompt
- Reference your stack: Mention framework versions (React 19, Next.js 15)
- Iterate prompts: Refine based on Claude's actual outputs
- Share with team: Commit .cursorrules to your repo
Real-world timing:
- Writing first command: 5 minutes
- Testing and refining: 5 minutes
- Building command library: 30 minutes (one-time)
- Time saved: 2-3 hours per week on repeated prompts
Tested on Cursor 0.43+, Claude Sonnet 4.5, macOS & Windows