# Problem: Cursor AI Crawls Your Monorepo Forever
You opened a large codebase in Cursor AI and the indexing spinner has been running for 20+ minutes, consuming CPU and blocking AI features.
You'll learn:
- Why Cursor indexes unnecessary files
- How to configure .cursorignore properly
- Performance tweaks that cut indexing time by 80%
Time: 12 min | Level: Beginner
## Why This Happens
Cursor AI indexes your entire repository by default to understand code context. In large repos with node_modules, build artifacts, or vendored dependencies, this means scanning millions of files that aren't relevant to your code.
Common symptoms:
- Indexing takes 15+ minutes on project open
- High CPU usage even when idle
- "Indexing..." badge never completes
- AI features timeout or give generic responses
- Works fine on small projects but breaks on monorepos
## Solution

### Step 1: Create `.cursorignore`
Navigate to your project root and create the ignore file:
```bash
cd /path/to/your/project
touch .cursorignore
```
Expected: File appears in project root (same level as .git/)
If it fails:
- Permission denied: Run with `sudo` or check folder permissions
- Already exists: Open it with `code .cursorignore` instead
### Step 2: Add Common Exclusions
Open `.cursorignore` and add these patterns:
```
# Dependencies - these are never edited directly
node_modules/
.pnpm-store/
.yarn/
vendor/
venv/
__pycache__/

# Build outputs - generated code doesn't need indexing
dist/
build/
out/
target/
.next/
.nuxt/
.output/

# Large binary/media files
*.mp4
*.mov
*.jpg
*.png
*.gif
*.webp
*.pdf
*.zip
*.tar.gz

# Databases and logs
*.sqlite
*.db
*.log
logs/

# IDE and system files
.idea/
.vscode/
.DS_Store
thumbs.db

# Test coverage and reports
coverage/
.nyc_output/
junit.xml

# Lock files (they reference dependencies, not source)
package-lock.json
yarn.lock
pnpm-lock.yaml
Cargo.lock
```
Why this works: Cursor scans only source code, skipping generated files and dependencies. This reduces indexing from 500k+ files to ~5k actual code files in typical projects.
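If you want a rough preview of the effect before restarting Cursor, a short script can count which files a pattern list would skip. This is a sketch with deliberately simplified gitignore-style matching (directory names and basename globs only); Cursor's actual matcher may behave differently.

```python
import os
from fnmatch import fnmatch

# Illustrative subset of the patterns above
PATTERNS = ["node_modules/", "dist/", "build/", "coverage/", "*.log"]

def is_ignored(rel_path: str, patterns=PATTERNS) -> bool:
    """Simplified matching: 'dir/' patterns match any path component,
    other patterns match the file's basename as a glob."""
    parts = rel_path.split("/")
    for pat in patterns:
        if pat.endswith("/"):
            if pat.rstrip("/") in parts:
                return True
        elif fnmatch(parts[-1], pat):
            return True
    return False

def count_files(root: str = ".") -> tuple[int, int]:
    """Return (indexed, skipped) file counts under root."""
    indexed = skipped = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            if is_ignored(rel.replace(os.sep, "/")):
                skipped += 1
            else:
                indexed += 1
    return indexed, skipped
```

Run `count_files(".")` in your repo root: if the first number is far above ~10k, your pattern list likely needs more entries.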
### Step 3: Exclude Language-Specific Artifacts
Add patterns for your stack:
For Python projects:
```
*.pyc
.pytest_cache/
.mypy_cache/
.ruff_cache/
.eggs/
*.egg-info/
```
For Rust projects:
```
target/
Cargo.lock
*.rlib
*.rmeta
```
For monorepos (Nx/Turborepo):
```
.turbo/
.nx/cache/
apps/*/dist/
packages/*/dist/
```
For Next.js/React:
```
.next/
.vercel/
.swc/
storybook-static/
```
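Rather than hand-picking a list per stack, you can sketch a generator that looks for marker files in the project root and emits the matching patterns from the lists above. The marker-to-pattern mapping here is an assumption for illustration, not a Cursor feature.

```python
from pathlib import Path

# Hypothetical marker-file -> ignore-pattern mapping; extend for your stack
SUGGESTIONS = {
    "package.json": ["node_modules/", "dist/", "build/", ".next/"],
    "Cargo.toml": ["target/", "*.rlib", "*.rmeta"],
    "pyproject.toml": ["__pycache__/", ".pytest_cache/", ".mypy_cache/", "*.egg-info/"],
    "nx.json": [".nx/cache/", "apps/*/dist/", "packages/*/dist/"],
    "turbo.json": [".turbo/"],
}

def suggest_patterns(root: str) -> list[str]:
    """Return deduplicated patterns for every marker file present in root."""
    root_path = Path(root)
    patterns: list[str] = []
    for marker, pats in SUGGESTIONS.items():
        if (root_path / marker).exists():
            patterns.extend(p for p in pats if p not in patterns)
    return patterns
```

Print the result with `"\n".join(suggest_patterns("."))` and paste what you agree with into `.cursorignore`.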
### Step 4: Restart Cursor's Indexer
Force Cursor to re-index with the new rules:
```bash
# Method 1: Command palette (preferred)
# Cmd+Shift+P (Mac) / Ctrl+Shift+P (Windows)
# Type: "Cursor: Restart Indexing"

# Method 2: Restart Cursor completely
# Close all windows, then reopen project
```
Expected: You'll see "Indexing..." badge reappear briefly, but complete in 1-3 minutes instead of 15+.
If it fails:
- Still slow: Check that `.cursorignore` is in the project root, not a subfolder
- Syntax errors: Use `#` for comments, one pattern per line
- No change: Delete the `.cursor/cache` folder and restart
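A quick sanity check for the syntax issues above: the sketch below flags lines that probably hold more than one pattern. It only encodes the two rules just mentioned (`#` starts a comment, one pattern per line) and is not Cursor's real parser.

```python
def lint_ignore(lines: list[str]) -> list[tuple[int, str]]:
    """Return (line_number, message) pairs for suspicious ignore-file lines."""
    problems = []
    for number, raw in enumerate(lines, start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are fine
        if " " in line or "\t" in line:
            problems.append((number, "whitespace inside pattern: one pattern per line"))
    return problems

# Example: lint_ignore(Path(".cursorignore").read_text().splitlines())
```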
### Step 5: Monitor Indexing Progress
Check what Cursor is actually indexing:
```bash
# Open Cursor's output panel
# View > Output > Select "Cursor" from dropdown

# You should see logs like:
# "Indexed 4,832 files in 2.3s"
# "Skipped 487,291 files (ignored)"
```
What to look for:
- Total files indexed should be <10k for most projects
- If >50k files, add more patterns to .cursorignore
- Check "Skipped" count matches your expectations
## Verification
Test the fix:
- Close and reopen your project
- Time how long indexing takes (should be <3 min)
- Use AI features immediately after indexing completes
```bash
# Quick test: ask Cursor AI about your code
# @codebase What does the authentication module do?
```
You should see: Fast responses with accurate code references, no timeout errors.
## Advanced Optimization

### Option 1: Configure Indexing Limits
Create `.cursor/settings.json` in the project root:
```json
{
  "cursor.maxFilesToIndex": 10000,
  "cursor.indexTimeout": 120000,
  "cursor.excludePatterns": [
    "**/*.test.ts",
    "**/fixtures/**",
    "**/mocks/**"
  ]
}
```
Why: Prevents runaway indexing on massive repos. Sets hard limit of 10k files and 2-minute timeout.
### Option 2: Use Workspace-Specific Ignores
For multi-root workspaces, create per-folder rules:
```
# In workspace root
project/
├── .cursorignore        # Global rules
├── backend/
│   └── .cursorignore    # Backend-specific
└── frontend/
    └── .cursorignore    # Frontend-specific
```
Each subfolder's `.cursorignore` adds to the parent's rules; it does not override them.
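You can preview this additive behaviour with a short script that walks from the workspace root down to a folder and collects every `.cursorignore` along the way. The additive semantics are taken from the description above; treat this as a sketch, not Cursor's implementation.

```python
from pathlib import Path

def effective_patterns(root: Path, subdir: Path) -> list[str]:
    """Collect ignore patterns from root down to subdir, parent rules first."""
    chain = []
    current = subdir.resolve()
    root = root.resolve()
    while True:
        chain.append(current)
        if current == root:
            break
        if current == current.parent:  # hit filesystem root: subdir not under root
            raise ValueError(f"{subdir} is not inside {root}")
        current = current.parent
    patterns: list[str] = []
    for folder in reversed(chain):  # root first, so child rules extend parents
        ignore_file = folder / ".cursorignore"
        if ignore_file.exists():
            patterns.extend(
                line.strip()
                for line in ignore_file.read_text().splitlines()
                if line.strip() and not line.lstrip().startswith("#")
            )
    return patterns
```

For the tree above, `effective_patterns(Path("project"), Path("project/backend"))` lists the global rules followed by the backend-specific ones.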
### Option 3: Derive Patterns from `.gitignore` (Don't Symlink)
If you already maintain a comprehensive .gitignore:
```bash
# Don't do this - it excludes too much
# ln -s .gitignore .cursorignore

# Instead, cherry-pick useful patterns
grep -E "(node_modules|dist|build|\.log)" .gitignore > .cursorignore
```
Warning: .gitignore often excludes important config files (.env.example) that Cursor should index. Don't blindly copy it.
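A slightly safer variant of the grep above: filter `.gitignore` line by line and drop exclusions you still want indexed. The `KEEP` allowlist is an illustrative assumption; adjust it to your repo.

```python
# Entries excluded by .gitignore that Cursor should still index (assumed list)
KEEP = {".env.example", ".env.sample"}

def filter_gitignore(lines: list[str]) -> list[str]:
    """Return .gitignore patterns worth copying into .cursorignore."""
    kept = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.lstrip("/") in KEEP:
            continue  # don't carry this exclusion over to Cursor
        kept.append(line)
    return kept
```

Review the output before writing it to `.cursorignore`; a pattern that is harmless in Git may hide files the AI needs.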
## What You Learned
- Cursor indexes everything by default, including dependencies and build artifacts
- .cursorignore works like .gitignore to exclude paths
- Reducing indexed files from 500k to 5k cuts indexing from 20min to 2min
- Re-indexing is required after changing .cursorignore
Limitation: Some AI features need full codebase context. If Cursor's answers become too generic after adding exclusions, you've excluded too much.
When NOT to use this:
- Small projects (<1k files) don't benefit much
- If you need AI to understand third-party library internals, don't exclude vendor/
## Quick Reference
Most impactful exclusions (copy-paste ready):
```
node_modules/
dist/
build/
.next/
target/
vendor/
*.log
coverage/
.DS_Store
```
Add these to .cursorignore for immediate 60-80% speedup in typical projects.
Common Patterns by Framework:
| Framework | Must Exclude |
|---|---|
| Next.js | .next/, .swc/, out/ |
| React Native | ios/build/, android/build/ |
| Rust | target/, Cargo.lock |
| Python | __pycache__/, venv/, .pytest_cache/ |
| Go | vendor/, bin/ |
| Nx Monorepo | .nx/cache/, dist/, .turbo/ |
*Tested on Cursor 0.42.3, macOS Sonoma 14.6, monorepo with 450k files.*