Map Your Database Schema to Cursor AI in 12 Minutes

Feed your database structure to Cursor AI for accurate code generation with schema context, migrations, and query suggestions.

Problem: Cursor Generates Code That Doesn't Match Your Database

You ask Cursor to write a query or migration, but it hallucinates column names, uses the wrong types, or creates relationships that don't exist in your actual schema.

You'll learn:

  • How to extract your database schema automatically
  • Where to place schema files for Cursor's context
  • How to structure schema documentation for AI understanding
  • When to update your context files

Time: 12 min | Level: Intermediate

Why This Happens

Cursor AI only knows what you tell it. Without access to your actual database structure, it generates code based on common patterns and naming conventions—which often don't match your specific setup.

Common symptoms:

  • Generated queries reference non-existent columns
  • Wrong data types in migrations (VARCHAR instead of TEXT)
  • Missing foreign key constraints
  • Incorrect table relationships in ORM code

Solution

Step 1: Extract Your Database Schema

Choose the method for your database:

PostgreSQL:

# Export full schema with relationships
pg_dump -U your_user -d your_database --schema-only --no-owner --no-acl > schema.sql

# OR list tables as JSON (table names and owners only)
psql -U your_user -d your_database -c "\
  SELECT json_agg(row_to_json(t)) FROM (
    SELECT schemaname, tablename, tableowner 
    FROM pg_tables WHERE schemaname = 'public'
  ) t;" > tables.json

MySQL/MariaDB:

# Export schema structure
mysqldump -u your_user -p --no-data your_database > schema.sql

# OR get table descriptions
mysql -u your_user -p -e "
  SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE, COLUMN_KEY
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_SCHEMA = 'your_database'
" > schema_details.txt

SQLite:

# Dump schema
sqlite3 your_database.db .schema > schema.sql

# Get detailed table info
sqlite3 your_database.db "
  SELECT sql FROM sqlite_master 
  WHERE type='table'
" > schema_details.sql

Expected: A file containing CREATE TABLE statements and constraints.
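Before moving on, it's worth confirming the dump actually contains table definitions. A minimal Node sketch (`countTables` is a hypothetical helper for this guide, not part of any tool):

```javascript
// Sanity-check an exported dump: count CREATE TABLE statements.
const countTables = (sql) => (sql.match(/CREATE TABLE/gi) || []).length;

// Inline sample; in practice read schema.sql with fs.readFileSync.
const sample = `
CREATE TABLE users (id uuid PRIMARY KEY);
CREATE TABLE posts (id serial PRIMARY KEY);
`;
console.log(countTables(sample)); // 2
```

If the count is 0, the export failed silently (wrong database name or permissions) and nothing downstream will work.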

Step 2: Create a Cursor-Friendly Schema Document

Raw SQL dumps are hard for AI to parse quickly. Create a markdown summary:

# Create in your project root
touch .cursorrules-db-schema.md

Structure it like this:

# Database Schema Reference

**Last Updated:** 2026-02-13
**Database:** PostgreSQL 16.1
**ORM:** Prisma / TypeORM / Django ORM

Tables Overview

users

Primary key: id (uuid)

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | uuid | PRIMARY KEY, DEFAULT gen_random_uuid() | User identifier |
| email | varchar(255) | UNIQUE, NOT NULL | Login email |
| created_at | timestamptz | NOT NULL, DEFAULT NOW() | Registration time |
| role | enum('admin','user','guest') | NOT NULL, DEFAULT 'user' | Access level |

Relationships:

  • users.id → posts.author_id (one-to-many)
  • users.id → sessions.user_id (one-to-many)

Indexes:

  • idx_users_email on email (for login lookups)
  • idx_users_created_at on created_at (for analytics)

posts

Primary key: id (serial)

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | serial | PRIMARY KEY | Auto-increment ID |
| author_id | uuid | FOREIGN KEY → users.id, NOT NULL | Post creator |
| title | text | NOT NULL | Post title |
| content | text | NOT NULL | Post body |
| status | varchar(20) | CHECK (status IN ('draft','published','archived')) | Publication state |
| published_at | timestamptz | NULL | When published |

Relationships:

  • posts.author_id → users.id (many-to-one)
  • posts.id → comments.post_id (one-to-many)

Indexes:

  • idx_posts_author_id on author_id
  • idx_posts_status_published on (status, published_at) (for filtered, sorted feed queries)

Common Query Patterns

-- Get each user's published post count (filter in ON keeps the LEFT JOIN)
SELECT u.email, COUNT(p.id) AS post_count
FROM users u
LEFT JOIN posts p ON u.id = p.author_id AND p.status = 'published'
GROUP BY u.id;

-- Find recent posts
SELECT * FROM posts
WHERE status = 'published' AND published_at IS NOT NULL
ORDER BY published_at DESC
LIMIT 10;

Migration Notes

  • Always use transactions for schema changes
  • author_id column added in migration 0004_add_author_to_posts
  • Enum types defined: user_role, post_status

Why this works: Tables, types, and relationships are explicit. Cursor can see exact column names and constraints when generating code.

Step 3: Place Schema Files in Cursor's Context

Cursor looks in specific locations for context:

Option A: Project Root (Always Included)

# These files are automatically indexed
.cursorrules           # General coding rules
.cursorrules-db-schema.md  # Your schema doc
schema.sql             # Raw schema backup

Option B: Dedicated Docs Folder

mkdir -p docs/database
mv schema.sql docs/database/
mv .cursorrules-db-schema.md docs/database/schema-reference.md

Then create .cursorignore to prevent indexing large files:

# .cursorignore
node_modules/
*.log
# Don't index every migration
migrations/*.sql

Option C: Reference in .cursorrules

# .cursorrules

## Database Schema

For all database-related code, refer to:
- Full schema: `docs/database/schema.sql`
- Quick reference: `docs/database/schema-reference.md`

When writing queries:
- Use parameterized queries (no string interpolation)
- Check column names against schema before generating
- Include proper indexes in WHERE clauses
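The "parameterized queries" rule above can be made concrete. A sketch of the `{ text, values }` query shape accepted by the node-postgres client; the function name and column list are illustrative, not from the source:

```javascript
// Build a parameterized query object: the SQL text and the values
// travel separately, so user input is never spliced into the SQL.
function publishedPostsByAuthor(authorId) {
  return {
    text: 'SELECT title, published_at FROM posts WHERE author_id = $1 AND status = $2',
    values: [authorId, 'published'],
  };
}

console.log(publishedPostsByAuthor('some-uuid').values); // [ 'some-uuid', 'published' ]
```

With the schema doc in context, Cursor should fill in real column names (`author_id`, not `user_id`) when generating functions like this.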

Step 4: Test Cursor's Schema Understanding

Open Cursor Composer (Cmd+I or Ctrl+I; Cmd+K is inline edit) and test:

Test 1: Simple Query

Prompt: "Write a query to get all published posts with author emails"

Expected: Cursor should generate:

SELECT p.title, p.content, u.email
FROM posts p
JOIN users u ON p.author_id = u.id
WHERE p.status = 'published'
  AND p.published_at IS NOT NULL;

Check it uses:

  • ✅ Correct column names (author_id not user_id)
  • ✅ Right enum value ('published' not 'public')
  • ✅ Proper join syntax

Test 2: Migration

Prompt: "Create a migration to add a 'slug' column to posts table"

Expected: Should generate migration matching your ORM:

-- For raw SQL
ALTER TABLE posts
ADD COLUMN slug VARCHAR(255) UNIQUE;

CREATE INDEX idx_posts_slug ON posts(slug);

If it fails:

  • Wrong column type: Update schema doc with common type mappings
  • Missing indexes: Add index strategy notes to schema reference
  • Bad constraints: Explicitly list constraint patterns in .cursorrules
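For the "wrong column type" failure, a small type-mapping note pasted into the schema doc gives Cursor explicit targets. The mappings below are examples of one project's conventions, not universal rules:

```javascript
// Example type-mapping notes: logical type -> this project's PG type.
// Adjust to your own conventions before adding them to the schema doc.
const pgTypeFor = {
  string: 'text',          // prefer TEXT over VARCHAR here
  id: 'uuid',
  datetime: 'timestamptz', // always timezone-aware
  money: 'numeric(12,2)',  // never float for currency
};

console.log(pgTypeFor.datetime); // timestamptz
```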

Step 5: Keep Schema Context Updated

After migrations:

# Re-export schema
pg_dump -U user -d db --schema-only > schema.sql

# Update markdown reference
# Add a note about the change with migration number
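A lightweight guard can also flag a stale doc by checking the "Last Updated" stamp from the doc header in Step 2. A sketch; the 30-day threshold is arbitrary:

```javascript
// Warn when the schema doc's "Last Updated" stamp is older than 30 days.
// Tune the threshold to your migration cadence.
function isStale(lastUpdated, now = new Date()) {
  const THIRTY_DAYS = 30 * 24 * 60 * 60 * 1000;
  return now - new Date(lastUpdated) > THIRTY_DAYS;
}

console.log(isStale('2026-02-13', new Date('2026-02-20'))); // false
console.log(isStale('2025-08-01', new Date('2026-02-20'))); // true
```

Wire this into a pre-commit hook or CI step so an outdated reference fails loudly instead of silently misleading Cursor.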

Automation option:

# Add to package.json scripts
{
  "scripts": {
    "db:schema": "pg_dump -U user -d db --schema-only > schema.sql && echo 'Schema updated'",
    "db:docs": "node scripts/generate-schema-docs.js"
  }
}

Create scripts/generate-schema-docs.js:

// scripts/generate-schema-docs.js
// Reads schema.sql, pulls CREATE TABLE blocks (assumes unquoted table
// names), and writes a markdown column reference. Run after migrations.
const fs = require('fs');
const schema = fs.readFileSync('schema.sql', 'utf8');
const tables = [...schema.matchAll(/CREATE TABLE ([\w.]+)\s*\(([\s\S]*?)\);/g)];

const md = tables.map(([, name, body]) => {
  const rows = body.split('\n')
    .map((l) => l.trim().replace(/,$/, ''))
    .filter((l) => l && !/^(PRIMARY|FOREIGN|CONSTRAINT|UNIQUE|CHECK)\b/i.test(l))
    .map((l) => `| ${l.split(/\s+/)[0]} | ${l.split(/\s+/).slice(1).join(' ')} |`)
    .join('\n');
  return `## ${name}\n\n| Column | Type / Constraints |\n| --- | --- |\n${rows}`;
}).join('\n\n');

fs.writeFileSync('docs/database/schema-reference.md', md + '\n');
console.log('Schema docs updated');

Verification

Test Cursor's schema awareness:

  1. Ask for a complex query:

    "Write a query to find users who have more than 5 published posts in the last month"
    
  2. Request a migration:

    "Add a tags array column to posts with GIN index"
    
  3. Generate ORM models:

    "Create a Prisma schema for the posts table"
    

You should see: Cursor uses exact column names, correct types, and proper relationships without you having to correct it.
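Verification can be partly automated: scan generated SQL for snake_case identifiers missing from your documented column list. `unknownColumns` is a hypothetical helper and the word-level check is crude (it ignores aliases and quoted identifiers), but it catches the most common hallucination:

```javascript
// Flag snake_case identifiers in generated SQL that are missing from
// the documented column list. A real linter would parse the SQL.
function unknownColumns(sql, knownColumns) {
  const words = sql.match(/\b[a-z_]+\b/g) || [];
  return [...new Set(words)].filter(
    (w) => w.includes('_') && !knownColumns.includes(w)
  );
}

const known = ['author_id', 'published_at', 'created_at', 'post_id'];
console.log(unknownColumns('SELECT * FROM posts WHERE writer_id = $1', known));
// [ 'writer_id' ]
```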


What You Learned

  • Database schema needs explicit documentation for AI accuracy
  • Markdown tables are easier for AI to parse than raw SQL
  • .cursorrules can reference schema documentation location
  • Schema context needs updating after migrations

Limitations:

  • Very large schemas (200+ tables) may exceed context window—split into modules
  • Complex stored procedures need separate documentation
  • Dynamic schemas (runtime table creation) require different approaches

Bonus: Advanced Schema Context Strategies

For Large Schemas (100+ Tables)

Split by domain:

docs/database/
  ├── schema-auth.md        # users, sessions, tokens
  ├── schema-content.md     # posts, comments, media
  ├── schema-analytics.md   # events, metrics, reports
  └── schema-reference.md   # Overview + links

In .cursorrules:

## Database Modules

- Authentication: `docs/database/schema-auth.md`
- Content: `docs/database/schema-content.md`
- Analytics: `docs/database/schema-analytics.md`

When working with posts, include context from schema-content.md.
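The domain split can be scripted with a hand-maintained table-to-domain map that routes each table's docs to the right file. `domainOf` and the map contents are illustrative:

```javascript
// Route a table to its domain doc using a hand-maintained map.
// Tables not listed anywhere fall back to a catch-all 'misc' domain.
const domainOf = (table, map) =>
  Object.entries(map).find(([, tables]) => tables.includes(table))?.[0] ?? 'misc';

const domains = {
  auth: ['users', 'sessions', 'tokens'],
  content: ['posts', 'comments', 'media'],
  analytics: ['events', 'metrics', 'reports'],
};

console.log(domainOf('posts', domains));    // content
console.log(domainOf('invoices', domains)); // misc
```

A periodic check of the 'misc' bucket tells you which new tables still need a home in the docs.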

For Multi-Database Projects

docs/database/
  ├── postgres-schema.md    # Main application DB
  ├── redis-schema.md       # Cache keys structure
  ├── mongo-schema.md       # Document schemas
  └── data-flow.md          # How databases interact

Include Sample Data

Help Cursor understand data shape:

## Sample Data

### users table
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "email": "user@example.com",
  "role": "admin",
  "created_at": "2026-01-15T10:30:00Z"
}
```

### posts table

```json
{
  "id": 1,
  "author_id": "550e8400-e29b-41d4-a716-446655440000",
  "title": "Hello World",
  "status": "published",
  "published_at": "2026-02-01T14:20:00Z"
}
```

Common Pitfalls

❌ Don't Do This

1. Only include the raw SQL dump

❌ Just dropping schema.sql in the project

AI has to parse SQL syntax every time—slow and error-prone.

2. Outdated schema docs

❌ Schema from 6 months ago with missing columns

Cursor generates code for columns that no longer exist.

3. No relationship documentation

❌ Tables listed without foreign key info

AI can't generate proper joins or ORM relations.

4. Inconsistent naming in docs vs database

❌ Docs say "userId" but DB has "user_id"

Cursor uses wrong column names.

✅ Do This Instead

1. Structured markdown reference

✅ Markdown tables with types, constraints, relationships

2. Update schema docs in CI/CD

✅ Run schema export after every migration

3. Document relationships explicitly

✅ Show foreign keys and cardinality (one-to-many, etc.)

4. Match exact database naming

✅ Copy column names directly from schema dump

Tested with Cursor 0.42.x, PostgreSQL 16.1, MySQL 8.0, and SQLite 3.45. Schema extraction scripts work on macOS, Linux, and WSL2.