Write Unit Tests 5x Faster with Vitest and AI

Use Vitest's speed and AI tools to generate, run, and debug unit tests in minutes instead of hours. Real workflow included.

Problem: Writing Tests Takes Forever

You spend 30 minutes writing tests for code that took 10 minutes to build. Test setup is tedious, mocking is a pain, and Jest feels slow.

You'll learn:

  • How Vitest runs 10x faster than Jest
  • AI prompts that generate quality tests
  • A workflow to test entire modules in minutes

Time: 12 min | Level: Intermediate


Why This Happens

Traditional testing is slow because:

  1. Jest spawns fresh worker processes and re-transforms modules on every run (even in watch mode)
  2. Manual test writing means copying boilerplate for every function
  3. Mock setup requires understanding implementation details

Vitest uses Vite's dev server (stays hot) and AI can analyze your code structure instantly.

Common symptoms:

  • 5+ second test startup time
  • Copy-pasting test structure between files
  • Spending more time on mocks than actual assertions

Solution

Step 1: Install Vitest

npm install -D vitest @vitest/ui
# Add to package.json scripts
npm pkg set scripts.test="vitest"
npm pkg set scripts.test:ui="vitest --ui"

Expected: vitest command available

Why Vitest: Starts in ~100ms vs Jest's 3-5 seconds, and handles ES modules natively through Vite's esbuild transforms (no Babel configuration).


Step 2: Set Up AI Test Generation

Use Claude, Cursor, or GitHub Copilot with this prompt template:

// Paste your function code, then use this prompt:
/*
Generate Vitest tests for the above code. Include:
- Happy path test
- Edge cases (null, undefined, empty)
- Error conditions
- Mock any external dependencies
Use describe/it syntax, expect assertions, and vi.mock()
*/
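As a concrete example, suppose the pasted function looks like this. This is a hypothetical sketch (the article never shows userService.ts), with the `./database` dependency inlined as a `database` object so the snippet is self-contained:

```typescript
// Hypothetical module under test. In the real project, `database` would live
// in './database' and be imported; it is inlined here for self-containment.
type User = { id: number; name: string };

export const database = {
  // Real implementation would query an actual database.
  async query(_sql: string, _params: unknown[]): Promise<User[]> {
    return [];
  },
};

export async function getUserById(id: number): Promise<User> {
  const rows = await database.query('SELECT * FROM users WHERE id = ?', [id]);
  if (rows.length === 0) {
    throw new Error('User not found');
  }
  return rows[0];
}
```

The important parts for the AI are the external dependency (`database.query`) and the error message ('User not found') — both show up in the generated tests.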

Example output from AI:

// userService.test.ts - AI generated this in 10 seconds
import { describe, it, expect, vi } from 'vitest';
import { getUserById } from './userService';
import { database } from './database';

// AI knows to mock the database
vi.mock('./database');

describe('getUserById', () => {
  it('returns user when found', async () => {
    // AI creates realistic test data
    const mockUser = { id: 1, name: 'Alice' };
    vi.mocked(database.query).mockResolvedValue([mockUser]);
    
    const result = await getUserById(1);
    expect(result).toEqual(mockUser);
  });
  
  it('throws when user not found', async () => {
    vi.mocked(database.query).mockResolvedValue([]);
    
    await expect(getUserById(999))
      .rejects.toThrow('User not found');
  });
  
  it('handles database errors', async () => {
    vi.mocked(database.query).mockRejectedValue(new Error('Connection failed'));
    
    await expect(getUserById(1))
      .rejects.toThrow('Connection failed');
  });
});

If AI misses edge cases:

  • Ask: "Add tests for: [specific scenario]"
  • Example: "Add test for when user ID is negative"

Step 3: Run Tests with Instant Feedback

# Terminal 1: Watch mode (reruns on file save)
npm test

# Terminal 2: Visual UI (optional but great for debugging)
npm run test:ui

You should see:

✓ userService.test.ts (3 tests) 45ms
  ✓ getUserById > returns user when found
  ✓ getUserById > throws when user not found  
  ✓ getUserById > handles database errors

Test Files  1 passed (1)
     Tests  3 passed (3)
  Start at  10:23:15
  Duration  123ms

Why it's fast: Vitest only re-runs affected tests. Changed one function? Only those tests run.


Step 4: AI-Powered Debugging

When a test fails, feed the error to AI:

// Test fails with:
// Expected: { id: 1, name: 'Alice' }
// Received: { id: 1, name: 'Alice', createdAt: '2026-02-12' }

// Prompt to AI:
/*
This test is failing because the object has an extra field.
Fix the test to ignore createdAt or update the mock.
Code: [paste test code]
*/

AI fixes it:

it('returns user when found', async () => {
  const mockUser = { id: 1, name: 'Alice', createdAt: '2026-02-12' };
  vi.mocked(database.query).mockResolvedValue([mockUser]);

  const result = await getUserById(1);
  expect(result).toMatchObject({ id: 1, name: 'Alice' }); // Ignores extra fields like createdAt
});
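The reason `toMatchObject` is the right fix: it does a partial match, checking only the keys you list and ignoring everything else. A standalone sketch of that semantics (an illustrative helper, not part of Vitest, and a simplification of the real recursive matcher):

```typescript
// Illustrates the partial-match idea behind toMatchObject: every key in
// `subset` must match the corresponding key in `actual`; extra keys on
// `actual` (like createdAt) are simply ignored. JSON.stringify comparison
// is a simplification — the real matcher compares recursively.
function matchesSubset(
  actual: Record<string, unknown>,
  subset: Record<string, unknown>,
): boolean {
  return Object.entries(subset).every(
    ([key, value]) => JSON.stringify(actual[key]) === JSON.stringify(value),
  );
}

const received = { id: 1, name: 'Alice', createdAt: '2026-02-12' };
console.log(matchesSubset(received, { id: 1, name: 'Alice' })); // true — extra field ignored
console.log(matchesSubset(received, { id: 2, name: 'Alice' })); // false — listed field differs
```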

Step 5: Generate Integration Tests

For complex workflows, describe what you want:

/*
Generate an integration test for the checkout flow:
1. User adds items to cart
2. Applies discount code
3. Processes payment
4. Sends confirmation email

Mock the payment API and email service.
Use Vitest's `describe.sequential()` to run steps in order.
*/

AI generates:

import { describe, it, expect, vi, beforeEach } from 'vitest';
import { checkout } from './checkout';
import { paymentAPI } from './paymentAPI';
import { emailService } from './emailService';

vi.mock('./paymentAPI');
vi.mock('./emailService');

describe.sequential('Checkout Flow', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('completes full checkout', async () => {
    const cart = [{ id: 1, price: 100 }];
    vi.mocked(paymentAPI.charge).mockResolvedValue({ success: true, txId: 'tx_123' });
    vi.mocked(emailService.send).mockResolvedValue(true);
    
    const result = await checkout(cart, 'DISCOUNT10');
    
    expect(result.total).toBe(90); // 10% discount
    expect(paymentAPI.charge).toHaveBeenCalledWith(90);
    expect(emailService.send).toHaveBeenCalledWith(
      expect.objectContaining({ orderId: expect.any(String) })
    );
  });
});
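For context, a checkout implementation consistent with that test might look like the sketch below. This is hypothetical — the article never shows checkout.ts — and the dependencies are passed as arguments here to keep the snippet self-contained, whereas the tested version imports paymentAPI and emailService as modules:

```typescript
// Hypothetical checkout.ts: applies a 10% discount for 'DISCOUNT10',
// charges the payment API, then sends a confirmation email.
type Item = { id: number; price: number };
type PaymentAPI = { charge(amount: number): Promise<{ success: boolean; txId: string }> };
type EmailService = { send(msg: { orderId: string }): Promise<boolean> };

export async function checkout(
  cart: Item[],
  discountCode: string | undefined,
  paymentAPI: PaymentAPI,
  emailService: EmailService,
): Promise<{ total: number; orderId: string }> {
  const subtotal = cart.reduce((sum, item) => sum + item.price, 0);
  // 'DISCOUNT10' takes 10% off — matching the expected total of 90 in the test
  const total = discountCode === 'DISCOUNT10' ? subtotal * 0.9 : subtotal;
  const { txId } = await paymentAPI.charge(total);
  await emailService.send({ orderId: txId });
  return { total, orderId: txId };
}
```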

Verification

# Install the coverage provider, then run all tests once with coverage
npm install -D @vitest/coverage-v8
npm test -- run --coverage

# Check coverage report
open coverage/index.html

You should see:

  • 80%+ coverage if AI generated tests for all functions
  • <1 second for test suite re-runs in watch mode
  • Green checkmarks for all edge cases

If coverage is low:

  • Ask AI: "Generate tests for uncovered lines: [paste coverage report]"
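You can also make low coverage fail the run automatically. Vitest (1.0+) supports coverage thresholds in the config; the values below are illustrative, not from the article:

```typescript
// vitest.config.ts — fail the run if coverage drops below the (illustrative)
// thresholds. Requires @vitest/coverage-v8 to be installed.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 70,
      },
    },
  },
});
```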

What You Learned

  • Vitest runs 10x faster than Jest (native ESM, hot reload)
  • AI generates 80% of boilerplate (mocks, assertions, edge cases)
  • Watch mode + AI debugging = instant feedback loop

Limitations:

  • AI can't test business logic it doesn't understand (you review, it writes)
  • Complex async workflows may need manual tweaking
  • Mocks still require understanding your architecture

When NOT to use this:

  • E2E tests (use Playwright instead)
  • Testing AI-generated code without review (garbage in, garbage out)

Real-World Workflow

Before (Jest + Manual):

  1. Write function (10 min)
  2. Copy test boilerplate (2 min)
  3. Write 3-4 test cases (15 min)
  4. Wait for Jest to run (5 sec × 10 iterations = 50 sec)
  5. Fix failing tests (10 min)

Total: ~37 minutes per function

After (Vitest + AI):

  1. Write function (10 min)
  2. Paste to AI with prompt (30 sec)
  3. Review generated tests (2 min)
  4. Run Vitest (instant feedback)
  5. AI debugs failures (2 min)

Total: ~15 minutes per function

Savings: roughly 60% less time per function, with less tedium and more thorough coverage.


Configuration Tips

// vitest.config.ts - optimal setup
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true, // No need to import describe/it (add "vitest/globals" to tsconfig "types")
    environment: 'node', // Use 'jsdom' for React
    coverage: {
      provider: 'v8', // Faster than istanbul
      reporter: ['text', 'html'],
      exclude: ['**/*.test.ts', '**/types.ts']
    },
    pool: 'threads', // Parallel test execution
    poolOptions: {
      threads: {
        singleThread: false // Use all CPU cores
      }
    }
  }
});

AI Prompts Library

For new features:

Generate Vitest tests for [function name]. Include happy path, 
edge cases (null/undefined/empty), and error handling. 
Mock [external dependency].

For refactoring:

I refactored [old code] to [new code]. Update the tests to match 
the new implementation but keep the same assertions.

For debugging:

This test fails with: [error message]. Here's the code: [paste]. 
Fix the test or explain what's wrong with the implementation.

For coverage:

Coverage report shows lines [X-Y] in [file] are uncovered. 
Generate tests that hit those branches.

Tested on Vitest 1.2.1, Node.js 22.x, with Claude/Cursor AI assistance

Feedback? Found this helpful or spotted an issue? Open an issue or improve it via PR.