Problem: Your Algorithm is Slow and You Don't Know Why
Your API endpoint times out with 10k+ records, but works fine in development with 100 records. You suspect nested loops are the issue, but manually analyzing every function for Big O complexity takes hours.
You'll learn:
- How to use AI to analyze time complexity automatically
- Prompt patterns that catch hidden O(n²) operations
- When AI suggestions actually improve performance and when they just look clever
Time: 20 min | Level: Intermediate
Why This Happens
Complexity issues hide in plain sight because small datasets mask the problem. An O(n²) function with 100 items runs in milliseconds, but with 10,000 items it takes minutes.
Common symptoms:
- Code works locally but times out in production
- Performance degrades non-linearly as data grows
- Nested .filter(), .map(), or .forEach() chains
- Database queries inside loops
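The quadratic blow-up is visible just by counting comparisons: a pairwise scan over n items performs n·(n−1)/2 of them, so 100× more data means roughly 10,000× more work. A tiny sketch (`pairwiseComparisons` is a hypothetical helper for illustration):

```typescript
// Count the comparisons a nested pairwise scan performs.
function pairwiseComparisons(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) ops++;
  }
  return ops;
}

console.log(pairwiseComparisons(100));    // 4950
console.log(pairwiseComparisons(10_000)); // 49995000 — ~10,000x more work
```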
Solution
Step 1: Set Up Your AI Analysis Environment
Create a new file to test AI-driven refactoring without touching production code.
# Create workspace
mkdir algorithm-analysis
cd algorithm-analysis
npm init -y
npm install --save-dev @types/node tsx
Expected: Clean workspace for experimenting with AI suggestions before applying them.
Step 2: Identify the Slow Function
Use this prompt with Claude or ChatGPT:
I have a function that's slow with large datasets. Analyze its time complexity and identify bottlenecks:
[paste your code here]
Respond with:
1. Current Big O complexity (best, average, worst case)
2. Specific lines causing the bottleneck
3. One concrete refactoring suggestion with code
4. The new complexity after refactoring
Example input:
function findDuplicates(users: User[]): User[] {
  const duplicates: User[] = [];
  for (let i = 0; i < users.length; i++) {
    for (let j = i + 1; j < users.length; j++) {
      if (users[i].email === users[j].email) {
        duplicates.push(users[i]);
        break;
      }
    }
  }
  return duplicates;
}
Why this works: The prompt forces AI to give actionable output, not just theoretical explanations. Asking for "one concrete suggestion" prevents overwhelming responses.
If it fails:
- AI gives generic advice: Add "Show actual code, not pseudocode" to your prompt
- Suggests over-engineered solutions: Add "Optimize for readability first, then performance"
Step 3: Validate the AI's Analysis
AI can hallucinate complexity. Always verify:
// AI suggested this refactor: O(n²) → O(n)
function findDuplicatesOptimized(users: User[]): User[] {
  const seen = new Map<string, User>();
  const duplicates: User[] = [];
  // Single pass through array = O(n)
  for (const user of users) {
    if (seen.has(user.email)) {
      duplicates.push(user);
    } else {
      seen.set(user.email, user);
    }
  }
  return duplicates;
}
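A benchmark alone doesn't prove the refactor is equivalent. These two versions actually disagree on which copy of a duplicate they return: the nested loop keeps earlier occurrences, the Map version keeps later ones. A self-contained equivalence check, assuming the simple `User` shape from Step 2:

```typescript
// Assumed User shape for this sketch.
type User = { email: string; name: string };

// O(n²) original: pushes the EARLIER occurrence of each duplicated email.
function findDuplicates(users: User[]): User[] {
  const duplicates: User[] = [];
  for (let i = 0; i < users.length; i++) {
    for (let j = i + 1; j < users.length; j++) {
      if (users[i].email === users[j].email) {
        duplicates.push(users[i]);
        break;
      }
    }
  }
  return duplicates;
}

// O(n) refactor: pushes the LATER occurrence instead.
function findDuplicatesOptimized(users: User[]): User[] {
  const seen = new Map<string, User>();
  const duplicates: User[] = [];
  for (const user of users) {
    if (seen.has(user.email)) duplicates.push(user);
    else seen.set(user.email, user);
  }
  return duplicates;
}

const sample: User[] = [
  { email: 'a@test.com', name: 'A1' },
  { email: 'b@test.com', name: 'B' },
  { email: 'a@test.com', name: 'A2' },
];

const emails = (xs: User[]) => xs.map(u => u.email).sort();
console.log(emails(findDuplicates(sample)));          // [ 'a@test.com' ]
console.log(emails(findDuplicatesOptimized(sample))); // [ 'a@test.com' ]
// Same emails — but the original returns A1 while the refactor returns A2.
```

Decide whether "which copy survives" matters for your use case before trusting the speedup.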
Test with realistic data:
import { performance } from 'perf_hooks';
function benchmark(fn: (data: any[]) => unknown, data: any[], label: string) {
  const start = performance.now();
  fn(data);
  const end = performance.now();
  console.log(`${label}: ${(end - start).toFixed(2)}ms`);
}
// Generate test data
const users = Array.from({ length: 10000 }, (_, i) => ({
  email: i % 100 === 0 ? 'duplicate@test.com' : `user${i}@test.com`,
  name: `User ${i}`
}));
benchmark(findDuplicates, users, 'Original O(n²)');
benchmark(findDuplicatesOptimized, users, 'Optimized O(n)');
Expected output:
Original O(n²): 2847.32ms
Optimized O(n): 3.41ms
If it fails:
- Optimized version is slower: AI might've added unnecessary overhead (check Map operations)
- Results are identical: Dataset too small, increase to 50k+ items
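One-shot timings are also noisy: JIT warm-up and GC pauses can swing a single run several-fold. A median-of-runs variant of the `benchmark` helper (a sketch) gives steadier numbers:

```typescript
import { performance } from 'perf_hooks';

// Run the function several times and report the median timing, which is
// far less sensitive to warm-up and GC pauses than a single measurement.
function benchmarkMedian<T>(fn: (data: T[]) => unknown, data: T[], runs = 5): number {
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn(data);
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(runs / 2)];
}

const data = Array.from({ length: 1000 }, (_, i) => i);
console.log(`median: ${benchmarkMedian(xs => new Set(xs).size, data).toFixed(3)}ms`);
```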
Step 4: Catch Hidden Complexity in Chains
AI excels at spotting complexity in method chains. Use this prompt:
This code looks clean but performs poorly. Identify any hidden O(n²) operations:
const results = items
  .filter(item => item.active)
  .map(item => ({
    ...item,
    relatedItems: allItems.filter(x => x.parentId === item.id)
  }));
Explain why it's slow and show a single-pass alternative.
AI response example:
// ❌ Original: O(n²) - filter runs for every map iteration
const results = items
  .filter(item => item.active)
  .map(item => ({
    ...item,
    relatedItems: allItems.filter(x => x.parentId === item.id)
  }));
// ✅ Refactored: O(n) - build lookup table once
const itemsByParent = allItems.reduce((acc, item) => {
  if (!acc[item.parentId]) acc[item.parentId] = [];
  acc[item.parentId].push(item);
  return acc;
}, {} as Record<string, typeof allItems>);

const results = items
  .filter(item => item.active)
  .map(item => ({
    ...item,
    relatedItems: itemsByParent[item.id] || []
  }));
Why this works: Pre-building a lookup table converts repeated O(n) searches into O(1) lookups.
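One subtlety in the Record-based table: a plain object inherits keys like "constructor" and "toString" from Object.prototype, so `!acc[item.parentId]` misfires if a parentId happens to collide with one of them. A Map-based version avoids that edge case entirely (a sketch, with a hypothetical `Item` shape):

```typescript
type Item = { id: string; parentId: string; active: boolean };

// Map keys never collide with Object.prototype, unlike plain-object keys.
function groupByParent(allItems: Item[]): Map<string, Item[]> {
  const byParent = new Map<string, Item[]>();
  for (const item of allItems) {
    const bucket = byParent.get(item.parentId);
    if (bucket) bucket.push(item);
    else byParent.set(item.parentId, [item]);
  }
  return byParent;
}

const allItems: Item[] = [
  { id: '1', parentId: 'root', active: true },
  { id: '2', parentId: '1', active: true },
  { id: '3', parentId: '1', active: false },
];

const itemsByParent = groupByParent(allItems);
const results = allItems
  .filter(item => item.active)
  .map(item => ({ ...item, relatedItems: itemsByParent.get(item.id) ?? [] }));

console.log(results[0].relatedItems.length); // item "1" has two children
```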
Step 5: Handle Database Query Complexity
AI can spot N+1 query problems. Prompt:
This function causes N+1 database queries. Show how to refactor with a single query:
async function getOrdersWithItems(userIds: string[]) {
  const orders = await db.orders.findMany({
    where: { userId: { in: userIds } }
  });
  for (const order of orders) {
    order.items = await db.orderItems.findMany({
      where: { orderId: order.id }
    });
  }
  return orders;
}
Use Prisma syntax for the solution.
AI response:
// ✅ Eager loading: items arrive in one batched query, not one per order
async function getOrdersWithItems(userIds: string[]) {
  return db.orders.findMany({
    where: { userId: { in: userIds } },
    include: {
      items: true // Prisma loads the relation for all orders at once
    }
  });
}
Why this works: Database round-trips dominate performance. One batched query returning 1,000 rows beats 1,000 separate queries every time.
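`include` is Prisma-specific. When your ORM lacks eager loading, the usual fallback is two queries — fetch the orders, then fetch all their items in one `in` query — plus in-memory grouping. The grouping step is pure and easy to test; the db calls below are hypothetical:

```typescript
type OrderItem = { id: string; orderId: string };
type Order = { id: string };
type OrderWithItems = Order & { items: OrderItem[] };

// Pure grouping step: O(orders + items) using a Map lookup table.
function attachItems(orders: Order[], items: OrderItem[]): OrderWithItems[] {
  const byOrder = new Map<string, OrderItem[]>();
  for (const item of items) {
    const bucket = byOrder.get(item.orderId);
    if (bucket) bucket.push(item);
    else byOrder.set(item.orderId, [item]);
  }
  return orders.map(o => ({ ...o, items: byOrder.get(o.id) ?? [] }));
}

// Hypothetical usage with a generic db client (two queries total):
// const orders = await db.orders.findMany({ where: { userId: { in: userIds } } });
// const items = await db.orderItems.findMany({
//   where: { orderId: { in: orders.map(o => o.id) } },
// });
// return attachItems(orders, items);
```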
Verification
Run your benchmarks before and after refactoring:
npx tsx benchmark.ts
You should see: 10x-1000x improvement for O(n²) → O(n) refactors, 5x-50x for N+1 query fixes.
Red flags:
- Improvement less than 2x: Either dataset too small or AI suggested marginal optimization
- Code harder to read: Reject the suggestion, performance isn't worth unmaintainable code
What You Learned
- AI identifies complexity patterns humans miss in method chains
- Always benchmark AI suggestions - they sometimes add overhead
- O(n²) → O(n) improvements are real, but O(n log n) → O(n) often aren't worth the complexity
Limitations:
- AI doesn't understand your data distribution (sorted vs random affects real performance)
- Benchmarks on 10k rows don't predict behavior at 10M rows
- Memory complexity matters too - AI often trades time for space
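The duplicate finder above is a concrete case of that last point: the Map version is O(n) time but holds every unique email in memory, while the nested loop is O(n²) time and O(1) extra space. A minimal sketch of the trade-off:

```typescript
// O(n²) time, O(1) extra space — no allocation beyond loop counters.
function hasDuplicateConstantSpace(emails: string[]): boolean {
  for (let i = 0; i < emails.length; i++) {
    for (let j = i + 1; j < emails.length; j++) {
      if (emails[i] === emails[j]) return true;
    }
  }
  return false;
}

// O(n) time, but the Set holds up to n strings — O(n) extra space.
function hasDuplicateLinearTime(emails: string[]): boolean {
  return new Set(emails).size !== emails.length;
}

console.log(hasDuplicateConstantSpace(['a', 'b', 'a'])); // true
console.log(hasDuplicateLinearTime(['a', 'b', 'a']));    // true
```

Under a tight memory limit, the "slower" version can be the right answer.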
When NOT to use AI for this:
- You need provably correct algorithms (use formal analysis)
- Performance is already acceptable (premature optimization)
- Code is called once per user session (network latency dominates)
Prompt Templates You Can Copy
General Complexity Analysis
Analyze the time and space complexity of this function:
[code]
Format:
- Time: O(?) - [why]
- Space: O(?) - [why]
- Bottleneck: [specific line numbers]
- Fix: [one concrete suggestion with code]
Database Query Optimization
This code has N+1 queries. Refactor to use eager loading:
[code]
Requirements:
- Use [Prisma/TypeORM/Sequelize] syntax
- Show before/after query counts
- Explain the join strategy
Real-World Constraints
Optimize this for production where:
- Dataset: 50k-500k items
- Memory limit: 512MB
- Must maintain readability
[code]
Show benchmark code and expected improvement %.
Tested with Claude Sonnet 4, GPT-4, Node.js 22.x, TypeScript 5.5, Prisma 5.x