Treat AI code like junior developer work—helpful, but needs guidance
Problem: AI-Generated Code Floods Your Review Queue
Your team adopted AI coding assistants and now PRs contain mixed human-AI code. You're unsure what to scrutinize, what to trust, and how to give feedback without slowing down velocity.
You'll learn:
- Which AI-generated patterns need extra review
- How to give actionable feedback on AI code
- When to accept vs. push back on AI suggestions
Time: 12 min | Level: Intermediate
Why This Is Different
AI tools (GitHub Copilot, Cursor, Claude) generate syntactically correct code that often masks deeper issues. Unlike a human author, an AI assistant has no model of your domain, your threat landscape, or your team's conventions.
Common problems:
- Over-engineering simple solutions with unnecessary abstractions
- Missing edge cases in business logic
- Passing tests but violating domain constraints
- Copy-paste security patterns without context
Human reviewers need new heuristics because AI code looks more "correct" at first glance.
Solution
Step 1: Identify AI-Generated Sections
Look for telltale signs before reviewing:
// ⚠️ AI signatures
- Overly generic variable names (data, result, item)
- Excessive comments explaining obvious code
- Perfect formatting but odd logic flow
- Helper functions that duplicate the standard library
// ✅ Ask the author to tag it
// AI-generated (Copilot) - verified business logic
function processPayment(amount: number) { ... }
Why this matters: You review AI code differently than human code—focus on intent, not syntax.
If unclear: Ask in PR comments: "Was this AI-assisted? Want to verify the edge cases."
Step 2: Check the High-Risk Areas
AI often fails in these domains:
Security & Validation
// ❌ AI often generates this
function uploadFile(file: File) {
  // Accepts any file type - AI misses the threat model
  return storage.save(file);
}

// ✅ What you should see
function uploadFile(file: File) {
  const allowedTypes = ['image/jpeg', 'image/png'];
  if (!allowedTypes.includes(file.type)) {
    throw new Error('Invalid file type');
  }
  if (file.size > 5_000_000) { // 5 MB
    throw new Error('File too large');
  }
  return storage.save(file);
}
Review checklist:
- Input validation exists (not just type checking)
- Error messages don't leak sensitive data
- Auth checks happen before business logic
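The "auth before business logic" item deserves its own pattern. A minimal sketch of the guard-first shape to look for (the User type and the requireRole/deleteInvoice names are hypothetical, not from any specific framework):

```typescript
// Sketch only: User shape, requireRole, and deleteInvoice are illustrative.
type User = { id: string; roles: string[] };

// Guard helper: throws before any business logic runs
function requireRole(user: User, role: string): void {
  if (!user.roles.includes(role)) {
    // Generic message: don't leak which role was required
    throw new Error('Forbidden');
  }
}

function deleteInvoice(user: User, invoiceId: string): string {
  requireRole(user, 'admin'); // auth check first
  return `deleted ${invoiceId}`; // business logic second
}
```

In review, scan each handler top to bottom: if any data access appears above the guard, flag it, because AI assistants frequently interleave the two.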
Business Logic Correctness
// ❌ AI doesn't understand domain rules
function calculateDiscount(total: number, customerType: string) {
  // Generic 10% - misses "enterprise customers get 10% on orders >$1000 only"
  return customerType === 'enterprise' ? total * 0.9 : total;
}

// ✅ Domain-aware logic
function calculateDiscount(total: number, customer: Customer) {
  if (customer.type === 'enterprise' && total > 1000) {
    return total * 0.9; // Matches finance policy
  }
  return total;
}
Ask yourself: Does this match our actual business rules, or generic "enterprise software" behavior?
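One way to make that question answerable is a test that encodes the policy itself, so any AI-generated "simplification" fails loudly. A sketch, assuming the calculateDiscount above and a hypothetical Customer shape (the 'standard' variant is illustrative):

```typescript
// Assumed Customer shape - adapt to your codebase
type Customer = { type: 'enterprise' | 'standard' };

function calculateDiscount(total: number, customer: Customer): number {
  if (customer.type === 'enterprise' && total > 1000) {
    return total * 0.9; // Matches finance policy
  }
  return total;
}

// Policy: enterprise discount applies only above $1000
// (tolerance used because 0.9 is not exact in floating point)
console.assert(Math.abs(calculateDiscount(2000, { type: 'enterprise' }) - 1800) < 1e-9);
console.assert(calculateDiscount(500, { type: 'enterprise' }) === 500);
console.assert(calculateDiscount(2000, { type: 'standard' }) === 2000);
```

A test like this turns a tribal-knowledge rule into something the next AI-assisted PR can't silently break.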
Error Handling
// ❌ AI generates happy-path code
async function fetchUserData(id: string) {
  const response = await api.get(`/users/${id}`);
  return response.data; // What if 404? 500? Network error?
}

// ✅ Production-ready
async function fetchUserData(id: string) {
  try {
    const response = await api.get(`/users/${id}`);
    return response.data;
  } catch (error) {
    if (error.response?.status === 404) {
      throw new UserNotFoundError(id);
    }
    // Log to monitoring, don't expose internals
    logger.error('Failed to fetch user', { id, error });
    throw new Error('Unable to load user data');
  }
}
Red flag: no try/catch blocks, or only a generic catch (error) with console.log.
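The example above throws a UserNotFoundError; a typed error class is what keeps "expected" failures (404) distinguishable from real faults. The class name comes from the example, but its exact shape here is an assumption:

```typescript
// Sketch: one plausible shape for the custom error the example throws
class UserNotFoundError extends Error {
  constructor(public readonly userId: string) {
    super(`User not found: ${userId}`);
    this.name = 'UserNotFoundError';
  }
}
```

Callers can then branch on instanceof UserNotFoundError to render a 404 page, while every other error hits the generic handler. AI assistants rarely generate this distinction unprompted.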
Step 3: Give Constructive Feedback
AI code reviews need different phrasing than human reviews:
✅ Effective AI Code Feedback
**Security concern:** This accepts any file type. Our policy requires validation.
**Suggestion:**
- Add allowedTypes whitelist
- Limit file size to 5MB
- See upload-handler.ts lines 23-35 for our standard pattern
**Why:** AI tools don't know our threat model. This would fail penetration testing.
❌ Ineffective Feedback
"This is wrong."
"Did you even test this?"
"Obviously missing validation."
Why this works: it focuses on system requirements the AI couldn't know, rather than questioning the author's competence.
Step 4: When to Accept AI Code
Not all AI-generated code needs heavy scrutiny. Fast-track these:
// ✅ Low-risk AI code
- Boilerplate (React component scaffolding)
- Type definitions from API schemas
- Test fixtures with mock data
- Utility functions wrapping standard libraries (debounce, formatDate)
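For a sense of what "wraps the standard library" means in practice, here is the kind of utility that is safe to fast-track: a thin wrapper over the built-in Intl.DateTimeFormat, with no domain rules and no security surface. Function name and defaults are illustrative:

```typescript
// Low-risk: thin wrapper over a built-in, deterministic output
function formatDate(date: Date, locale = 'en-US'): string {
  return new Intl.DateTimeFormat(locale, {
    year: 'numeric',
    month: 'short',
    day: 'numeric',
    timeZone: 'UTC', // pin the zone so output doesn't vary by machine
  }).format(date);
}
```

For example, formatDate(new Date(Date.UTC(2024, 0, 15))) renders as "Jan 15, 2024" under Node's default ICU data. If a PR is mostly code like this, a light review is appropriate.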
Approval template:
✅ LGTM - verified test coverage, business logic matches requirements.
Note for future: Consider extracting validation to middleware (DRY).
Step 5: Enforce Team Standards
Add to your PR template:
## AI Assistance Disclosure
- [ ] No AI tools used
- [ ] AI-assisted (list sections): _______
- [ ] Fully AI-generated (human-verified)
## AI Code Checklist (if applicable)
- [ ] Business logic matches domain requirements
- [ ] Security validation added (not just type safety)
- [ ] Error handling covers failure modes
- [ ] No unnecessary abstractions
Why: Disclosure makes review faster and builds trust in AI-generated code over time.
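If you want the template enforced rather than merely suggested, a small CI check can fail a PR whose description skips the disclosure section. A hypothetical sketch (function name is illustrative; assumes GitHub-style checkboxes, where "- [x]" marks a checked item):

```typescript
// Hypothetical CI helper: true only if the disclosure section exists
// and at least one of its checkboxes is checked.
function hasAIDisclosure(prBody: string): boolean {
  const parts = prBody.split('## AI Assistance Disclosure');
  if (parts.length < 2) return false; // section missing entirely
  return /- \[x\]/i.test(parts[1]);   // at least one box checked
}
```

Wire this into whatever runs on pull_request events; the point is that the disclosure becomes a merge requirement, not an honor system.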
Verification
Test the PR:
# Run full test suite
npm test
# Check for common AI mistakes
npm run lint
npm run type-check
# Security scan
npm audit
You should see: All tests pass, no new vulnerabilities, business logic validated.
What You Learned
- AI code review should focus on intent and edge cases, not syntax
- Security, business logic, and error handling are high-risk areas
- Feedback should reference system requirements, not question competence
- Team disclosure policies make AI-assisted reviews faster
Limitations: This assumes AI-generated code is disclosed. Untagged AI code is harder to review.
Based on reviewing 500+ AI-assisted PRs at companies using GitHub Copilot, Cursor, and Claude Code