Problem: AI Gives You Wikipedia When You Need a Mentor
You paste a complex algorithm into ChatGPT or Claude and get a 500-word essay about Big O notation when you just need to understand why line 47 swaps those two variables.
You'll learn:
- The 3-prompt framework senior engineers use
- How to get explanations at the right abstraction level
- Why "explain this code" fails (and what works)
Time: 12 min | Level: Intermediate
Why This Happens
Generic prompts like "explain this algorithm" trigger AI to give textbook answers optimized for beginners who need everything. You're not a beginner—you understand loops and recursion. You need to understand this specific implementation's clever tricks.
Common symptoms:
- AI explains basic concepts you already know
- Response doesn't address the tricky part you're stuck on
- Too abstract ("this optimizes performance") or too detailed (line-by-line walkthrough)
- No connection to similar patterns you've seen before
Solution
Step 1: Set Context Like a Code Review
Don't just paste code. Tell the AI your skill level and what's confusing you.
# Instead of: "Explain this algorithm"
# Use this format:
"""
I'm debugging a production issue with Dijkstra's shortest path.
I understand graph traversal and priority queues.
What I DON'T understand: Why does this implementation update
distances AFTER popping from the queue instead of BEFORE pushing?
[paste the 30-line function here]
Walk me through the specific design choice in lines 23-27.
"""
Why this works: You've told the AI:
- Your knowledge baseline (skip the intro)
- The exact confusion point (focus here)
- What level of detail you need (specific lines)
Expected: Response targets your actual question, not Algorithm 101.
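For reference, here is a minimal sketch of the design choice that prompt asks about (not the code from the prompt, which isn't shown): the "lazy deletion" Dijkstra variant that finalizes a node's distance after popping it, instead of decreasing keys before pushing.

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} -- assumed adjacency-list shape
    dist = {}                     # node -> finalized shortest distance
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in dist:
            continue              # stale entry: u was already finalized cheaper
        dist[u] = d               # distance finalized AFTER popping
        for v, w in graph.get(u, []):
            if v not in dist:
                heapq.heappush(heap, (d + w, v))  # duplicates allowed; stale ones skipped
    return dist
```

A binary heap has no cheap decrease-key operation, so pushing duplicates and skipping stale pops is simpler and equally correct — which is typically the insight such a prompt surfaces.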
Step 2: Ask for Mental Models, Not Mechanics
Senior engineers don't memorize steps—they understand the invariant the algorithm maintains.
# Example: Binary search tree deletion
def find_min(node):
    # The leftmost node holds the subtree's smallest value
    while node.left:
        node = node.left
    return node

def delete_node(root, key):
    if not root:
        return None
    if key < root.val:
        root.left = delete_node(root.left, key)
    elif key > root.val:
        root.right = delete_node(root.right, key)
    else:
        # The confusing part
        if not root.left:
            return root.right
        if not root.right:
            return root.left
        # Why find the minimum of the right subtree?
        min_node = find_min(root.right)
        root.val = min_node.val
        root.right = delete_node(root.right, min_node.val)
    return root
Effective prompt:
Don't explain what each line does. Instead:
1. What's the invariant this deletion maintains?
2. Why does replacing with right subtree's minimum preserve BST property?
3. Draw me a 3-node example where the naive approach would break.
Assume I know recursion and tree traversal.
What you get: "The invariant is that for every node, left < node < right. When deleting a node with two children, you need the smallest value that's still larger than everything in the left subtree—that's the right subtree's minimum. Here's where naive deletion fails: [concrete example]"
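The "naive deletion fails" claim is easy to check by hand. A small sketch (hypothetical Node class; both result trees are built directly so the contrast is visible) comparing the naive splice, which copies the right child's value up, against the subtree-minimum replacement:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(n):
    # In-order traversal of a valid BST is always sorted -- the invariant
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

# Delete 5 from:    5
#                  / \
#                 3   8
#                    /
#                   6
# Naive: promote the right CHILD's value (8); 6 lands in 8's right subtree
naive = Node(8, Node(3), Node(6))
print(inorder(naive))    # [3, 8, 6] -- not sorted: BST property broken

# Correct: promote the right subtree's MINIMUM (6), the smallest value
# still greater than everything in the left subtree
correct = Node(6, Node(3), Node(8))
print(inorder(correct))  # [3, 6, 8] -- still sorted: invariant preserved
```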
Step 3: Use the Comparison Technique
Ask AI to contrast the algorithm with something you already know.
// You understand quicksort but not merge sort
const merge = (a: number[], b: number[]): number[] => {
  const out: number[] = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) out.push(a[i] <= b[j] ? a[i++] : b[j++]); // <= keeps equal keys in order (stability)
  return out.concat(a.slice(i), b.slice(j));
};

const mergeSort = (arr: number[]): number[] => {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
};
Powerful prompt:
I know quicksort picks a pivot and partitions in-place.
Compare merge sort to quicksort:
- Where does each do the "real work" (split vs merge)?
- Why is merge sort stable but quicksort isn't?
- Show me the memory trade-off with a 4-element array trace.
No explanations longer than 3 sentences per point.
Why this works: You're building on existing mental models instead of starting from scratch. The constraints force concise, actionable insights.
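The memory trade-off the prompt asks about can be made concrete with an instrumented sketch (the `allocations` counter is illustrative, not part of any standard API): every merge builds a new list, which is the O(n) extra space quicksort's in-place partitioning avoids.

```python
allocations = []  # records every temporary list a merge creates

def merge(left, right):
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps equal keys in order: stability
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out += left[i:] + right[j:]
    allocations.append(out)           # every merge allocates a fresh list
    return out

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([3, 1, 4, 2]))       # [1, 2, 3, 4]
print(len(allocations))               # 3 merges: [1,3], [2,4], [1,2,3,4]
```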
Step 4: Request Failure Cases
Understanding when an algorithm breaks is often clearer than understanding when it works.
// Greedy algorithm for coin change
func coinChange(coins []int, amount int) int {
    sort.Ints(coins)
    count := 0
    for i := len(coins) - 1; i >= 0; i-- {
        for amount >= coins[i] {
            amount -= coins[i]
            count++
        }
    }
    if amount > 0 {
        return -1
    }
    return count
}
Insight-generating prompt:
Give me 2 examples:
1. Input where greedy coin change finds optimal solution
2. Input where it fails (and what dynamic programming does differently)
Show the exact moment where the greedy choice becomes irreversible.
What you learn: "Greedy works for coins [1,5,10,25] but fails for [1,3,4] with amount=6. Greedy picks 4+1+1=3 coins. Optimal is 3+3=2 coins. The mistake happens at the first choice—once you pick 4, you can't undo it. DP considers both options at each step."
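The greedy-vs-DP contrast above can be reproduced in a few lines. A sketch of both approaches on the same failing input:

```python
def greedy_coins(coins, amount):
    count = 0
    for c in sorted(coins, reverse=True):  # always grab the biggest coin that fits
        while amount >= c:
            amount -= c
            count += 1
    return count if amount == 0 else -1

def dp_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount            # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount] if best[amount] < INF else -1

print(greedy_coins([1, 3, 4], 6))  # 3  (4 + 1 + 1: the first pick of 4 is irreversible)
print(dp_coins([1, 3, 4], 6))      # 2  (3 + 3: DP keeps both options open at every step)
```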
Verification
Test your understanding:
# Can you predict what this does without running it?
def mystery_algorithm(nums):
    for i in range(len(nums)):
        nums[i] = nums[i] ^ nums[-1]
    return nums
If you understand the techniques:
- You'd ask AI: "What's the invariant after each iteration?"
- Not: "What does this code do?"
You should be able to: Explain the algorithm to a teammate in 60 seconds after reading the AI response.
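Whatever hypothesis you form, verify it against a small input before trusting any explanation (yours or the AI's). A sketch of that check, with a hypothesis written as an expression rather than hard-coded values:

```python
def mystery_algorithm(nums):
    for i in range(len(nums)):
        nums[i] = nums[i] ^ nums[-1]
    return nums

# Form a hypothesis first, then check it on a concrete case:
hypothesis = [5 ^ 7, 6 ^ 7, 7 ^ 7]  # e.g. "every element is XORed with the original last value"
assert mystery_algorithm([5, 6, 7]) == hypothesis
```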
Advanced: The Debugging Prompt Template
When AI-generated code doesn't work:
This [algorithm name] implementation fails on [specific input].
Expected: [output]
Got: [actual output]
I've verified:
- [assumption 1]
- [assumption 2]
My hypothesis: [your best guess]
Either confirm my hypothesis with a trace, or show me what I'm missing.
One paragraph max.
Why this works: You're collaborating with AI, not outsourcing thinking. You've done the debugging work; AI helps you over the last hurdle.
What You Learned
- Generic prompts get generic answers—specify your skill level and confusion point
- Ask for invariants and mental models, not line-by-line explanations
- Compare to algorithms you know to build on existing understanding
- Failure cases often teach more than success cases
- Constrain AI responses (max 3 sentences, 1 paragraph) to force clarity
Limitations:
- AI can hallucinate wrong explanations—verify with test cases
- Works best for well-known algorithms (obscure research papers need different approach)
- You still need to code it yourself to truly understand
Real Example: Understanding Topological Sort
Bad prompt:
Explain topological sort
Good prompt (senior engineer style):
I'm debugging a build system that resolves dependencies.
I know DFS and BFS.
Topological sort uses DFS but adds nodes to result AFTER
visiting children, not before. Why?
Give me a 4-node graph where pre-order fails but post-order works.
Then one sentence explaining the invariant.
AI response you'd get: "Post-order ensures you add a node only after everything it depends on is already in the result. Graph: A→B→C, A→D→C, where an edge X→Y means X depends on Y. Pre-order gives [A,B,C,D] (wrong: A is emitted before its dependencies B, C, and D are built). Post-order gives [C,B,D,A] or [C,D,B,A] (correct: every node appears after all of its dependencies). Invariant: a node is appended only once all of its dependencies are in the result."
Result: You understand topological sort in 30 seconds instead of re-reading the Wikipedia article.
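You can verify the post-order claim yourself with a minimal sketch (hypothetical `deps` mapping, where each entry lists the things a node depends on) that produces a dependencies-first build order:

```python
def topo_sort(deps):
    # deps: {node: [nodes it depends on]} -- post-order DFS
    order, seen = [], set()

    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for d in deps.get(n, []):
            visit(d)
        order.append(n)   # AFTER visiting children: post-order

    for n in deps:
        visit(n)
    return order

# A depends on B and D; B and D both depend on C
deps = {"A": ["B", "D"], "B": ["C"], "D": ["C"], "C": []}
print(topo_sort(deps))    # ['C', 'B', 'D', 'A'] -- every node after its dependencies
```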
Quick Reference: Prompting Patterns
| Instead of | Use |
|---|---|
| "Explain this code" | "I understand X and Y. What I don't get is Z in lines N-M" |
| "How does this work?" | "What invariant does this maintain?" |
| "What's the algorithm?" | "Compare this to [algorithm I know]" |
| "Is this correct?" | "Here's my input/output/hypothesis. Trace it or correct me" |
| Full code dump | 15-30 lines focused on the confusing part |
Tested with Claude Sonnet 4, ChatGPT (GPT-4), and GitHub Copilot Chat on algorithm implementations from LeetCode Hard problems and production codebases