Problem: Your Code Works Locally But Fails in Production
Your application runs perfectly on your laptop but crashes, throws errors, or behaves differently when deployed. You've checked the obvious stuff—environment variables, dependencies—but the bug persists.
You'll learn:
- How to systematically capture environment differences
- Using AI to spot non-obvious configuration mismatches
- A 3-step debugging workflow that finds the root cause
Time: 12 min | Level: Intermediate
Why This Happens
"Works on my machine" bugs stem from environmental differences: OS versions, installed libraries, default configs, file paths, timezone settings, or implicit dependencies your local setup masks.
Common symptoms:
- ModuleNotFoundError in Docker but not locally
- Different behavior in CI/CD pipelines
- Time-based bugs (dates, scheduling) that only appear in certain timezones
- Permission errors on Linux servers but not macOS
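The time-based symptom above is easy to reproduce on demand: the same epoch timestamp falls on different calendar days depending on the server's timezone. A quick demonstration (GNU date syntax; the BSD date on macOS uses different flags):

```shell
# Same instant, different calendar day depending on TZ (GNU date):
ts=1700000000                       # 2023-11-14 22:13:20 UTC
TZ=UTC date -d "@$ts" +%F           # prints 2023-11-14
TZ=Asia/Tokyo date -d "@$ts" +%F    # prints 2023-11-15
```

Any date logic that assumes "today" is the same everywhere will disagree across these two servers.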
The root issue: Humans miss subtle differences. AI excels at pattern matching across large config dumps.
Solution
Step 1: Capture Your Environment State
Create comprehensive snapshots of both environments.
# Local machine snapshot: write a small script, then run it
cat > local_env.txt << 'EOF'
echo "=== System Info ==="
uname -a
python --version 2>&1 || echo "Python not found"
node --version 2>&1 || echo "Node not found"
echo "=== Environment Variables ==="
env | grep -E 'PATH|PYTHON|NODE|HOME|USER|SHELL' | sort
echo "=== Installed Packages ==="
pip list 2>&1 | head -20 || echo "pip not available"
npm list -g --depth=0 2>&1 | head -20 || echo "npm not available"
echo "=== Application Config ==="
cat .env 2>&1 || echo "No .env file"
cat config.json 2>&1 || echo "No config.json"
cat package.json 2>&1 || echo "No package.json"
EOF
bash local_env.txt > local_snapshot.txt
For production/staging:
# SSH into server or use CI logs
ssh user@prod-server 'bash -s' < local_env.txt > prod_snapshot.txt
Why this works: Captures system context, dependencies, and configs in one file AI can analyze.
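Before pasting snapshots into an AI chat, a mechanical comparison can already narrow things down. A sketch using comm on sorted package lists (the file contents here are made-up stand-ins for real pip freeze output):

```shell
# Mechanically diff two package lists (stand-in data for pip freeze output).
# comm needs sorted input; -3 suppresses lines common to both files,
# so only local-only and prod-only packages remain.
printf 'flask==3.0.0\nrequests==2.31.0\ntoml==0.10.2\n' > local_pkgs.txt
printf 'flask==3.0.0\nrequests==2.28.0\n' > prod_pkgs.txt
comm -3 local_pkgs.txt prod_pkgs.txt
```

In practice you would generate the two files with pip freeze locally and over ssh, then hand the AI only the surviving differences.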
Step 2: Use AI to Compare Environments
Upload both snapshots to Claude or ChatGPT with this prompt:
I have a "works on my machine" bug. My app works locally but fails in production with this error:
[PASTE YOUR ERROR MESSAGE HERE]
Here's my local environment:
[PASTE local_snapshot.txt]
Here's production:
[PASTE prod_snapshot.txt]
Analyze these environments and identify:
1. Critical differences that could cause this error
2. Missing dependencies or version mismatches
3. Config values that differ between environments
4. Suggest 3 specific things to check first
Expected output: AI will highlight version differences, missing env vars, path issues, or implicit dependencies.
Pro tip: Use Claude's Artifacts feature to generate a comparison table of differences.
Step 3: Verify the AI's Hypothesis
AI will typically identify 2-4 potential causes. Test them in order of likelihood:
# Example: AI found Python version mismatch (3.11 local, 3.9 prod)
# Test on local with matching version
docker run -it python:3.9 bash
# Install your app and reproduce the error
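Before (or alongside) the Docker reproduction, it is worth confirming exactly which interpreter each side runs, since version strings buried in snapshots are easy to misread. A minimal sketch (the prod hostname is a placeholder):

```shell
# Print the local interpreter's major.minor version; compare against prod's.
local_py=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
echo "local: $local_py"
# ssh user@prod-server 'python3 --version'   # placeholder host; run to compare
```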
If the first hypothesis fails:
- Ask AI: "The Python version wasn't it. What else could cause [error] given these diffs?"
- AI refines its analysis with your feedback
Common discoveries:
- Missing system libraries (e.g., libpq-dev for PostgreSQL)
- Case-sensitive file paths (Linux vs macOS/Windows)
- Default timezone differences affecting date logic
- Implicit global packages installed locally but not in prod
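The case-sensitivity pitfall in the list above can be reproduced without a server: the lookup below fails on Linux but succeeds on macOS's default case-insensitive filesystem. The file names are made up for the demonstration:

```shell
# Create Config.json, then look it up with different casing.
mkdir -p /tmp/case_demo && echo '{}' > /tmp/case_demo/Config.json
if [ -f /tmp/case_demo/config.json ]; then
  echo "found: case-insensitive filesystem (e.g. macOS default)"
else
  echo "missing: case-sensitive filesystem (e.g. Linux)"
fi
```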
Advanced: Automate Environment Comparison
Create a reusable debugging script:
#!/bin/bash
# save as debug_env.sh
echo "=== Docker Info ===" && docker --version 2>&1
echo "=== Language Versions ==="
python --version 2>&1; node --version 2>&1; go version 2>&1
echo "=== Critical Env Vars ==="
env | grep -iE 'path|lang|tz|python|node' | sort
echo "=== Disk Space ===" && df -h / 2>&1 | head -2
echo "=== Network ===" && curl -I https://pypi.org 2>&1 | head -5
echo "=== App Config ===" && ls -la .env* config* 2>&1
Usage:
# Generate snapshots with one command
chmod +x debug_env.sh
./debug_env.sh > local.txt
ssh prod 'bash -s' < debug_env.sh > prod.txt
Verification
After applying the AI-suggested fix:
# Deploy to staging first
git push staging main
# Check logs for the same error
kubectl logs -f deployment/myapp | grep ERROR
Expected result: the error disappears, or changes into a different one (a sign of progress).
What You Learned
- Environment bugs hide in system-level differences, not just code
- AI spots patterns humans miss in config dumps
- Systematic environment capture beats random guessing
- Use Docker to reproduce prod environments locally
Limitations:
- AI can't access your actual systems—you must provide the data
- Some bugs require multiple iterations to isolate
- Doesn't work for race conditions or load-based issues
Real-World Example
Bug: FastAPI app worked locally, returned 500 errors in AWS Lambda.
AI discovered: Local used Python 3.11's new tomllib (built-in), but Lambda runtime was Python 3.9 (required toml package).
Fix: Added toml to requirements.txt.
Time saved: 3 hours of manual debugging → 8 minutes with AI comparison.
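A startup guard would have surfaced this root cause immediately instead of as a 500 at request time. A sketch of such a check, as shell wrapping inline Python:

```shell
# Fail fast at startup if the runtime lacks a TOML parser, instead of
# erroring later inside a request handler.
python3 - <<'PY'
try:
    import tomllib  # standard library since Python 3.11
    print("using stdlib tomllib")
except ModuleNotFoundError:
    # Older runtimes need a third-party parser pinned in requirements.txt
    print("no stdlib tomllib: pin a TOML package in requirements.txt")
PY
```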
Troubleshooting
"AI says versions match but bug persists"
- Check implicit dependencies: pip freeze vs pip list
- Look at library load paths: python -c "import sys; print(sys.path)"
- Compare timezones: date command output
"Too much output for AI context window"
- Filter snapshots: grep -v "^#" | grep -v "^$" (removes comments/empty lines)
- Focus on error-relevant sections (e.g., if the error mentions SSL, include the OpenSSL version)
"Error only happens intermittently"
- This method works for deterministic bugs; for race conditions, use load testing + profiling instead
Tested with Claude Sonnet 4.5, ChatGPT-4, Docker 27.x, Python 3.9-3.12, Node.js 20-22