Problem: Your Web Vitals Are Failing Audits
You run Lighthouse and see red scores for LCP, CLS, or INP. Manual optimization takes hours of trial and error, and you're not sure which fixes actually matter.
You'll learn:
- How to use Claude to diagnose Web Vitals issues from real data
- Automated AI tools that fix common performance problems
- Priority order for maximum score improvement
Time: 20 min | Level: Intermediate
Why This Happens
Core Web Vitals failures stem from three main issues: slow resource loading (LCP), layout shifts (CLS), and sluggish interactions (INP). Traditional debugging requires deep performance-profiling knowledge; AI can parse the same trace data and suggest targeted fixes.
Common symptoms:
- LCP > 2.5s (slow initial content)
- CLS > 0.1 (elements jumping around)
- INP > 200ms (delayed button clicks)
- Google Search Console warnings
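The thresholds above are Google's published "good" limits, and the pass/fail logic is mechanical. A minimal sketch (the helper name and input shape are illustrative) that classifies measured vitals against them:

```javascript
// Google's "good" thresholds: LCP <= 2.5s, CLS <= 0.1, INP <= 200ms
const THRESHOLDS = { lcp: 2500, cls: 0.1, inp: 200 }; // lcp/inp in ms

// Returns 'pass' or 'fail' for each known metric present in `vitals`
function auditVitals(vitals) {
  const result = {};
  for (const [name, value] of Object.entries(vitals)) {
    if (name in THRESHOLDS) {
      result[name] = value <= THRESHOLDS[name] ? 'pass' : 'fail';
    }
  }
  return result;
}

console.log(auditVitals({ lcp: 3800, cls: 0.25, inp: 420 }));
// all three metrics fail in this example
```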
Solution
Step 1: Capture Real Performance Data
# Optional: install the DevTools Recorder replay CLI
npm install -g @puppeteer/replay
# Record a session: open DevTools -> Performance panel -> Record,
# interact with the page, stop the recording, then export the trace as JSON
Expected: A .json file with timing data, layout shifts, and interaction delays.
For production data, add Real User Monitoring (RUM):
// pages/_app.tsx (Next.js)
// Newer versions of web-vitals use onCLS/onLCP instead of getCLS/getLCP,
// and FID has been replaced by INP
import { onCLS, onINP, onLCP } from 'web-vitals';
function sendToAnalytics(metric) {
  // Send to your analytics endpoint
  fetch('/api/vitals', {
    method: 'POST',
    body: JSON.stringify(metric),
    keepalive: true, // lets the request survive page unload
  });
}
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
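On the server side, field data is usually summarized at the 75th percentile, which is the value Google uses to assess Core Web Vitals. A minimal sketch of that aggregation, assuming you collect raw metric samples into plain arrays (the nearest-rank method shown is one of several valid percentile definitions):

```javascript
// 75th percentile of a list of samples (nearest-rank method)
function p75(samples) {
  if (samples.length === 0) return null;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[rank];
}

// Example: ten LCP samples in ms collected from the /api/vitals endpoint
const lcpSamples = [1200, 1400, 1500, 1800, 2100, 2300, 2600, 2900, 3400, 4100];
console.log(p75(lcpSamples)); // 2900
```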
Step 2: Get AI Analysis from Claude
Upload your performance JSON or Lighthouse report to Claude with this prompt:
I have Web Vitals issues on my [framework] site:
- LCP: [X]s
- CLS: [X]
- INP: [X]ms
Here's my Lighthouse report [paste JSON].
Analyze the top 3 bottlenecks and give me code fixes prioritized by impact.
Include before/after expectations.
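Filling in that template by hand gets tedious across runs, and the numbers it needs live in the Lighthouse JSON. A sketch of a prompt builder (the function and the fake report are illustrative; `largest-contentful-paint` and `cumulative-layout-shift` are standard Lighthouse audit ids, while INP is a field metric and so is passed in separately):

```javascript
// Build the analysis prompt from a parsed Lighthouse report.
// numericValue for the LCP audit is in milliseconds.
function buildPrompt(report, framework, inpMs) {
  const lcpMs = report.audits['largest-contentful-paint'].numericValue;
  const cls = report.audits['cumulative-layout-shift'].numericValue;
  return [
    `I have Web Vitals issues on my ${framework} site:`,
    `- LCP: ${(lcpMs / 1000).toFixed(1)}s`,
    `- CLS: ${cls.toFixed(2)}`,
    `- INP: ${inpMs}ms`,
    'Analyze the top 3 bottlenecks and give me code fixes prioritized by impact.',
  ].join('\n');
}

// Tiny stand-in for a real Lighthouse report
const fakeReport = {
  audits: {
    'largest-contentful-paint': { numericValue: 3800 },
    'cumulative-layout-shift': { numericValue: 0.25 },
  },
};
console.log(buildPrompt(fakeReport, 'Next.js', 420));
```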
Why this works: large language models have absorbed vast amounts of performance documentation, bug reports, and optimization code, so they can pattern-match your specific symptoms to proven fixes faster than manual analysis.
Example Claude output:
Priority 1: LCP from unoptimized hero image (1.6s impact)
- Add <link rel="preload" as="image"> for hero.jpg
- Serve WebP with <picture> fallback
- Expected LCP: 2.8s → 1.2s
Priority 2: CLS from late-loading fonts (0.15 score)
- Use font-display: swap with fallback metrics
- Preload critical WOFF2 files
- Expected CLS: 0.15 → 0.02
Priority 3: INP from unoptimized React renders (160ms impact)
- Wrap expensive components in React.memo()
- Debounce search input handlers
- Expected INP: 280ms → 120ms
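Priority 3 suggests debouncing input handlers. A debounce wrapper delays the handler until the user pauses, so a burst of keystrokes triggers one expensive call instead of many. A minimal sketch:

```javascript
// Returns a wrapper that fires only `delay` ms after the LAST call
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Usage: an expensive search handler that now runs once per typing pause
let searches = 0;
const search = debounce((query) => { searches += 1; }, 200);
search('w'); search('we'); search('web'); // collapses into one call, 200ms later
```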
Step 3: Apply Automated Fixes
Use AI-powered tools that implement fixes automatically:
# Add the Next.js bundle analyzer (wired into next.config.js as a plugin)
npm install @next/bundle-analyzer
# Images served through the built-in next/image component are optimized automatically
# For vanilla sites, use Partytown for third-party scripts
npm install @builder.io/partytown
Partytown setup (offloads analytics to web worker):
<!-- index.html -->
<script>
partytown = {
forward: ['dataLayer.push']
};
</script>
<script type="text/partytown" src="https://www.googletagmanager.com/gtag/js"></script>
Why this works: Third-party scripts run in a web worker, eliminating main thread blocking that causes INP issues.
Step 4: Fix Critical Rendering Path
For Next.js 15+:
// next.config.js
module.exports = {
images: {
formats: ['image/avif', 'image/webp'],
deviceSizes: [640, 750, 828, 1080, 1200],
},
experimental: {
optimizeCss: true, // Inlines critical CSS (requires installing critters)
}
}
For React/Vite:
// vite.config.ts
import { defineConfig } from 'vite';
import { ViteImageOptimizer } from 'vite-plugin-image-optimizer';
export default defineConfig({
plugins: [
ViteImageOptimizer({
test: /\.(jpe?g|png|gif|svg)$/i,
includePublic: true,
logStats: true,
})
]
});
Step 5: Validate with AI-Powered Testing
# Install Unlighthouse (bulk Lighthouse testing)
npm install -g @unlighthouse/cli
# Scan entire site
unlighthouse --site https://yoursite.com --build-static
# Export a JSON summary (written to the .unlighthouse output directory),
# then paste or pipe it into Claude for analysis
unlighthouse-ci --site https://yoursite.com --reporter jsonExpanded
Claude analysis prompt:
Here are Lighthouse scores across 50 pages [paste JSON].
Which pages have the worst scores and what's the common root cause?
Give me one fix that improves the most pages.
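Whatever shape your Unlighthouse export takes, the triage step reduces to sorting pages by score. A sketch assuming a simple `{ url, performanceScore }` array (your report's field names may differ):

```javascript
// Return the n worst-scoring pages, lowest performance score first
function worstPages(pages, n = 5) {
  return [...pages]
    .sort((a, b) => a.performanceScore - b.performanceScore)
    .slice(0, n)
    .map((p) => p.url);
}

const pages = [
  { url: '/pricing', performanceScore: 0.92 },
  { url: '/blog', performanceScore: 0.48 },
  { url: '/', performanceScore: 0.71 },
  { url: '/docs', performanceScore: 0.55 },
];
console.log(worstPages(pages, 2)); // ['/blog', '/docs']
```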
If it fails:
- "Rate limited" errors: Claude has usage limits; batch your requests or use the API
- Scores not improving: confirm the browser cache is disabled while testing
Verification
Run Lighthouse in Chrome DevTools (Cmd+Opt+I on macOS or F12 on Windows/Linux → Lighthouse tab):
# Or use CLI
npm install -g lighthouse
lighthouse https://yoursite.com --view
You should see:
- LCP: < 2.5s (green)
- CLS: < 0.1 (green)
- INP: < 200ms (green)
Before/After Example:
Before:
LCP: 3.8s → After: 1.4s (63% faster)
CLS: 0.25 → After: 0.04 (84% better)
INP: 420ms → After: 140ms (67% faster)
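The percentage deltas above all follow from (before − after) / before; a quick check of the arithmetic:

```javascript
// Percent improvement, rounded to the nearest whole percent
const improvement = (before, after) => Math.round(((before - after) / before) * 100);

console.log(improvement(3.8, 1.4)); // 63 (LCP)
console.log(improvement(0.25, 0.04)); // 84 (CLS)
console.log(improvement(420, 140)); // 67 (INP)
```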
What You Learned
- AI analyzes performance traces faster than manual debugging
- Automated tools (Partytown, image optimizers) handle most of the routine fixes
- Prioritize LCP fixes first - biggest score impact per effort
Limitations:
- AI suggestions need validation on your specific stack
- Some framework-specific issues require human review
- Results vary by hosting environment (CDN vs. origin server)
When NOT to use AI:
- Custom rendering engines (not React/Vue/Next/Nuxt)
- Complex animations requiring frame-level profiling
- Infrastructure issues (slow origin servers)
Bonus: AI Tools for Ongoing Monitoring
1. Claude Code (CLI)
# Install Claude Code
npm install -g @anthropic-ai/claude-code
# From your repo root, ask it to investigate a regression
claude -p "Lighthouse shows an LCP regression on the homepage; find likely causes in this codebase"
2. GitHub Copilot Workspace
Add this to .github/workflows/vitals.yml:
name: Web Vitals Check
on: [pull_request]
jobs:
vitals:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run Lighthouse
uses: treosh/lighthouse-ci-action@v11
with:
urls: |
https://preview-${{ github.event.number }}.yoursite.com
uploadArtifacts: true
- name: AI Analysis
run: |
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: ${{ secrets.ANTHROPIC_API_KEY }}" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-sonnet-4-20250514", "max_tokens": 1024, "messages": [...]}'
This posts AI analysis as PR comments automatically.
3. Vercel Speed Insights + Claude
// app/api/analyze-vitals/route.ts
import Anthropic from '@anthropic-ai/sdk';
export async function POST(req: Request) {
const vitals = await req.json();
const client = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
});
const analysis = await client.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1000,
messages: [{
role: "user",
content: `Analyze these Real User Monitoring metrics: ${JSON.stringify(vitals)}.
What's causing the P75 LCP regression?`
}]
});
return Response.json(analysis);
}
Common AI Analysis Patterns
Claude typically identifies these root causes:
LCP Issues:
- "Unoptimized hero image" → Suggest WebP + preload
- "Render-blocking CSS" → Inline critical styles
- "Slow server response" → Check TTFB, suggest caching
CLS Issues:
- "Missing width/height on images" → Add dimensions
- "Web font swap" → Use font-display: optional
- "Dynamic content injection" → Reserve space with skeleton
INP Issues:
- "Heavy JavaScript on main thread" → Suggest code splitting
- "Unoptimized event handlers" → Recommend debouncing
- "Large DOM updates" → Use React.memo or virtual scrolling
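React.memo skips a re-render when a shallow comparison of props finds no change. A sketch of that comparison logic (illustrative, not React's exact source) helps explain both when it saves work and why passing a freshly created array or object defeats it:

```javascript
// Shallow props comparison, the check React.memo performs by default
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  // Object.is matches React's semantics (handles NaN and -0 correctly)
  return prevKeys.every((key) => Object.is(prev[key], next[key]));
}

console.log(shallowEqual({ q: 'web', page: 1 }, { q: 'web', page: 1 })); // true
console.log(shallowEqual({ q: 'web' }, { q: 'vitals' })); // false
console.log(shallowEqual({ items: [1] }, { items: [1] })); // false: new array identity
```

This is why memoized components still re-render when a parent passes inline object or array literals: identical contents, different identity.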
Pro Tip: Claude can also explain WHY a fix works, which helps your team learn performance optimization principles.
Tested with Next.js 15.1, React 19, Chrome 132, and Lighthouse 12.x. AI analysis verified with Claude Sonnet 4.5 (Feb 2026).