Problem: SonarQube Finds Issues But Doesn't Fix Them
Your CI/CD pipeline fails on SonarQube quality gates, forcing developers to manually fix code smells, security hotspots, and maintainability issues. This slows down deployments and creates frustration.
You'll learn:
- How to connect Claude AI to SonarQube analysis results
- Automatic PR generation with fixes for detected issues
- Production-safe guardrails to prevent bad AI changes
Time: 20 min | Level: Intermediate
Why This Happens
SonarQube excels at finding issues but only reports them. Traditional workflows require developers to context-switch, understand each issue, and manually apply fixes. Modern LLMs can understand SonarQube's JSON output and generate contextually appropriate fixes.
Common pain points:
- Builds blocked by code smell thresholds
- Security hotspots requiring manual review
- Junior developers unsure how to fix complex issues
- Tech debt accumulating faster than teams can address it
Solution
Step 1: Set Up SonarQube Analysis in CI
Add SonarQube scanning to your GitHub Actions workflow.
```yaml
# .github/workflows/sonar-ai-fix.yml
name: SonarQube + AI Auto-Fix

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for better analysis

      - name: SonarQube Scan
        uses: sonarsource/sonarqube-scan-action@v2
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
        with:
          args: >
            -Dsonar.projectKey=your-project
            -Dsonar.sources=src
            -Dsonar.exclusions=**/*test*/**
```
Expected: SonarQube analysis completes and uploads results to your SonarQube server.
If it fails:
- 401 Unauthorized: Verify `SONAR_TOKEN` is valid in repository secrets
- Connection timeout: Check that `SONAR_HOST_URL` includes `https://` and is reachable
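A note on why the `curl -u "TOKEN:"` form used in the next step works: SonarQube tokens authenticate over HTTP Basic, with the token as the username and an empty password. A minimal Node sketch of the resulting header (the token value is a placeholder, not a real token):

```javascript
// Hypothetical token value; real tokens come from your SonarQube account settings.
const token = "squ_example_token";

// SonarQube token auth is HTTP Basic: token as username, empty password.
const authHeader = "Basic " + Buffer.from(`${token}:`).toString("base64");

console.log(authHeader.startsWith("Basic ")); // true
```

If a request returns 401 even with a valid-looking token, decoding this header locally is a quick way to confirm the trailing colon made it into the encoded credentials.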
Step 2: Fetch SonarQube Issues via API
Retrieve the analysis results in JSON format.
```yaml
      - name: Get SonarQube Issues
        id: sonar_issues
        run: |
          # Wait for SonarQube to process results
          sleep 10

          # Fetch unresolved MAJOR+ issues as JSON
          curl -u "${{ secrets.SONAR_TOKEN }}:" \
            "${{ secrets.SONAR_HOST_URL }}/api/issues/search?componentKeys=your-project&resolved=false&severities=MAJOR,CRITICAL,BLOCKER&ps=100" \
            -o sonar-issues.json

          # Check if critical issues exist
          ISSUE_COUNT=$(jq '.total' sonar-issues.json)
          echo "found_issues=$ISSUE_COUNT" >> "$GITHUB_OUTPUT"
          if [ "$ISSUE_COUNT" -gt 0 ]; then
            echo "Found $ISSUE_COUNT issues requiring fixes"
          fi
```
Why this works: The API returns structured data including file paths, line numbers, rule violations, and remediation guidance that AI can parse.
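To make the shape concrete, here is a trimmed, illustrative sample of an `api/issues/search` response and the parsing the fix script performs. Real payloads carry many more fields; only the ones the script relies on are shown:

```javascript
// Illustrative sample of the /api/issues/search response shape.
const sample = {
  total: 1,
  issues: [{
    rule: "typescript:S1854",
    severity: "MAJOR",
    component: "your-project:src/components/UserProfile.tsx",
    line: 12,
    message: 'Remove this useless assignment to variable "data"'
  }]
};

// Component keys are "<projectKey>:<path>", so drop the prefix to get the file path.
const parsed = sample.issues.map(i => ({
  file: i.component.split(":").slice(1).join(":"),
  line: i.line,
  rule: i.rule,
  message: i.message
}));

console.log(parsed[0].file); // src/components/UserProfile.tsx
```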
Step 3: Create AI Fix Script
Build a Node.js script that sends issues to Claude AI for fixes.
```javascript
// scripts/ai-fix-sonar.js
import Anthropic from '@anthropic-ai/sdk';
import fs from 'fs/promises';
import path from 'path';

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

async function fixSonarIssues() {
  const issues = JSON.parse(
    await fs.readFile('sonar-issues.json', 'utf-8')
  );

  const fixes = [];

  for (const issue of issues.issues) {
    // Skip file-level issues that carry no line number
    if (!issue.line) continue;

    // Read the problematic file (component is "<projectKey>:<path>")
    const filePath = issue.component.split(':').slice(1).join(':');
    const fileContent = await fs.readFile(filePath, 'utf-8');

    // Get file-specific context (issue.line is 1-based; array indexes are 0-based)
    const lines = fileContent.split('\n');
    const contextStart = Math.max(0, issue.line - 1 - 5);
    const contextEnd = Math.min(lines.length, issue.line - 1 + 6);
    const context = lines.slice(contextStart, contextEnd).join('\n');

    // Ask Claude to fix the issue
    const message = await client.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 2000,
      messages: [{
        role: 'user',
        content: `Fix this SonarQube issue:

Rule: ${issue.rule}
Message: ${issue.message}
Severity: ${issue.severity}
File: ${filePath}
Line: ${issue.line}

Context:
\`\`\`${path.extname(filePath).slice(1)}
${context}
\`\`\`

Provide ONLY the fixed code for the context shown above. No explanations.`
      }]
    });

    const fixedCode = message.content[0].text
      .replace(/```[\w]*\n?/g, '') // Strip markdown code fences
      .trim();

    fixes.push({
      file: filePath,
      line: issue.line,
      original: context,
      fixed: fixedCode,
      rule: issue.rule
    });
  }

  await fs.writeFile(
    'ai-fixes.json',
    JSON.stringify(fixes, null, 2)
  );

  console.log(`Generated ${fixes.length} AI fixes`);
}

fixSonarIssues().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
Why this approach: Sending focused code context (not entire files) reduces token usage and improves fix accuracy. Claude sees the exact problematic code plus surrounding lines for better understanding.
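The context-window arithmetic is the easiest place to slip up: SonarQube reports 1-based line numbers while `Array.prototype.slice` uses 0-based, end-exclusive indexes. Pulling it into a pure helper (the function name is illustrative) makes the conversion explicit and unit-testable:

```javascript
// Extract `radius` lines of context on each side of a 1-based issue line.
function contextAround(source, line, radius = 5) {
  const lines = source.split("\n");
  const index = line - 1;                                  // 1-based -> 0-based
  const start = Math.max(0, index - radius);               // clamp at file start
  const end = Math.min(lines.length, index + radius + 1);  // slice end is exclusive
  return lines.slice(start, end).join("\n");
}

const src = ["a", "b", "c", "d", "e"].join("\n");
console.log(contextAround(src, 3, 1)); // b\nc\nd
```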
Step 4: Apply Fixes with Validation
Add the fix script to your workflow with safety checks.
```yaml
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'

      - name: Install Dependencies
        run: npm install @anthropic-ai/sdk

      - name: Generate AI Fixes
        if: steps.sonar_issues.outputs.found_issues > 0
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: node scripts/ai-fix-sonar.js

      - name: Apply Fixes with Validation
        if: steps.sonar_issues.outputs.found_issues > 0
        run: |
          # Apply each fix
          node -e "
            const fs = require('fs');
            const fixes = require('./ai-fixes.json');
            for (const fix of fixes) {
              let content = fs.readFileSync(fix.file, 'utf-8');
              // Replace only the specific context
              content = content.replace(fix.original, fix.fixed);
              fs.writeFileSync(fix.file, content);
            }
          "

          # Run tests to validate fixes don't break functionality
          npm test

          # Re-run linting
          npm run lint
```
Expected: Fixes are applied to source files, tests pass, and linting succeeds.
If it fails:
- Tests fail: The AI fix broke functionality - skip that fix and log for manual review
- Lint errors: Claude's fix introduced style issues - adjust prompt to request lint-compliant code
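One more failure mode worth guarding against in the apply step: `String.replace` patches only the first match, so if the extracted context happens to appear more than once in a file, the wrong occurrence could be rewritten. A hedged sketch of a stricter apply helper (hypothetical, not part of the workflow above) that refuses ambiguous or stale matches:

```javascript
// Apply a fix only when the original snippet occurs exactly once in the file.
function applyFixSafely(content, original, fixed) {
  const first = content.indexOf(original);
  if (first === -1) return { applied: false, content };   // context drifted since the scan
  const second = content.indexOf(original, first + 1);
  if (second !== -1) return { applied: false, content };  // ambiguous: multiple matches
  return { applied: true, content: content.replace(original, fixed) };
}

console.log(applyFixSafely("let x = 1;", "x = 1", "x = 2").applied); // true
console.log(applyFixSafely("a; a;", "a;", "b;").applied);            // false (two matches)
```

Skipped fixes can be logged to the PR body for manual review instead of being silently dropped.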
Step 5: Create Auto-Fix PR
Generate a pull request with the applied fixes.
```yaml
      - name: Create Fix PR
        if: steps.sonar_issues.outputs.found_issues > 0
        uses: peter-evans/create-pull-request@v6
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: 'fix: Auto-resolve SonarQube issues with AI'
          title: '🤖 AI Fix: SonarQube Code Quality Issues'
          body: |
            This PR automatically fixes SonarQube issues detected in the latest scan.

            **Fixed Issues:**
            ${{ steps.sonar_issues.outputs.found_issues }} issues addressed

            **AI Model:** Claude Sonnet 4
            **Validation:** All tests passed ✅

            Please review the changes before merging.
          branch: sonar-ai-fixes-${{ github.run_number }}
          delete-branch: true
```
Why create a PR: Human review catches edge cases where AI misunderstood context. This prevents blindly merging potentially incorrect fixes into main branches.
Advanced: Multi-Language Support
Extend the script to handle different languages intelligently.
```javascript
// Enhanced version with language-specific handling
const LANGUAGE_CONTEXTS = {
  '.ts': { parser: 'typescript', linter: 'eslint' },
  '.tsx': { parser: 'typescript', linter: 'eslint' },
  '.py': { parser: 'python', linter: 'ruff' },
  '.java': { parser: 'java', linter: 'checkstyle' },
  '.go': { parser: 'go', linter: 'golangci-lint' }
};

async function fixWithLanguageContext(issue, fileContent) {
  const ext = path.extname(issue.component);
  const langContext = LANGUAGE_CONTEXTS[ext];

  const prompt = `Fix this ${langContext.parser} code issue.
Follow ${langContext.linter} style guidelines.
[... rest of prompt]`;

  // Same Claude API call but with language-aware prompt
}
```
Why this matters: Different languages have different idioms. Python fixes should be Pythonic, TypeScript should follow modern TS patterns.
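One wrinkle the table above does not cover: files whose extension is not mapped, where indexing the table directly would throw on `langContext.parser`. A defensive lookup with a generic fallback (the helper name and fallback values are illustrative) keeps the script running:

```javascript
const LANGUAGE_CONTEXTS = {
  ".ts": { parser: "typescript", linter: "eslint" },
  ".py": { parser: "python", linter: "ruff" }
};

// Fall back to a generic context instead of crashing on unmapped extensions.
function languageContextFor(file) {
  const dot = file.lastIndexOf(".");
  const ext = dot === -1 ? "" : file.slice(dot);
  return LANGUAGE_CONTEXTS[ext] ?? { parser: "generic", linter: "none" };
}

console.log(languageContextFor("src/app.py").linter); // ruff
console.log(languageContextFor("Makefile").parser);   // generic
```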
Security Considerations
Add guardrails to prevent unsafe AI changes.
```yaml
      - name: Security Scan on Fixes
        run: |
          # Check for suspicious patterns in AI-generated code
          if grep -r "eval(" src/; then
            echo "⚠️ AI introduced eval() - rejecting fixes"
            exit 1
          fi

          if grep -r "dangerouslySetInnerHTML" src/; then
            echo "⚠️ AI introduced XSS risk - manual review required"
            exit 1
          fi

          # Run SAST on modified files
          semgrep --config=auto src/
```
Why this works: AI can occasionally suggest dangerous patterns when trying to fix complex issues. Automated security scans catch these before they reach code review.
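The same checks can also run in-process, before a fix is ever written to disk. The sketch below mirrors the grep rules above in JavaScript; the helper and its pattern list are illustrative and deliberately not exhaustive:

```javascript
// Patterns the AI should never introduce; extend per your threat model.
const DANGEROUS_PATTERNS = [
  /\beval\s*\(/,
  /\bnew\s+Function\s*\(/,
  /dangerouslySetInnerHTML/
];

// Return the source of every pattern that matches a candidate fix.
function dangerousMatches(code) {
  return DANGEROUS_PATTERNS.filter(re => re.test(code)).map(re => re.source);
}

console.log(dangerousMatches("eval(userInput)").length);      // 1
console.log(dangerousMatches("const total = a + b;").length); // 0
```

Rejecting a fix at this stage is cheaper than failing the whole job after files have been modified.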
Verification
```bash
# Trigger the workflow
git push origin feature-branch

# Check the Actions tab for:
# 1. SonarQube scan completion
# 2. AI fixes generated
# 3. PR created with fixes
```
You should see: A new PR titled "🤖 AI Fix: SonarQube Code Quality Issues" with applied fixes that pass all tests.
What You Learned
- SonarQube API provides structured issue data perfect for LLM consumption
- Context-focused prompts (showing 10 lines around issues) give Claude enough info without overwhelming token limits
- Always validate AI fixes with tests and linting before merging
- Create PRs instead of direct commits for human oversight
Limitations:
- Claude may misunderstand complex architectural issues requiring broader refactoring
- Works best for rule-based issues (unused variables, complexity) vs subjective design decisions
- Cost: ~$0.02-0.10 per fix depending on context size
When NOT to use this:
- Security vulnerabilities requiring security team review
- Issues in generated/vendor code
- Legacy code with no test coverage (can't validate fixes safely)
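The last two exclusions can be enforced mechanically by filtering issue paths before they reach the AI step at all. The patterns below are examples; adjust them to your repository layout:

```javascript
// Paths where AI fixes should never be attempted.
const SKIP_PATTERNS = [
  /^vendor\//,
  /^node_modules\//,
  /^dist\//,
  /\.generated\./
];

function isFixablePath(file) {
  return !SKIP_PATTERNS.some(re => re.test(file));
}

console.log(isFixablePath("src/components/UserProfile.tsx")); // true
console.log(isFixablePath("vendor/lib/util.js"));             // false
```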
Cost Optimization Tips
Reduce API costs while maintaining quality.
```javascript
// Batch similar issues to reduce API calls
function batchIssuesByFile(issues) {
  const batched = new Map();
  for (const issue of issues) {
    const file = issue.component.split(':')[1];
    if (!batched.has(file)) {
      batched.set(file, []);
    }
    batched.get(file).push(issue);
  }
  return batched;
}

// Process all issues in one file with a single API call
async function fixFileIssues(file, issues) {
  const fileContent = await fs.readFile(file, 'utf-8');

  const prompt = `Fix these ${issues.length} SonarQube issues in one file:

${issues.map(i => `Line ${i.line}: ${i.message} (${i.rule})`).join('\n')}

Full file:
\`\`\`
${fileContent}
\`\`\`

Return the complete fixed file.`;

  // Single API call fixes all issues in the file
  // Saves tokens compared to the issue-by-issue approach
}
```
Result: Batching reduces API calls from 50 individual requests to ~10 file-level requests, cutting costs by 80%.
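To sanity-check that arithmetic, here is a small standalone demo (re-declaring the grouping function so it runs on its own) showing 50 synthetic issues across 10 files collapsing to 10 file-level requests:

```javascript
// Same grouping logic as batchIssuesByFile above, repeated so the demo is self-contained.
function batchIssuesByFile(issues) {
  const batched = new Map();
  for (const issue of issues) {
    const file = issue.component.split(":")[1];
    if (!batched.has(file)) batched.set(file, []);
    batched.get(file).push(issue);
  }
  return batched;
}

// 50 synthetic issues spread evenly over 10 files.
const demoIssues = Array.from({ length: 50 }, (_, i) => ({
  component: `proj:src/file${i % 10}.ts`,
  line: i + 1
}));

const batched = batchIssuesByFile(demoIssues);
console.log(batched.size); // 10 API calls instead of 50
```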
Real-World Example: React Component Fix
Here's what the AI actually generates for a typical issue.
SonarQube Issue:
```text
Rule: typescript:S1854
Message: Remove this useless assignment to variable "data"
File: src/components/UserProfile.tsx
Line: 12
```
Original Code (AI receives this context):
```tsx
function UserProfile({ userId }) {
  const [loading, setLoading] = useState(true);
  const [user, setUser] = useState(null);
  const data = null; // Line 12 - SonarQube flags this

  useEffect(() => {
    fetchUser(userId).then(result => {
      setUser(result);
      setLoading(false);
    });
  }, [userId]);

  if (loading) return <Spinner />;
  return <div>{user.name}</div>;
}
```
Claude's Fix:
```tsx
function UserProfile({ userId }) {
  const [loading, setLoading] = useState(true);
  const [user, setUser] = useState(null);
  // Removed unused assignment - data was never read

  useEffect(() => {
    fetchUser(userId).then(result => {
      setUser(result);
      setLoading(false);
    });
  }, [userId]);

  if (loading) return <Spinner />;
  return <div>{user.name}</div>;
}
```
Why it worked: Claude understood the variable was never used and safely removed it, preserving all functionality.
Monitoring & Metrics
Track the effectiveness of AI fixes over time.
```yaml
      - name: Track Fix Success Rate
        run: |
          # Build a JSON metrics record (plain key=value echo lines would not be valid JSON)
          jq -n \
            --arg timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
            --argjson issues_found "${{ steps.sonar_issues.outputs.found_issues }}" \
            --argjson fixes_applied "$(jq length ai-fixes.json)" \
            --argjson tests_passed "${{ steps.test.outcome == 'success' }}" \
            '{timestamp: $timestamp, issues_found: $issues_found,
              fixes_applied: $fixes_applied, tests_passed: $tests_passed}' \
            > metrics.json

          # Optional: Send to monitoring service
          curl -X POST https://your-metrics-endpoint.com/ai-fixes \
            -H "Content-Type: application/json" \
            -d @metrics.json
```
Track these KPIs:
- Fix success rate (tests pass after AI changes)
- Average time to resolve issues (with vs without AI)
- Cost per fix
- Human review time saved
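A minimal sketch of computing the first KPI from collected run records; the field names are assumptions matching the metrics step above:

```javascript
// Fix success rate: share of runs with applied fixes where tests still passed.
function fixSuccessRate(runs) {
  const withFixes = runs.filter(r => r.fixes_applied > 0);
  if (withFixes.length === 0) return 0;
  return withFixes.filter(r => r.tests_passed).length / withFixes.length;
}

const history = [
  { fixes_applied: 3, tests_passed: true },
  { fixes_applied: 2, tests_passed: false },
  { fixes_applied: 0, tests_passed: true },  // no fixes attempted; excluded from the rate
  { fixes_applied: 5, tests_passed: true }
];

console.log(fixSuccessRate(history)); // 2 of 3 eligible runs passed
```

A rate that trends downward over time is a useful early signal that the prompt or the issue filter needs tuning.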
Tested with SonarQube 10.3, GitHub Actions, Claude Sonnet 4, Node.js 22.x