# Problem: Your REST API Needs GraphQL But You Don't Have Months
Your frontend team wants GraphQL for flexible queries. Your backend is 50+ REST endpoints built over three years. Manual migration would take months you don't have.
You'll learn:
- How to use AI to generate GraphQL schemas from REST endpoints
- Automated resolver creation with type safety
- Testing strategies that catch 90% of issues before production
- When AI shortcuts will hurt you later
Time: 2 days | Level: Intermediate
## Why This Happens
REST APIs grow organically - each endpoint solves one problem. GraphQL requires upfront schema design. Rewriting manually means understanding every endpoint's contract, edge cases, and undocumented behavior.
Common symptoms:
- 200+ REST routes across multiple services
- Inconsistent response formats
- Undocumented query parameters
- No OpenAPI/Swagger specs
Why AI helps: LLMs excel at pattern recognition across codebases. They spot inconsistencies you'd miss and generate boilerplate 10x faster than humans.
## Solution

### Step 1: Audit Your REST API
First, let AI understand what you actually have.
```bash
# If you already have an OpenAPI spec, bundle it into one inventory file
npx @openapi-contrib/openapi-cli@latest bundle \
  -o api-inventory.json \
  openapi.yaml

# Or, with plain Express routes, list them directly
grep -rE "app\.(get|post|put|delete)" src/ > routes.txt
```
Expected: JSON file or text list of all endpoints.
Feed this to Claude:
```ts
// create-prompt.ts
import { readFileSync } from "node:fs";

// Load the inventory produced in the previous step
const apiInventory = readFileSync("api-inventory.json", "utf8");

const prompt = `
Analyze this REST API and create a migration plan:

${apiInventory}

For each endpoint, identify:
1. Resource type (User, Post, Comment, etc.)
2. Relationships between resources
3. Query patterns (filtering, pagination, sorting)
4. Authentication requirements
5. Potential GraphQL mutations vs queries
`;
```
Why this works: AI sees patterns across hundreds of endpoints faster than you can, and groups related endpoints into GraphQL types.
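The grouping the prompt asks for can be sketched in plain TypeScript to make the idea concrete — bucket routes by their first path segment. The route strings below are hypothetical examples, and real APIs will need a smarter heuristic than one regex:

```typescript
// Group REST routes by resource so related endpoints map to one GraphQL type.
// Assumes routes look like "METHOD /api/<resource>/..." -- a simplification.
function groupByResource(routes: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const route of routes) {
    // "GET /api/users/:id/posts" -> resource is "users"
    const match = route.match(/^[A-Z]+ \/api\/([a-z]+)/);
    const resource = match ? match[1] : "unknown";
    const bucket = groups.get(resource) ?? [];
    bucket.push(route);
    groups.set(resource, bucket);
  }
  return groups;
}

const grouped = groupByResource([
  "GET /api/users/:id",
  "GET /api/users/:id/posts",
  "POST /api/posts",
]);
console.log([...grouped.keys()]); // → [ 'users', 'posts' ]
```

Doing this pass yourself first — even roughly — gives you a checklist to verify the AI's migration plan against.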
### Step 2: Generate GraphQL Schema
Use AI to create your base schema with relationships.
```ts
// schema-generator.ts
import { readFileSync } from "node:fs";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

async function generateSchema(endpoints: string) {
  const message = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 4000,
    messages: [
      {
        role: "user",
        content: `Generate a GraphQL schema from these REST endpoints.

Requirements:
- Use proper GraphQL types (ID, String, Int, Float, Boolean)
- Add relationships (User -> Posts, Post -> Comments)
- Include pagination arguments (first, after, last, before)
- Add filter inputs for common query params
- Preserve authentication context

Endpoints:
${endpoints}

Output only valid GraphQL SDL.`,
      },
    ],
  });

  // Content blocks are a union type; only text blocks carry .text
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}

generateSchema(readFileSync("routes.txt", "utf8")).then(console.log);
```
Run it:
```bash
npx tsx schema-generator.ts > schema.graphql
```
You should see:
```graphql
scalar DateTime

type User {
  id: ID!
  email: String!
  posts(first: Int, after: String): PostConnection!
  createdAt: DateTime!
}

type Post {
  id: ID!
  title: String!
  author: User!
  comments: [Comment!]!
}

type Query {
  user(id: ID!): User
  users(filter: UserFilter, first: Int): UserConnection!
  post(id: ID!): Post
}

input UserFilter {
  email: String
  createdAfter: DateTime
}

# Comment and the Connection types (UserConnection, PostConnection)
# follow the same pattern and are omitted here
```
If it fails:
- Too generic: Add example request/response pairs to prompt
- Missing relationships: Explicitly list foreign keys in your data
- Wrong types: Provide your database schema
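A cheap way to catch the "missing relationships" and "wrong types" failures before wiring anything up is to scan the generated SDL for types that are referenced but never defined (the model emitting `PostConnection` without defining it is a common miss). This is a rough regex heuristic I'm sketching here, not a substitute for parsing the SDL with the `graphql` package:

```typescript
// Flag output types referenced in fields but never defined in the SDL.
// Heuristic only -- real validation should use graphql's buildSchema().
const BUILT_INS = new Set(["ID", "String", "Int", "Float", "Boolean"]);

function undefinedTypeRefs(sdl: string): string[] {
  // Names introduced by a definition keyword
  const defined = new Set(
    [...sdl.matchAll(/(?:type|input|enum|interface|union|scalar)\s+(\w+)/g)]
      .map((m) => m[1])
  );
  // Names appearing after a colon (field and argument types)
  const referenced = [...sdl.matchAll(/:\s*\[?(\w+)/g)].map((m) => m[1]);
  return [...new Set(referenced)].filter(
    (t) => !defined.has(t) && !BUILT_INS.has(t)
  );
}

const sdl = "type User { id: ID! posts: PostConnection! }";
console.log(undefinedTypeRefs(sdl)); // → [ 'PostConnection' ]
```

Anything this flags goes straight back into the prompt as "define these types explicitly."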
### Step 3: Generate Resolvers with AI
Don't write resolvers manually. Let AI convert REST calls to GraphQL resolvers.
```ts
// resolver-generator.ts
// Reuses the same Anthropic `client` as schema-generator.ts
async function generateResolvers(schema: string, restCode: string) {
  const message = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 8000,
    messages: [
      {
        role: "user",
        content: `Convert REST API handlers to GraphQL resolvers.

GraphQL Schema:
${schema}

Existing REST Code:
${restCode}

Generate TypeScript resolvers that:
1. Call existing REST handlers (don't rewrite business logic)
2. Handle errors with proper GraphQL error format
3. Use DataLoader for N+1 prevention
4. Preserve authentication context
5. Add TypeScript types from @graphql-codegen

Format as executable TypeScript for Apollo Server 4.`,
      },
    ],
  });

  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}
```
Why this works: Your REST handlers already have business logic. Resolvers just translate GraphQL queries into those existing functions.
Example output:
```ts
// resolvers/user.ts
import { GraphQLError } from "graphql";
import { UserAPI } from "../rest/userAPI";

export const userResolvers = {
  Query: {
    user: async (_parent, { id }, context) => {
      if (!context.user) {
        throw new GraphQLError("Unauthorized", {
          extensions: { code: "UNAUTHENTICATED" },
        });
      }

      // Call existing REST handler
      const user = await UserAPI.getById(id);
      if (!user) {
        throw new GraphQLError("User not found", {
          extensions: { code: "NOT_FOUND" },
        });
      }
      return user;
    },
  },

  User: {
    posts: async (parent, { first, after }, context) => {
      // Use existing REST pagination
      return context.loaders.posts.load({
        userId: parent.id,
        limit: first,
        cursor: after,
      });
    },
  },
};
```
Critical: Review all error handling. AI might use generic try-catch blocks. Replace with proper GraphQL errors.
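One pattern for that review pass is to centralize the REST-status-to-GraphQL-error mapping, so no resolver can quietly swallow an error in a generic try-catch. The sketch below returns a plain object shape rather than constructing `graphql`'s `GraphQLError`, just to stay dependency-free; the code strings follow Apollo's conventional extension codes:

```typescript
// Map REST HTTP status codes to conventional GraphQL error extension codes.
// Unknown statuses fall through to INTERNAL_SERVER_ERROR rather than leaking
// backend details to clients.
const STATUS_TO_CODE: Record<number, string> = {
  400: "BAD_USER_INPUT",
  401: "UNAUTHENTICATED",
  403: "FORBIDDEN",
  404: "NOT_FOUND",
};

function toGraphQLErrorShape(status: number, message: string) {
  return {
    message,
    extensions: { code: STATUS_TO_CODE[status] ?? "INTERNAL_SERVER_ERROR" },
  };
}

console.log(toGraphQLErrorShape(404, "User not found").extensions.code);
// → NOT_FOUND
```

In a real codebase you would wrap this in a `GraphQLError` constructor and call it from every resolver that touches a REST handler.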
### Step 4: Set Up DataLoader (AI-Assisted)
Prevent N+1 queries with batching.
```ts
// Ask AI to generate DataLoaders
const prompt = `
Create DataLoader instances for these GraphQL types:

${schema}

Use these REST endpoints for batching:

${endpoints}

Requirements:
- Batch similar queries within 10ms window
- Cache per-request (not globally)
- Handle null results gracefully
`;
```
AI generates:
```ts
// dataloaders.ts
import DataLoader from "dataloader";
import { PostAPI } from "../rest/postAPI";
import { UserAPI } from "../rest/userAPI";

export function createLoaders() {
  return {
    posts: new DataLoader(async (userIds: readonly string[]) => {
      // Batch-fetch posts for multiple users in one call
      const posts = await PostAPI.getByUserIds([...userIds]);
      // Map results back to input order -- DataLoader requires it
      return userIds.map((id) => posts.filter((p) => p.userId === id));
    }),
    users: new DataLoader(async (ids: readonly string[]) => {
      const users = await UserAPI.getByIds([...ids]);
      return ids.map((id) => users.find((u) => u.id === id) || null);
    }),
  };
}
```
Test it:
```graphql
query {
  users(first: 10) {
    nodes {
      email
      posts {
        title
      }
    }
  }
}
```
Should make: 2 database queries (users batch + posts batch), not 11.
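The batch function's contract is the subtle part: it must return exactly one result per input key, in input order, with empty arrays (never holes) for users with no posts. That invariant can be checked in isolation, without the `dataloader` package — the grouping below mirrors the `posts` loader above:

```typescript
// The core of a DataLoader batch function for one-to-many relations:
// one slot per input id, preserving input order.
interface Post {
  userId: string;
  title: string;
}

function groupPostsByUser(userIds: readonly string[], posts: Post[]): Post[][] {
  // Users with no posts get an empty array, never undefined
  return userIds.map((id) => posts.filter((p) => p.userId === id));
}

const out = groupPostsByUser(["a", "b"], [{ userId: "b", title: "hi" }]);
console.log(out); // → [ [], [ { userId: 'b', title: 'hi' } ] ]
```

If the AI-generated batch function sorts, dedupes, or drops empty results, DataLoader will silently attach posts to the wrong users — this is worth a unit test per loader.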
### Step 5: Generate Integration Tests
Use AI to create test cases from your schema.
```ts
// test-generator.ts
async function generateTests(schema: string) {
  const message = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 6000,
    messages: [
      {
        role: "user",
        content: `Generate Vitest integration tests for this GraphQL API.

Schema:
${schema}

Create tests for:
- Happy path queries
- Pagination edge cases
- Authentication failures
- Not found scenarios
- Input validation

Use ApolloServer's executeOperation method for execution.`,
      },
    ],
  });

  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}
```
Output:
```ts
// tests/user.test.ts
import { describe, it, expect, assert } from "vitest";
import { createTestServer } from "./helpers";

describe("User Queries", () => {
  it("fetches user by ID", async () => {
    const server = createTestServer();
    const result = await server.executeOperation({
      query: `
        query GetUser($id: ID!) {
          user(id: $id) {
            email
            posts { title }
          }
        }
      `,
      variables: { id: "123" },
    });

    // Narrow the union type before reading singleResult
    assert(result.body.kind === "single");
    expect(result.body.singleResult.data?.user).toBeDefined();
  });

  it("returns error for unauthorized access", async () => {
    const server = createTestServer({ user: null }); // No auth context
    const result = await server.executeOperation({
      query: `{ user(id: "123") { email } }`,
    });

    assert(result.body.kind === "single");
    expect(result.body.singleResult.errors?.[0].extensions?.code).toBe(
      "UNAUTHENTICATED"
    );
  });
});
```
Run tests:
```bash
npm run test
```
You should see: 90% pass rate. The 10% that fail reveal edge cases AI missed.
### Step 6: Parallel Deployment Strategy
Run GraphQL alongside REST during migration.
```ts
// server.ts
import { readFileSync } from "node:fs";
import express from "express";
import { ApolloServer } from "@apollo/server";
import { expressMiddleware } from "@apollo/server/express4";
import { restRoutes } from "./rest";
import { resolvers } from "./resolvers";
import { createLoaders } from "./dataloaders";

const typeDefs = readFileSync("schema.graphql", "utf8");
const app = express();

// Keep existing REST API
app.use("/api", restRoutes);

// Add GraphQL endpoint
const apolloServer = new ApolloServer({ typeDefs, resolvers });
await apolloServer.start();

app.use(
  "/graphql",
  express.json(),
  expressMiddleware(apolloServer, {
    context: async ({ req }) => ({
      user: (req as any).user, // Reuse the REST auth middleware's user
      loaders: createLoaders(), // Fresh loaders per request
    }),
  })
);

app.listen(4000);
```
Why this works: Frontend teams migrate routes one at a time. REST stays live until 100% GraphQL coverage.
Monitor both:
```ts
// Add metrics (requires an Apollo GraphOS API key)
import { ApolloServerPluginUsageReporting } from "@apollo/server/plugin/usageReporting";

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    ApolloServerPluginUsageReporting({
      // Caution: { all: true } reports variable values, which may include PII
      sendVariableValues: { all: true },
    }),
  ],
});
```
## Verification
Test GraphQL coverage:
```bash
# Check how much of the schema your frontend queries actually exercise
npm install -g graphql-inspector
graphql-inspector coverage schema.graphql \
  --query "src/**/*.graphql" \
  --threshold 80
```
You should see: 80%+ schema coverage by frontend queries.
Load test:
```bash
# Compare REST vs GraphQL performance
npx autocannon -c 100 -d 30 http://localhost:4000/api/users
npx autocannon -c 100 -d 30 -m POST \
  -H "Content-Type: application/json" \
  -b '{"query":"{ users { email } }"}' \
  http://localhost:4000/graphql
```
Expected: GraphQL should be within 20% of REST latency. If slower, check DataLoader usage.
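The "within 20%" rule of thumb (this guide's threshold, not a universal standard) is easy to turn into a CI gate once you parse the latency numbers out of autocannon's output:

```typescript
// Gate a deploy on GraphQL latency staying within a tolerance of the REST
// baseline. Inputs are e.g. p99 latencies in milliseconds from autocannon.
function withinLatencyBudget(
  restMs: number,
  graphqlMs: number,
  tolerance = 0.2
): boolean {
  return graphqlMs <= restMs * (1 + tolerance);
}

console.log(withinLatencyBudget(100, 115)); // → true  (15% slower: acceptable)
console.log(withinLatencyBudget(100, 130)); // → false (30% slower: investigate)
```

A failure here usually means a resolver is bypassing its DataLoader, so check query counts before touching anything else.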
## What You Learned
- AI accelerates boilerplate generation by 10x (schema, resolvers, tests)
- DataLoader is non-negotiable for production GraphQL
- Parallel deployment lets you migrate incrementally
- AI-generated code needs human review for error handling and edge cases
Limitations:
- AI won't catch business logic bugs in your REST API
- Complex authorization rules need manual review
- Performance tuning still requires profiling tools
When NOT to use this approach:
- Your REST API has no docs (AI will hallucinate contracts)
- Different services use incompatible data models
- You need real-time subscriptions (requires WebSocket setup)
## Weekend Timeline
Saturday (8 hours):
- Hour 1-2: API audit + schema generation
- Hour 3-5: Resolver generation + DataLoader setup
- Hour 6-8: Test generation + initial debugging
Sunday (8 hours):
- Hour 1-3: Fix failing tests + edge cases
- Hour 4-6: Performance testing + optimization
- Hour 7-8: Deploy parallel endpoints + monitoring
Reality check: This assumes a well-documented REST API. Add 50% time if docs are missing.
## AI Tool Comparison
| Tool | Best For | Weakness |
|---|---|---|
| Claude Sonnet 4 | Complex schema design, resolver logic | Slower than GPT-4 |
| GPT-4o | Fast iteration, test generation | Less consistent type handling |
| GitHub Copilot | Inline resolver completion | No full-file generation |
| Cursor | Refactoring existing code | Requires local setup |
Recommendation: Use Claude for architecture, GPT-4o for tests, Copilot for manual fixes.
*Tested with Apollo Server 4.10, Node.js 22.x, TypeScript 5.5, GraphQL 16.8*