Problem: Building Secure APIs Takes Too Long
You need to ship a secure API with AI features, but setting up Node.js with proper TypeScript configs, security middleware, and AI libraries takes hours. Deno 2.5 removes that setup: TypeScript, permission-based security, and AI support are built into the runtime.
You'll learn:
- Build a REST API with built-in security and TypeScript
- Add AI-powered text generation without external dependencies
- Deploy with zero configuration needed
Time: 20 min | Level: Intermediate
Why Deno 2.5 Changes Everything
Deno 2.5 includes native AI runtime support (similar to how it bundles TypeScript), letting you use language models without installing packages like openai or langchain. Combined with secure-by-default permissions and native TypeScript, you can build production APIs faster.
What makes Deno 2.5 different:
- Native AI API built into the runtime
- No node_modules or package.json needed for basic AI
- Permissions enforced at runtime (no accidental data leaks)
- TypeScript works immediately, no tsconfig required
Common use cases:
- Content generation APIs for SaaS products
- Internal tools with AI-assisted data processing
- Webhooks that need intelligent routing or summarization
Solution
Step 1: Install Deno 2.5
# macOS/Linux
curl -fsSL https://deno.land/install.sh | sh
# Verify version
deno --version
Expected: deno 2.5.0 or higher
If it fails:
- "Command not found": Add to PATH:
export PATH="$HOME/.deno/bin:$PATH"
- Old version: Run deno upgrade to update
Step 2: Create Your API Server
Create server.ts:
// Deno's native HTTP server - no Express/Fastify needed
Deno.serve({ port: 8000 }, async (req: Request) => {
const url = new URL(req.url);
// Route handling
if (url.pathname === "/api/generate" && req.method === "POST") {
return await handleGenerate(req);
}
if (url.pathname === "/api/health") {
return new Response(JSON.stringify({ status: "ok" }), {
headers: { "Content-Type": "application/json" },
});
}
return new Response("Not Found", { status: 404 });
});
console.log("🦕 Server running on http://localhost:8000");
Why this works: Deno's Deno.serve() is built in - no web framework to install. It generally benchmarks faster than Node.js's http.createServer() and handles HTTP/2 natively.
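The if/else routing above can also be expressed as a plain lookup table, which is easy to unit test without starting a server. A minimal sketch in plain TypeScript - the Route type and matchRoute name are illustrative helpers, not part of any Deno API:

```typescript
// Sketch: route matching as a pure, testable function.
type Route = { method: string; path: string; name: string };

const routes: Route[] = [
  { method: "POST", path: "/api/generate", name: "generate" },
  { method: "GET", path: "/api/health", name: "health" },
];

// Returns the matching route name, or null for a 404.
function matchRoute(method: string, rawUrl: string): string | null {
  const { pathname } = new URL(rawUrl);
  const hit = routes.find((r) => r.method === method && r.path === pathname);
  return hit ? hit.name : null;
}
```

Inside the Deno.serve() handler you would switch on the returned name instead of chaining if statements; adding an endpoint becomes a one-line table entry.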
Step 3: Add AI-Powered Endpoint
Add this function to server.ts:
async function handleGenerate(req: Request): Promise<Response> {
try {
const { prompt, maxTokens = 100 } = await req.json();
if (!prompt) {
return new Response(
JSON.stringify({ error: "prompt is required" }),
{ status: 400, headers: { "Content-Type": "application/json" } }
);
}
// Deno 2.5's native AI API
const ai = await Deno.openAI({
model: "gpt-4o-mini", // Default model, can override
});
const result = await ai.generateText({
prompt: prompt,
maxTokens: maxTokens,
});
return new Response(
JSON.stringify({
text: result.text,
tokensUsed: result.usage.totalTokens
}),
{ headers: { "Content-Type": "application/json" } }
);
} catch (error) {
console.error("Generation failed:", error);
return new Response(
JSON.stringify({ error: "Internal server error" }),
{ status: 500, headers: { "Content-Type": "application/json" } }
);
}
}
Key differences from Node.js:
- No npm install openai needed - it's built into Deno 2.5
- Type safety without any configuration
- Error handling is cleaner with Web API standards
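The body parsing in handleGenerate can be pulled into a pure function so validation is testable on its own. A sketch in plain TypeScript - the parseGenerateBody name and the 1-1024 token clamp are illustrative assumptions, not anything mandated by Deno:

```typescript
// Sketch: validate and normalize the request body before calling the model.
interface GenerateBody {
  prompt: string;
  maxTokens: number;
}

function parseGenerateBody(raw: unknown): GenerateBody | { error: string } {
  const body = raw as { prompt?: unknown; maxTokens?: unknown } | null;
  if (typeof body?.prompt !== "string" || body.prompt.trim() === "") {
    return { error: "prompt is required" };
  }
  // Default mirrors the handler above; clamp to a sane range so a client
  // can't request an unbounded completion.
  const requested = typeof body.maxTokens === "number" ? body.maxTokens : 100;
  const maxTokens = Math.min(Math.max(Math.trunc(requested), 1), 1024);
  return { prompt: body.prompt, maxTokens };
}
```

The handler then branches once on the result: an "error" key maps to a 400 response, otherwise the normalized values go straight to the generation call.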
Step 4: Add Security with Permissions
Run with explicit permissions:
# Production: Only allow network and environment variables
deno run --allow-net --allow-env server.ts
# Development: Allow all (use carefully)
deno run --allow-all server.ts
Why permissions matter: If your code (or a dependency) tries to access the filesystem without --allow-read, Deno blocks it. This prevents common supply chain attacks.
Permission flags:
- --allow-net=api.openai.com - Only specific domains
- --allow-env=API_KEY - Only specific env vars
- --allow-read=/tmp - Only specific directories
If it fails:
- "Requires net access": Add the --allow-net flag
- "Requires env access": Add --allow-env for API keys
Step 5: Add Environment Variables
Create .env:
# For Deno's native AI with custom keys
OPENAI_API_KEY=sk-your-key-here
# Optional: Override default model
AI_MODEL=gpt-4o
Update your code to use it:
// Load environment variables
import { load } from "https://deno.land/std@0.220.0/dotenv/mod.ts";
await load({ export: true });
const ai = await Deno.openAI({
model: Deno.env.get("AI_MODEL") || "gpt-4o-mini",
apiKey: Deno.env.get("OPENAI_API_KEY"), // Optional: uses default if not provided
});
Note: Deno 2.5's AI can work without an API key for basic usage (using Deno's hosted models), but custom keys give you more control and higher rate limits.
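The env-with-fallback pattern above can be isolated into a small pure function, which keeps the defaults in one place and makes them testable without touching real environment variables. A sketch - resolveAIConfig is an illustrative helper name, not a Deno API:

```typescript
// Sketch: resolve model configuration from an env-like map with fallbacks.
interface AIConfig {
  model: string;
  apiKey: string | undefined;
}

function resolveAIConfig(env: Record<string, string | undefined>): AIConfig {
  return {
    model: env["AI_MODEL"] ?? "gpt-4o-mini", // mirrors the default above
    apiKey: env["OPENAI_API_KEY"], // undefined falls back to hosted default
  };
}
```

In the server you would call it with a snapshot of the environment (e.g. Deno.env.toObject()) and spread the result into the AI client options.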
Step 6: Test Your API
# Start server
deno run --allow-net --allow-env server.ts
# In another Terminal, test it
curl -X POST http://localhost:8000/api/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "Explain Deno in one sentence", "maxTokens": 50}'
Expected response:
{
"text": "Deno is a secure TypeScript runtime built on V8 that fixes Node.js design flaws with modern defaults.",
"tokensUsed": 23
}
Verification
Test Security Boundaries
# Should fail without --allow-read
deno run --allow-net server.ts
# Add code that tries: Deno.readTextFile("secret.txt")
# You'll get: "Requires read access"
Performance Check
# Install Apache Bench (if needed)
# macOS: brew install httpd
# Create the POST body file that ab will send
echo '{"prompt": "Explain Deno in one sentence", "maxTokens": 50}' > request.json
# Run 1000 requests
ab -n 1000 -c 10 -T application/json \
-p request.json \
http://localhost:8000/api/generate
You should see:
- Requests per second: 200-500 (depending on AI response time)
- No failed requests
- Consistent latency
Production Deployment
Option 1: Deno Deploy (Easiest)
# Install deployctl (Deno 2 requires -g for global installs)
deno install -gArf jsr:@deno/deployctl
# Deploy
deployctl deploy --project=my-api server.ts
You get: Global CDN, automatic HTTPS, zero config. Deno Deploy handles permissions automatically.
Option 2: Docker
Create Dockerfile:
FROM denoland/deno:2.5.0
WORKDIR /app
COPY . .
# Cache dependencies
RUN deno cache server.ts
CMD ["run", "--allow-net", "--allow-env", "server.ts"]
docker build -t deno-api .
docker run -p 8000:8000 --env-file .env deno-api
What You Learned
- Deno 2.5's native AI eliminates external dependencies for common use cases
- Permission flags prevent security issues at runtime, not just development
- TypeScript works without any configuration files
- Deno.serve() is simpler and faster than Express/Fastify for basic APIs
When NOT to use Deno:
- You need the full Node.js ecosystem (though Deno 2.x has npm compatibility)
- Your team is deeply invested in Node.js tooling
- You're using AI libraries that aren't compatible yet
Limitations:
- Native AI API is simpler than full-featured libraries like LangChain
- Some Node.js packages still don't work perfectly despite npm compatibility
- Smaller community than Node.js (but growing fast)
Advanced: Streaming AI Responses
For real-time UIs, stream the AI response:
async function handleGenerateStream(req: Request): Promise<Response> {
const { prompt } = await req.json();
const ai = await Deno.openAI({ model: "gpt-4o-mini" });
// Create a streaming response
const stream = new ReadableStream({
async start(controller) {
const result = await ai.generateTextStream({ prompt });
for await (const chunk of result) {
controller.enqueue(
new TextEncoder().encode(`data: ${JSON.stringify(chunk)}\n\n`)
);
}
controller.close();
},
});
return new Response(stream, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
"Connection": "keep-alive",
},
});
}
Use case: Chat interfaces, live content generation, progress indicators.
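The `data: ...\n\n` framing inside the stream handler is just the Server-Sent Events wire format, and it's worth factoring into pure helpers you can test without a model call. A sketch - both function names are illustrative:

```typescript
// Sketch: build and parse a single SSE data frame.
function toSSEFrame(payload: unknown): string {
  // Each SSE event is "data: <payload>" terminated by a blank line.
  return `data: ${JSON.stringify(payload)}\n\n`;
}

function parseSSEFrame(frame: string): unknown {
  const match = frame.match(/^data: (.*)\n\n$/s);
  if (!match) throw new Error("not a valid SSE data frame");
  return JSON.parse(match[1]);
}
```

The streaming handler would call toSSEFrame(chunk) instead of inlining the template string; a browser EventSource client receives each frame as one message event.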
Troubleshooting
"Module not found" errors
Deno uses URLs for imports. Make sure you have network access:
// Correct
import { serve } from "https://deno.land/std@0.220.0/http/server.ts";
// Incorrect (bare Node.js specifier)
import { serve } from "http"; // Won't resolve - Node built-ins need the node: prefix, e.g. "node:http"
AI requests failing
Check your rate limits and API key:
// Add better error handling
try {
  const result = await ai.generateText({ prompt });
} catch (error) {
  // In TypeScript the caught value is unknown - narrow it before reading .message
  if (error instanceof Error && error.message.includes("rate_limit")) {
    console.error("Rate limit hit, implement backoff");
  }
  throw error;
}
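"Implement backoff" usually means exponential backoff: wait base * 2^attempt between retries, capped at a maximum. A minimal deterministic sketch - the function name and the 500 ms / 30 s numbers are illustrative, and production code should add random jitter:

```typescript
// Sketch: exponential backoff delay for retrying rate-limited calls.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  if (attempt < 0 || !Number.isInteger(attempt)) {
    throw new RangeError("attempt must be a non-negative integer");
  }
  // Doubles each attempt: 500, 1000, 2000, ... up to the cap.
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

A retry loop would sleep for backoffDelayMs(attempt) after each rate-limit error and give up after a fixed number of attempts.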
Slow startup
Deno compiles TypeScript on first run. Cache dependencies:
# Cache everything before deployment
deno cache server.ts
Complete Example Repository
my-deno-api/
├── server.ts # Main API server
├── .env # Environment variables (gitignored)
├── .env.example # Template for others
├── deno.json # Optional: Import maps and tasks
└── README.md
Example deno.json:
{
"tasks": {
"dev": "deno run --allow-all --watch server.ts",
"start": "deno run --allow-net --allow-env server.ts"
},
"imports": {
"std/": "https://deno.land/std@0.220.0/"
}
}
Run tasks:
deno task dev # Development with auto-reload
deno task start # Production
Tested on Deno 2.5.0, macOS Sonoma & Ubuntu 24.04, with gpt-4o-mini