Picture this: You're building a web application and suddenly realize you need AI features, but ChatGPT's API costs are eating your budget faster than a developer consumes coffee. Enter Ollama - the local AI solution that runs on your server without sending data to external APIs or charging per token.
This tutorial shows you how to integrate Ollama AI models into your Laravel application with practical PHP code examples, complete API setup, and deployment strategies.
What You'll Learn
- Install and configure Ollama with Laravel
- Build AI-powered features using PHP
- Create custom AI endpoints and controllers
- Handle streaming responses and error management
- Deploy your AI-integrated application
Prerequisites and Setup Requirements
Before diving into Laravel Ollama integration, ensure your development environment meets these requirements:
- PHP 8.1 or higher
- Laravel 9.0 or newer
- Composer for dependency management
- At least 4GB RAM (8GB recommended for larger models)
- Linux, macOS, or Windows with WSL2
Installing Ollama on Your Development Server
Ollama installation varies by operating system. Here's the streamlined approach for each platform:
Linux Installation
# Download and install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama service
sudo systemctl start ollama
sudo systemctl enable ollama
# Verify installation
ollama --version
macOS Installation
# Install using Homebrew
brew install ollama
# Or download the macOS app from the official website (ollama.com/download)
# Note: the curl install script is intended for Linux, not macOS
# Start Ollama
ollama serve
Windows Installation
Download the Ollama installer from the official website or use Windows Subsystem for Linux (WSL2) with the Linux installation steps.
Downloading and Managing AI Models
Ollama supports various AI models. Start with lightweight options for development:
# Download Llama 2 (7B parameters - good for development)
ollama pull llama2
# Download Mistral (7B parameters - fast and efficient)
ollama pull mistral
# Download Code Llama (for code generation)
ollama pull codellama
# List installed models
ollama list
# Test model interaction
ollama run llama2 "Hello, how are you?"
Creating Your Laravel Application Structure
Generate a new Laravel project or use an existing one:
# Create new Laravel project
composer create-project laravel/laravel laravel-ollama-app
# Navigate to project directory
cd laravel-ollama-app
# Install HTTP client for API calls
composer require guzzlehttp/guzzle
Building the Ollama Service Class
Create a dedicated service class to handle Ollama API communications:
# Generate service class (make:class requires Laravel 11+;
# on Laravel 9/10, create app/Services/OllamaService.php by hand)
php artisan make:class Services/OllamaService
<?php
// app/Services/OllamaService.php
namespace App\Services;
use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;
use Illuminate\Support\Facades\Log;
class OllamaService
{
protected $client;
protected $baseUrl;
public function __construct()
{
$this->client = new Client([
'timeout' => 120, // 2 minutes timeout for AI responses
'connect_timeout' => 10,
]);
$this->baseUrl = config('ollama.base_url', 'http://localhost:11434');
}
/**
* Swap the HTTP client (lets unit tests inject a mock handler)
*/
public function setClient(Client $client): void
{
$this->client = $client;
}
/**
* Generate text using specified model
*/
public function generate(string $model, string $prompt, array $options = []): array
{
try {
$response = $this->client->post($this->baseUrl . '/api/generate', [
'json' => [
'model' => $model,
'prompt' => $prompt,
'stream' => false,
'options' => $options
],
'headers' => [
'Content-Type' => 'application/json',
]
]);
$body = json_decode($response->getBody()->getContents(), true);
return [
'success' => true,
'response' => $body['response'] ?? '',
'model' => $body['model'] ?? $model,
'total_duration' => $body['total_duration'] ?? 0,
];
} catch (\GuzzleHttp\Exception\GuzzleException $e) { // catching only RequestException misses ConnectException in Guzzle 7
Log::error('Ollama API Error: ' . $e->getMessage());
return [
'success' => false,
'error' => 'Failed to generate response: ' . $e->getMessage(),
'response' => '',
];
}
}
/**
* Stream responses for real-time interaction
*/
public function generateStream(string $model, string $prompt, ?callable $callback = null): \Generator
{
try {
$response = $this->client->post($this->baseUrl . '/api/generate', [
'json' => [
'model' => $model,
'prompt' => $prompt,
'stream' => true,
],
'stream' => true,
]);
$body = $response->getBody();
$buffer = '';
while (!$body->eof()) {
$buffer .= $body->read(1024);
// Ollama streams newline-delimited JSON; a read() chunk can split an
// object in half, so only decode complete lines
while (($pos = strpos($buffer, "\n")) !== false) {
$line = trim(substr($buffer, 0, $pos));
$buffer = substr($buffer, $pos + 1);
if ($line === '') {
continue;
}
$data = json_decode($line, true);
if ($data && isset($data['response'])) {
if ($callback) {
$callback($data['response']);
}
yield $data['response'];
}
}
}
} catch (\GuzzleHttp\Exception\GuzzleException $e) {
Log::error('Ollama Streaming Error: ' . $e->getMessage());
yield 'Error: ' . $e->getMessage();
}
}
/**
* Get available models
*/
public function getModels(): array
{
try {
$response = $this->client->get($this->baseUrl . '/api/tags');
$body = json_decode($response->getBody()->getContents(), true);
return $body['models'] ?? [];
} catch (\GuzzleHttp\Exception\GuzzleException $e) {
Log::error('Failed to fetch models: ' . $e->getMessage());
return [];
}
}
/**
* Check if Ollama service is running
*/
public function isHealthy(): bool
{
try {
$response = $this->client->get($this->baseUrl . '/api/tags', [
'timeout' => 5,
]);
return $response->getStatusCode() === 200;
} catch (\GuzzleHttp\Exception\GuzzleException $e) { // also catches connection-refused errors
return false;
}
}
}
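One subtlety worth dwelling on: Ollama streams newline-delimited JSON, and a raw socket read can end mid-object, which is why `generateStream()` buffers until it sees a newline. The same buffering idea as a standalone JavaScript function (illustrative only; the function name is mine):

```javascript
// Accumulate raw chunks and emit only complete NDJSON lines.
// Returns [events, remainder]: decoded objects plus the unfinished tail.
function drainNdjson(buffer, chunk) {
  let data = buffer + chunk;
  const events = [];
  let pos;
  while ((pos = data.indexOf('\n')) !== -1) {
    const line = data.slice(0, pos).trim();
    data = data.slice(pos + 1);
    if (line) {
      events.push(JSON.parse(line));
    }
  }
  return [events, data];
}

// Example: a JSON object split across two reads
const [first, rest1] = drainNdjson('', '{"response":"Hel');
const [second] = drainNdjson(rest1, 'lo"}\n{"response":" world"}\n');
console.log(first.length, second.map(e => e.response).join(''));
```

Naively calling `JSON.parse` (or `json_decode`) on each raw chunk fails exactly when the model is fastest, because chunk boundaries rarely line up with object boundaries.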
Configuring Laravel for Ollama Integration
Add Ollama configuration to your Laravel application:
<?php
// config/ollama.php
return [
'base_url' => env('OLLAMA_BASE_URL', 'http://localhost:11434'),
'default_model' => env('OLLAMA_DEFAULT_MODEL', 'llama2'),
'timeout' => (int) env('OLLAMA_TIMEOUT', 120),
'max_tokens' => (int) env('OLLAMA_MAX_TOKENS', 2048),
'temperature' => (float) env('OLLAMA_TEMPERATURE', 0.7), // env() returns strings; cast so JSON sends numbers
];
Update your .env file:
# .env
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_DEFAULT_MODEL=llama2
OLLAMA_TIMEOUT=120
OLLAMA_MAX_TOKENS=2048
OLLAMA_TEMPERATURE=0.7
Creating AI-Powered Controllers
Build controllers that utilize your Ollama service:
# Generate controller
php artisan make:controller AiChatController
<?php
// app/Http/Controllers/AiChatController.php
namespace App\Http\Controllers;
use App\Services\OllamaService;
use Illuminate\Http\Request;
use Illuminate\Http\JsonResponse;
use Illuminate\Support\Facades\Validator;
class AiChatController extends Controller
{
protected $ollama;
public function __construct(OllamaService $ollama)
{
$this->ollama = $ollama;
}
/**
* Display chat interface
*/
public function index()
{
$models = $this->ollama->getModels();
$isHealthy = $this->ollama->isHealthy();
return view('ai.chat', compact('models', 'isHealthy'));
}
/**
* Process chat message
*/
public function chat(Request $request): JsonResponse
{
$validator = Validator::make($request->all(), [
'message' => 'required|string|max:2000',
'model' => 'required|string',
]);
if ($validator->fails()) {
return response()->json([
'success' => false,
'errors' => $validator->errors()
], 422);
}
$message = $request->input('message');
$model = $request->input('model', config('ollama.default_model'));
// Add context or system prompt
$prompt = "You are a helpful AI assistant. Please respond to: " . $message;
$result = $this->ollama->generate($model, $prompt, [
'temperature' => config('ollama.temperature'),
'num_predict' => config('ollama.max_tokens'), // num_predict is Ollama's option for max output tokens
]);
return response()->json($result);
}
/**
* Stream chat responses
*/
public function chatStream(Request $request)
{
$validator = Validator::make($request->all(), [
'message' => 'required|string|max:2000',
'model' => 'required|string',
]);
if ($validator->fails()) {
return response()->json([
'success' => false,
'errors' => $validator->errors()
], 422);
}
$message = $request->input('message');
$model = $request->input('model', config('ollama.default_model'));
$prompt = "You are a helpful AI assistant. Please respond to: " . $message;
return response()->stream(function () use ($model, $prompt) {
echo "data: " . json_encode(['type' => 'start']) . "\n\n";
foreach ($this->ollama->generateStream($model, $prompt) as $chunk) {
echo "data: " . json_encode([
'type' => 'chunk',
'content' => $chunk
]) . "\n\n";
if (ob_get_level()) {
ob_flush();
}
flush();
}
echo "data: " . json_encode(['type' => 'end']) . "\n\n";
}, 200, [
'Content-Type' => 'text/event-stream', // the data: frames above follow the SSE format
'Cache-Control' => 'no-cache',
'Connection' => 'keep-alive',
'X-Accel-Buffering' => 'no', // keep nginx from buffering the stream
]);
}
/**
* Get available models
*/
public function models(): JsonResponse
{
$models = $this->ollama->getModels();
return response()->json([
'success' => true,
'models' => $models,
]);
}
/**
* Health check endpoint
*/
public function health(): JsonResponse
{
$isHealthy = $this->ollama->isHealthy();
return response()->json([
'success' => true,
'healthy' => $isHealthy,
'timestamp' => now()->toISOString(),
]);
}
}
Setting Up Routes and Middleware
Configure routes for your AI endpoints:
<?php
// routes/web.php
use App\Http\Controllers\AiChatController;
use Illuminate\Support\Facades\Route;
Route::prefix('ai')->group(function () {
Route::get('/', [AiChatController::class, 'index'])->name('ai.chat');
Route::post('/chat', [AiChatController::class, 'chat'])->name('ai.chat.send');
Route::post('/chat/stream', [AiChatController::class, 'chatStream'])->name('ai.chat.stream');
Route::get('/models', [AiChatController::class, 'models'])->name('ai.models');
Route::get('/health', [AiChatController::class, 'health'])->name('ai.health');
});
Add API routes for external integrations:
<?php
// routes/api.php
use App\Http\Controllers\AiChatController;
use Illuminate\Support\Facades\Route;
Route::prefix('ai')->middleware('throttle:60,1')->group(function () {
Route::post('/generate', [AiChatController::class, 'chat']);
Route::get('/models', [AiChatController::class, 'models']);
Route::get('/health', [AiChatController::class, 'health']);
});
Building the Frontend Interface
Create a responsive chat interface using Blade templates:
{{-- resources/views/ai/chat.blade.php --}}
@extends('layouts.app')
@section('title', 'AI Chat Assistant')
@section('content')
<div class="container mx-auto px-4 py-8">
<div class="max-w-4xl mx-auto">
<div class="bg-white rounded-lg shadow-lg overflow-hidden">
<!-- Header -->
<div class="bg-blue-600 text-white p-4">
<h1 class="text-2xl font-bold">AI Chat Assistant</h1>
<div class="flex items-center mt-2">
<span class="inline-block w-3 h-3 rounded-full mr-2 {{ $isHealthy ? 'bg-green-400' : 'bg-red-400' }}"></span>
<span class="text-sm">{{ $isHealthy ? 'Ollama Connected' : 'Ollama Disconnected' }}</span>
</div>
</div>
<!-- Model Selection -->
<div class="p-4 border-b">
<label for="model-select" class="block text-sm font-medium text-gray-700 mb-2">
Select AI Model:
</label>
<select id="model-select" class="w-full md:w-64 p-2 border border-gray-300 rounded-md">
@foreach($models as $model)
<option value="{{ $model['name'] }}">{{ $model['name'] }} ({{ $model['size'] }})</option>
@endforeach
</select>
</div>
<!-- Chat Messages -->
<div id="chat-messages" class="h-96 overflow-y-auto p-4 space-y-4">
<div class="flex justify-center">
<div class="bg-gray-100 rounded-lg p-3 text-sm text-gray-600">
Start a conversation with your AI assistant
</div>
</div>
</div>
<!-- Input Form -->
<div class="p-4 border-t">
<form id="chat-form" class="flex space-x-2">
<input
type="text"
id="message-input"
placeholder="Type your message..."
class="flex-1 p-3 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
maxlength="2000"
>
<button
type="submit"
id="send-button"
class="bg-blue-600 text-white px-6 py-3 rounded-md hover:bg-blue-700 disabled:opacity-50 disabled:cursor-not-allowed"
>
Send
</button>
</form>
</div>
</div>
</div>
</div>
<script>
document.addEventListener('DOMContentLoaded', function() {
const chatForm = document.getElementById('chat-form');
const messageInput = document.getElementById('message-input');
const sendButton = document.getElementById('send-button');
const chatMessages = document.getElementById('chat-messages');
const modelSelect = document.getElementById('model-select');
// Add message to chat
function addMessage(content, isUser = false) {
const messageDiv = document.createElement('div');
messageDiv.className = `flex ${isUser ? 'justify-end' : 'justify-start'}`;
messageDiv.innerHTML = `
<div class="max-w-xs lg:max-w-md px-4 py-2 rounded-lg ${
isUser
? 'bg-blue-600 text-white'
: 'bg-gray-100 text-gray-800'
}">
${content}
</div>
`;
chatMessages.appendChild(messageDiv);
chatMessages.scrollTop = chatMessages.scrollHeight;
}
// Handle form submission
chatForm.addEventListener('submit', async function(e) {
e.preventDefault();
const message = messageInput.value.trim();
if (!message) return;
const selectedModel = modelSelect.value;
// Add user message
addMessage(message, true);
messageInput.value = '';
// Disable form
sendButton.disabled = true;
sendButton.textContent = 'Sending...';
try {
const response = await fetch('{{ route("ai.chat.send") }}', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').getAttribute('content')
},
body: JSON.stringify({
message: message,
model: selectedModel
})
});
const data = await response.json();
if (data.success) {
addMessage(data.response);
} else {
addMessage('Error: ' + (data.error || 'Failed to get response'));
}
} catch (error) {
console.error('Error:', error);
addMessage('Error: Failed to send message');
} finally {
sendButton.disabled = false;
sendButton.textContent = 'Send';
}
});
// Handle Enter key
messageInput.addEventListener('keypress', function(e) {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
chatForm.dispatchEvent(new Event('submit'));
}
});
});
</script>
@endsection
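The frontend above uses the non-streaming endpoint. If you wire it up to `/ai/chat/stream` instead, the client has to parse `data: {...}` frames separated by blank lines. A minimal JavaScript parser for that frame format, for illustration (the `start`/`chunk`/`end` type names match the controller; the function name is mine):

```javascript
// Split an SSE-style buffer into parsed event payloads.
function parseSseFrames(text) {
  return text
    .split('\n\n')                       // frames are separated by a blank line
    .map(f => f.trim())
    .filter(f => f.startsWith('data: '))
    .map(f => JSON.parse(f.slice('data: '.length)));
}

const frames = parseSseFrames(
  'data: {"type":"start"}\n\n' +
  'data: {"type":"chunk","content":"Hi"}\n\n' +
  'data: {"type":"end"}\n\n'
);
const reply = frames.filter(f => f.type === 'chunk').map(f => f.content).join('');
console.log(reply); // Hi
```

In the browser you would read `response.body` with a `ReadableStream` reader and feed each decoded chunk through a parser like this, appending `chunk` content to the message bubble as it arrives.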
Error Handling and Logging
Implement comprehensive error handling:
<?php
// app/Exceptions/OllamaException.php
namespace App\Exceptions;
use Exception;
class OllamaException extends Exception
{
public function __construct($message = "Ollama service error", $code = 500, ?Exception $previous = null)
{
parent::__construct($message, $code, $previous);
}
public function render($request)
{
return response()->json([
'success' => false,
'error' => $this->getMessage(),
'code' => $this->getCode(),
], $this->getCode());
}
}
Add logging configuration:
<?php
// config/logging.php
'channels' => [
// ... existing channels
'ollama' => [
'driver' => 'single',
'path' => storage_path('logs/ollama.log'),
'level' => env('LOG_LEVEL', 'debug'),
],
],
Testing Your AI Integration
Create comprehensive tests for your Ollama integration:
# Generate test files
php artisan make:test OllamaServiceTest
php artisan make:test AiChatControllerTest
<?php
// tests/Unit/OllamaServiceTest.php
namespace Tests\Unit;
use App\Services\OllamaService;
use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;
use Tests\TestCase; // use Laravel's base TestCase so the config() helper works in the service constructor
class OllamaServiceTest extends TestCase
{
protected $mockHandler;
protected $ollama;
protected function setUp(): void
{
parent::setUp();
$this->mockHandler = new MockHandler();
$handlerStack = HandlerStack::create($this->mockHandler);
$client = new Client(['handler' => $handlerStack]);
$this->ollama = new OllamaService();
$this->ollama->setClient($client);
}
public function test_generate_returns_successful_response()
{
$this->mockHandler->append(
new Response(200, [], json_encode([
'response' => 'Hello! How can I help you today?',
'model' => 'llama2',
'total_duration' => 1000000000,
]))
);
$result = $this->ollama->generate('llama2', 'Hello');
$this->assertTrue($result['success']);
$this->assertEquals('Hello! How can I help you today?', $result['response']);
$this->assertEquals('llama2', $result['model']);
}
public function test_health_check_returns_true_when_service_available()
{
$this->mockHandler->append(
new Response(200, [], json_encode(['models' => []]))
);
$isHealthy = $this->ollama->isHealthy();
$this->assertTrue($isHealthy);
}
}
Performance Optimization Strategies
Optimize your Laravel Ollama integration for better performance:
Caching Responses
<?php
// app/Services/OllamaService.php
use Illuminate\Support\Facades\Cache;
class OllamaService
{
/**
* Generate with caching for repeated queries
*/
public function generateWithCache(string $model, string $prompt, array $options = [], int $cacheTtl = 3600): array
{
$cacheKey = 'ollama_' . md5($model . $prompt . serialize($options));
return Cache::remember($cacheKey, $cacheTtl, function () use ($model, $prompt, $options) {
return $this->generate($model, $prompt, $options);
});
}
}
Queue Processing
# Generate job for background processing
php artisan make:job ProcessAiRequest
<?php
// app/Jobs/ProcessAiRequest.php
namespace App\Jobs;
use App\Services\OllamaService;
use Illuminate\Bus\Queueable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
class ProcessAiRequest implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
protected $model;
protected $prompt;
protected $userId;
public function __construct(string $model, string $prompt, int $userId)
{
$this->model = $model;
$this->prompt = $prompt;
$this->userId = $userId;
}
public function handle(OllamaService $ollama)
{
$result = $ollama->generate($this->model, $this->prompt);
// Store result or broadcast to user
// Implementation depends on your needs
}
}
Deployment Configuration
Configure your production environment for Ollama:
Docker Setup
# Dockerfile
FROM php:8.1-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Note: run Ollama as its own container/service in production
# (matching OLLAMA_BASE_URL=http://ollama:11434) rather than inside php-fpm
WORKDIR /var/www
COPY . .
RUN composer install --no-interaction --prefer-dist --optimize-autoloader
# Set permissions
RUN chown -R www-data:www-data /var/www \
&& chmod -R 755 /var/www/storage
EXPOSE 9000
CMD ["php-fpm"]
Environment Variables
# Production .env
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_DEFAULT_MODEL=llama2
OLLAMA_TIMEOUT=300
OLLAMA_MAX_TOKENS=4096
OLLAMA_TEMPERATURE=0.7
# Queue configuration
QUEUE_CONNECTION=redis
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
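The `OLLAMA_BASE_URL=http://ollama:11434` value assumes Ollama runs as its own container next to the app, reachable by service name. A minimal docker-compose sketch of that layout (service names, image tags, and the build context are assumptions, not a production-ready file):

```yaml
# docker-compose.yml (illustrative)
services:
  app:
    build: .
    depends_on:
      - ollama
      - redis
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama   # persist downloaded models across restarts

  redis:
    image: redis:alpine

volumes:
  ollama-data:
```

Keeping Ollama in its own container lets you give it dedicated memory limits and restart it independently of PHP-FPM.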
Security Best Practices
Implement security measures for your AI integration:
Input Validation and Sanitization
<?php
// app/Http/Requests/AiChatRequest.php
namespace App\Http\Requests;
use Illuminate\Foundation\Http\FormRequest;
class AiChatRequest extends FormRequest
{
public function authorize()
{
return true;
}
public function rules()
{
return [
'message' => [
'required',
'string',
'max:2000',
'regex:/^[a-zA-Z0-9\s\.,!?;:()"\'-]+$/', // Basic character whitelist
],
'model' => [
'required',
'string',
'in:llama2,mistral,codellama', // Whitelist allowed models
],
];
}
public function messages()
{
return [
'message.regex' => 'Message contains invalid characters.',
'model.in' => 'Selected model is not available.',
];
}
}
Rate Limiting
<?php
// app/Http/Middleware/AiRateLimit.php
namespace App\Http\Middleware;
use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;
class AiRateLimit
{
public function handle(Request $request, Closure $next)
{
$key = 'ai_requests:' . $request->ip();
if (RateLimiter::tooManyAttempts($key, 10)) {
return response()->json([
'success' => false,
'error' => 'Too many AI requests. Please try again later.',
], 429);
}
RateLimiter::hit($key, 60); // count this attempt; the window decays after 60 seconds (10 requests/minute)
return $next($request);
}
}
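Under the hood, `tooManyAttempts()`/`hit()` implement a fixed decay window: the first hit opens a window, further hits increment a counter, and the counter resets when the window expires. A toy JavaScript model of those semantics (names and structure are mine, not Laravel's internals):

```javascript
// Minimal fixed-window limiter mirroring tooManyAttempts()/hit() semantics.
class FixedWindowLimiter {
  constructor(maxAttempts, decaySeconds) {
    this.maxAttempts = maxAttempts;
    this.decaySeconds = decaySeconds;
    this.windows = new Map(); // key -> { count, expiresAt }
  }
  tooManyAttempts(key, now = Date.now()) {
    const w = this.windows.get(key);
    if (!w || w.expiresAt <= now) return false; // no window, or it expired
    return w.count >= this.maxAttempts;
  }
  hit(key, now = Date.now()) {
    const w = this.windows.get(key);
    if (!w || w.expiresAt <= now) {
      this.windows.set(key, { count: 1, expiresAt: now + this.decaySeconds * 1000 });
    } else {
      w.count += 1;
    }
  }
}

const limiter = new FixedWindowLimiter(10, 60);
for (let i = 0; i < 10; i++) limiter.hit('1.2.3.4', 0);
console.log(limiter.tooManyAttempts('1.2.3.4', 0));      // true
console.log(limiter.tooManyAttempts('1.2.3.4', 61000));  // false (window expired)
```

AI endpoints deserve tighter limits than ordinary routes because each request can tie up a model for many seconds of inference.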
Monitoring and Analytics
Track your AI integration performance:
<?php
// app/Services/OllamaService.php
use Illuminate\Support\Facades\DB;
class OllamaService
{
/**
* Generate with metrics tracking
*/
public function generateWithMetrics(string $model, string $prompt, array $options = []): array
{
$startTime = microtime(true);
$result = $this->generate($model, $prompt, $options);
$endTime = microtime(true);
$duration = ($endTime - $startTime) * 1000; // Convert to milliseconds
// Log metrics
DB::table('ai_metrics')->insert([
'model' => $model,
'prompt_length' => strlen($prompt),
'response_length' => strlen($result['response'] ?? ''),
'duration_ms' => $duration,
'success' => $result['success'],
'created_at' => now(),
]);
return $result;
}
}
Troubleshooting Common Issues
Ollama Service Not Starting
# Check Ollama service status
systemctl status ollama
# Check logs
journalctl -u ollama -f
# Restart service
sudo systemctl restart ollama
Memory Issues
# Check available memory
free -h
# Monitor Ollama process
htop -p $(pgrep ollama)
# Use smaller models for limited memory
ollama pull llama2:7b-chat-q4_0 # Quantized 4-bit model
Network Connectivity
# Test Ollama API
curl http://localhost:11434/api/tags
# Check firewall settings
sudo ufw status
# Test from Laravel
php artisan tinker
>>> app(App\Services\OllamaService::class)->isHealthy()
Advanced Features and Extensions
Custom Model Fine-tuning
# Create custom model file
echo "FROM llama2
SYSTEM You are a helpful Laravel development assistant." > Modelfile
# Build custom model
ollama create laravel-assistant -f Modelfile
Multi-modal Capabilities
<?php
// app/Services/OllamaService.php
/**
* Generate a response with image input (vision-capable models such as llava)
*/
public function generateWithImage(string $model, string $prompt, string $imagePath): array
{
$imageData = base64_encode(file_get_contents($imagePath));
try {
// "images" is a top-level field of /api/generate, not a model option,
// so it cannot be passed through the generate() options array
$response = $this->client->post($this->baseUrl . '/api/generate', [
'json' => ['model' => $model, 'prompt' => $prompt, 'stream' => false, 'images' => [$imageData]],
]);
$body = json_decode($response->getBody()->getContents(), true);
return ['success' => true, 'response' => $body['response'] ?? ''];
} catch (\GuzzleHttp\Exception\GuzzleException $e) {
return ['success' => false, 'error' => $e->getMessage(), 'response' => ''];
}
}
Conclusion
Integrating Ollama with Laravel opens up powerful AI capabilities for your web applications without external API dependencies or per-token costs. This tutorial covered the complete setup process, from installation to deployment, with practical PHP code examples and security best practices.
Key benefits of this Laravel Ollama integration include:
- Cost-effective AI features - No API fees or usage limits
- Data privacy - Everything runs locally on your servers
- Customizable models - Fine-tune AI responses for your specific needs
- Scalable architecture - Queue processing and caching for high-traffic applications
Start with the basic chat interface, then expand to more sophisticated AI features like code generation, content analysis, or customer support automation. The combination of Laravel's robust framework and Ollama's local AI capabilities provides a solid foundation for building intelligent web applications.
Real-World Implementation Examples
Building an AI-Powered Code Assistant
Create a specialized controller for code-related queries:
<?php
// app/Http/Controllers/CodeAssistantController.php
namespace App\Http\Controllers;
use App\Services\OllamaService;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Validator;
class CodeAssistantController extends Controller
{
protected $ollama;
public function __construct(OllamaService $ollama)
{
$this->ollama = $ollama;
}
/**
* Generate code based on requirements
*/
public function generateCode(Request $request)
{
$validator = Validator::make($request->all(), [
'language' => 'required|string|in:php,javascript,python,java,css,html',
'description' => 'required|string|max:1000',
'framework' => 'nullable|string|max:50',
]);
if ($validator->fails()) {
return response()->json([
'success' => false,
'errors' => $validator->errors()
], 422);
}
$language = $request->input('language');
$description = $request->input('description');
$framework = $request->input('framework');
$prompt = $this->buildCodePrompt($language, $description, $framework);
$result = $this->ollama->generate('codellama', $prompt, [
'temperature' => 0.3, // Lower temperature for more consistent code
'num_predict' => 4096, // Ollama's option name for max output tokens
]);
if ($result['success']) {
// Extract code from response
$code = $this->extractCode($result['response']);
return response()->json([
'success' => true,
'code' => $code,
'explanation' => $result['response'],
'language' => $language,
]);
}
return response()->json($result);
}
/**
* Review and improve existing code
*/
public function reviewCode(Request $request)
{
$validator = Validator::make($request->all(), [
'code' => 'required|string|max:5000',
'language' => 'required|string',
'review_type' => 'required|string|in:security,performance,style,bugs',
]);
if ($validator->fails()) {
return response()->json([
'success' => false,
'errors' => $validator->errors()
], 422);
}
$code = $request->input('code');
$language = $request->input('language');
$reviewType = $request->input('review_type');
$prompt = "Please review this {$language} code for {$reviewType} issues and provide suggestions for improvement:\n\n```{$language}\n{$code}\n```\n\nProvide specific recommendations and explain why each change is beneficial.";
$result = $this->ollama->generate('codellama', $prompt, [
'temperature' => 0.4,
'num_predict' => 3000,
]);
return response()->json($result);
}
private function buildCodePrompt(string $language, string $description, ?string $framework): string
{
$prompt = "Generate {$language} code that {$description}.";
if ($framework) {
$prompt .= " Use the {$framework} framework.";
}
$prompt .= "\n\nRequirements:\n";
$prompt .= "- Write clean, well-commented code\n";
$prompt .= "- Follow {$language} best practices\n";
$prompt .= "- Include error handling where appropriate\n";
$prompt .= "- Provide a brief explanation of the code\n\n";
return $prompt;
}
private function extractCode(string $response): string
{
// Extract code blocks from markdown-style responses
if (preg_match('/```[\w]*\n(.*?)\n```/s', $response, $matches)) {
return trim($matches[1]);
}
return $response;
}
}
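`extractCode()` relies on a single fenced-block regex: grab the first fenced block if one exists, otherwise return the whole reply. Its behavior, mirrored in JavaScript for illustration (the fence string is built dynamically only so this sample avoids literal backtick runs):

```javascript
// Return the first fenced code block, or the whole reply if none is found.
const FENCE = '`'.repeat(3);
const BLOCK = new RegExp(FENCE + '\\w*\\n([\\s\\S]*?)\\n' + FENCE);

function extractCode(response) {
  const match = response.match(BLOCK);
  return match ? match[1].trim() : response;
}

const reply = 'Here is the code:\n' + FENCE + 'php\necho "hi";\n' + FENCE + '\nDone.';
console.log(extractCode(reply)); // echo "hi";
console.log(extractCode('no fences here')); // no fences here
```

Note this grabs only the first block; if the model emits several snippets, you may want to match globally and let the caller pick.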
Content Analysis and Summarization
Build a content processing system:
<?php
// app/Http/Controllers/ContentAnalysisController.php
namespace App\Http\Controllers;
use App\Services\OllamaService;
use Illuminate\Http\Request;
class ContentAnalysisController extends Controller
{
protected $ollama;
public function __construct(OllamaService $ollama)
{
$this->ollama = $ollama;
}
/**
* Summarize long text content
*/
public function summarize(Request $request)
{
$request->validate([
'content' => 'required|string|max:10000',
'length' => 'nullable|string|in:short,medium,long',
]);
$content = $request->input('content');
$length = $request->input('length', 'medium');
$lengthInstructions = [
'short' => 'in 2-3 sentences',
'medium' => 'in 1-2 paragraphs',
'long' => 'in 3-4 paragraphs with key points',
];
$prompt = "Summarize the following text {$lengthInstructions[$length]}. Focus on the main ideas and key information:\n\n{$content}";
$result = $this->ollama->generate('llama2', $prompt, [
'temperature' => 0.5,
'num_predict' => 1000,
]);
return response()->json($result);
}
/**
* Analyze sentiment of text
*/
public function analyzeSentiment(Request $request)
{
$request->validate([
'text' => 'required|string|max:2000',
]);
$text = $request->input('text');
$prompt = "Analyze the sentiment of the following text. Classify it as positive, negative, or neutral, and explain your reasoning. Also provide a confidence score from 0-100:\n\n{$text}";
$result = $this->ollama->generate('llama2', $prompt, [
'temperature' => 0.3,
'num_predict' => 500,
]);
// Parse sentiment from response
$sentiment = $this->parseSentiment($result['response']);
return response()->json([
'success' => $result['success'],
'sentiment' => $sentiment,
'analysis' => $result['response'],
]);
}
/**
* Extract key topics from text
*/
public function extractTopics(Request $request)
{
$request->validate([
'content' => 'required|string|max:5000',
'num_topics' => 'nullable|integer|min:3|max:10',
]);
$content = $request->input('content');
$numTopics = $request->input('num_topics', 5);
$prompt = "Extract the top {$numTopics} key topics or themes from the following text. For each topic, provide a brief description:\n\n{$content}";
$result = $this->ollama->generate('llama2', $prompt, [
'temperature' => 0.4,
'num_predict' => 800,
]);
return response()->json($result);
}
private function parseSentiment(string $response): array
{
$sentiment = 'neutral';
$confidence = 50;
// Simple parsing logic - in production, use more sophisticated NLP
if (preg_match('/\b(positive|negative|neutral)\b/i', $response, $matches)) {
$sentiment = strtolower($matches[1]);
}
if (preg_match('/(\d+)%?\s*confidence/i', $response, $matches)) {
$confidence = (int) $matches[1];
}
return [
'label' => $sentiment,
'confidence' => $confidence,
];
}
}
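`parseSentiment()` is deliberately simple regex scraping: one pattern for the label, one for the confidence, with neutral/50 defaults. The same logic in JavaScript, handy for eyeballing edge cases before trusting it on real model output:

```javascript
// Pull a sentiment label and confidence score out of free-form model text.
function parseSentiment(response) {
  let label = 'neutral';
  let confidence = 50;
  const labelMatch = response.match(/\b(positive|negative|neutral)\b/i);
  if (labelMatch) label = labelMatch[1].toLowerCase();
  const confMatch = response.match(/(\d+)%?\s*confidence/i);
  if (confMatch) confidence = parseInt(confMatch[1], 10);
  return { label, confidence };
}

const out = parseSentiment('The text is clearly Positive, with 85% confidence.');
console.log(out.label, out.confidence); // positive 85
```

If the model happens to mention two labels ("not positive, rather negative"), the first match wins, which is exactly the kind of case the "use more sophisticated NLP in production" caveat is about.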
Database Integration and Migration
Create database tables for storing AI interactions:
<?php
// database/migrations/create_ai_conversations_table.php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
public function up()
{
Schema::create('ai_conversations', function (Blueprint $table) {
$table->id();
$table->string('session_id')->index();
$table->unsignedBigInteger('user_id')->nullable();
$table->string('model');
$table->text('prompt');
$table->longText('response');
$table->json('metadata')->nullable();
$table->integer('response_time_ms')->nullable();
$table->boolean('success')->default(true);
$table->timestamps();
$table->foreign('user_id')->references('id')->on('users')->onDelete('cascade');
});
}
public function down()
{
Schema::dropIfExists('ai_conversations');
}
};
<?php
// database/migrations/create_ai_metrics_table.php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
public function up()
{
Schema::create('ai_metrics', function (Blueprint $table) {
$table->id();
$table->string('model');
$table->integer('prompt_length');
$table->integer('response_length')->nullable();
$table->integer('duration_ms');
$table->boolean('success');
$table->string('error_message')->nullable();
$table->string('ip_address')->nullable();
$table->timestamp('created_at')->useCurrent();
$table->index(['model', 'created_at']);
$table->index('success');
});
}
public function down()
{
Schema::dropIfExists('ai_metrics');
}
};
Enhanced Service with Database Logging
<?php
// app/Services/EnhancedOllamaService.php
<?php
// app/Services/EnhancedOllamaService.php
namespace App\Services;

use App\Models\AiConversation;
use App\Models\AiMetric;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

class EnhancedOllamaService extends OllamaService
{
    /**
     * Generate with full logging and persistence
     */
    public function generateWithLogging(
        string $model,
        string $prompt,
        array $options = [],
        ?string $sessionId = null,
        ?int $userId = null
    ): array {
        $startTime = microtime(true);
        $result = $this->generate($model, $prompt, $options);
        $duration = (microtime(true) - $startTime) * 1000;

        // Persist the exchange and its metrics
        $this->logConversation($model, $prompt, $result, $sessionId, $userId, $duration);
        $this->logMetrics($model, $prompt, $result, $duration);

        return $result;
    }

    /**
     * Get conversation history for a session
     */
    public function getConversationHistory(string $sessionId, int $limit = 50): array
    {
        return AiConversation::where('session_id', $sessionId)
            ->orderBy('created_at', 'desc')
            ->limit($limit)
            ->get()
            ->map(function ($conversation) {
                return [
                    'id' => $conversation->id,
                    'prompt' => $conversation->prompt,
                    'response' => $conversation->response,
                    'model' => $conversation->model,
                    'timestamp' => $conversation->created_at->toISOString(),
                    'response_time' => $conversation->response_time_ms,
                ];
            })
            ->toArray();
    }

    /**
     * Get aggregate usage statistics for the last N days
     */
    public function getUsageStats(int $days = 30): array
    {
        $stats = DB::table('ai_metrics')
            ->where('created_at', '>=', now()->subDays($days))
            ->selectRaw('
                model,
                COUNT(*) as total_requests,
                AVG(duration_ms) as avg_response_time,
                SUM(CASE WHEN success = 1 THEN 1 ELSE 0 END) as successful_requests,
                AVG(prompt_length) as avg_prompt_length,
                AVG(response_length) as avg_response_length
            ')
            ->groupBy('model')
            ->get()
            ->toArray();

        return array_map(fn ($stat) => (array) $stat, $stats);
    }

    private function logConversation(
        string $model,
        string $prompt,
        array $result,
        ?string $sessionId,
        ?int $userId,
        float $duration
    ): void {
        try {
            AiConversation::create([
                'session_id' => $sessionId ?? 'anonymous',
                'user_id' => $userId,
                'model' => $model,
                'prompt' => $prompt,
                'response' => $result['response'] ?? '',
                'metadata' => [
                    'total_duration' => $result['total_duration'] ?? 0,
                    'success' => $result['success'] ?? false,
                    'error' => $result['error'] ?? null,
                ],
                'response_time_ms' => (int) $duration,
                'success' => $result['success'] ?? false,
            ]);
        } catch (\Exception $e) {
            // Logging must never break the request that triggered it
            Log::error('Failed to log conversation: ' . $e->getMessage());
        }
    }

    private function logMetrics(string $model, string $prompt, array $result, float $duration): void
    {
        try {
            AiMetric::create([
                'model' => $model,
                'prompt_length' => strlen($prompt),
                'response_length' => strlen($result['response'] ?? ''),
                'duration_ms' => (int) $duration,
                'success' => $result['success'] ?? false,
                'error_message' => $result['error'] ?? null,
                'ip_address' => request()->ip(),
            ]);
        } catch (\Exception $e) {
            Log::error('Failed to log metrics: ' . $e->getMessage());
        }
    }
}
Model Classes
<?php
// app/Models/AiConversation.php
namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;

class AiConversation extends Model
{
    protected $fillable = [
        'session_id',
        'user_id',
        'model',
        'prompt',
        'response',
        'metadata',
        'response_time_ms',
        'success',
    ];

    protected $casts = [
        'metadata' => 'array',
        'success' => 'boolean',
    ];

    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }
}
<?php
// app/Models/AiMetric.php
namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class AiMetric extends Model
{
    // The table only tracks created_at, so disable automatic timestamps
    public $timestamps = false;

    protected $fillable = [
        'model',
        'prompt_length',
        'response_length',
        'duration_ms',
        'success',
        'error_message',
        'ip_address',
        'created_at',
    ];

    protected $casts = [
        'success' => 'boolean',
        'created_at' => 'datetime',
    ];

    protected static function booted(): void
    {
        static::creating(function ($model) {
            $model->created_at = now();
        });
    }
}
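The two models above assume `ai_conversations` and `ai_metrics` tables that were not shown earlier. Here is a minimal migration sketch, with column types inferred from the models' `$fillable` and `$casts` arrays; the filename is hypothetical, and you should adjust lengths and indexes for your workload:

```php
<?php
// database/migrations/2024_01_01_000000_create_ai_tables.php (hypothetical filename)
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('ai_conversations', function (Blueprint $table) {
            $table->id();
            $table->string('session_id')->index();
            $table->foreignId('user_id')->nullable()->constrained()->nullOnDelete();
            $table->string('model');
            $table->text('prompt');
            $table->text('response');
            $table->json('metadata')->nullable();
            $table->unsignedInteger('response_time_ms');
            $table->boolean('success')->default(true);
            $table->timestamps();
        });

        Schema::create('ai_metrics', function (Blueprint $table) {
            $table->id();
            $table->string('model')->index();
            $table->unsignedInteger('prompt_length');
            $table->unsignedInteger('response_length');
            $table->unsignedInteger('duration_ms');
            $table->boolean('success');
            $table->text('error_message')->nullable();
            $table->string('ip_address', 45)->nullable(); // 45 chars covers IPv6
            $table->timestamp('created_at')->index();
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('ai_metrics');
        Schema::dropIfExists('ai_conversations');
    }
};
```

Run it with `php artisan migrate`. Indexing `session_id`, `model`, and `created_at` keeps the history and stats queries fast as the tables grow.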
API Documentation and OpenAPI Spec
Document the endpoints with swagger-php `@OA` annotations. A package such as darkaonline/l5-swagger can compile these docblocks into a browsable OpenAPI spec:
<?php
// routes/api.php - with OpenAPI annotations
use App\Http\Controllers\AiChatController;
use App\Http\Controllers\CodeAssistantController;
use App\Http\Controllers\ContentAnalysisController;
use Illuminate\Support\Facades\Route;

/**
 * @OA\Info(
 *     title="Laravel Ollama API",
 *     version="1.0.0",
 *     description="AI-powered web application API using Ollama and Laravel"
 * )
 */
Route::prefix('ai')->middleware(['throttle:60,1', 'auth:sanctum'])->group(function () {
    /**
     * @OA\Post(
     *     path="/ai/chat",
     *     summary="Send chat message to AI",
     *     @OA\RequestBody(
     *         required=true,
     *         @OA\JsonContent(
     *             @OA\Property(property="message", type="string", example="Hello, how are you?"),
     *             @OA\Property(property="model", type="string", example="llama2")
     *         )
     *     ),
     *     @OA\Response(response=200, description="AI response")
     * )
     */
    Route::post('/chat', [AiChatController::class, 'chat']);

    /**
     * @OA\Post(
     *     path="/ai/code/generate",
     *     summary="Generate code based on description",
     *     @OA\RequestBody(
     *         required=true,
     *         @OA\JsonContent(
     *             @OA\Property(property="language", type="string", example="php"),
     *             @OA\Property(property="description", type="string", example="Create a function to validate email addresses"),
     *             @OA\Property(property="framework", type="string", example="laravel")
     *         )
     *     ),
     *     @OA\Response(response=200, description="Generated code")
     * )
     */
    Route::post('/code/generate', [CodeAssistantController::class, 'generateCode']);

    /**
     * @OA\Post(
     *     path="/ai/content/summarize",
     *     summary="Summarize text content",
     *     @OA\RequestBody(
     *         required=true,
     *         @OA\JsonContent(
     *             @OA\Property(property="content", type="string"),
     *             @OA\Property(property="length", type="string", enum={"short", "medium", "long"})
     *         )
     *     ),
     *     @OA\Response(response=200, description="Content summary")
     * )
     */
    Route::post('/content/summarize', [ContentAnalysisController::class, 'summarize']);

    Route::get('/models', [AiChatController::class, 'models']);
    Route::get('/health', [AiChatController::class, 'health']);
    Route::get('/stats', [AiChatController::class, 'getStats']);
});
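With the routes registered, you can smoke-test the endpoints from the command line. A quick check with curl, assuming the app is served locally on port 8000 and `$TOKEN` holds a valid Sanctum token (both are placeholders for your own setup):

```bash
# Send a chat message to the AI endpoint
curl -X POST http://localhost:8000/api/ai/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?", "model": "llama2"}'

# List the models Ollama has available
curl -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/ai/models
```

Note that `Route::prefix('ai')` in `routes/api.php` puts everything under `/api/ai/...`, and the `throttle:60,1` middleware will start returning 429 responses after 60 requests per minute.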
Production Deployment with Docker Compose
# docker-compose.yml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - mysql
      - redis
      - ollama
    environment:
      - DB_HOST=mysql
      - REDIS_HOST=redis
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - .:/var/www
      - ./storage:/var/www/storage
    command: php artisan serve --host=0.0.0.0 --port=8000

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_ORIGINS=*
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  mysql:
    image: mysql:8.0
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: laravel_ollama
      MYSQL_USER: laravel
      MYSQL_PASSWORD: password
    volumes:
      - mysql_data:/var/lib/mysql

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - app

volumes:
  ollama_data:
  mysql_data:
  redis_data:
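Once the compose file is in place, bring the stack up and load a model into the Ollama container. These commands are a sketch; the service names must match your compose file, and the GPU reservation requires the NVIDIA Container Toolkit on the host (drop the `deploy` block to run CPU-only):

```bash
# Build images and start all services in the background
docker compose up -d --build

# Pull a model inside the running Ollama container
docker compose exec ollama ollama pull llama2

# Run Laravel migrations against the MySQL container
docker compose exec app php artisan migrate --force
```

The model download persists in the `ollama_data` volume, so it survives container restarts and only needs to be pulled once.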
Performance Monitoring Dashboard
<?php
// app/Http/Controllers/DashboardController.php
namespace App\Http\Controllers;

use App\Services\EnhancedOllamaService;

class DashboardController extends Controller
{
    protected $ollama;

    public function __construct(EnhancedOllamaService $ollama)
    {
        $this->ollama = $ollama;
    }

    public function index()
    {
        $stats = $this->ollama->getUsageStats(30);
        $health = $this->ollama->isHealthy();
        $models = $this->ollama->getModels();

        return view('dashboard.ai-analytics', compact('stats', 'health', 'models'));
    }

    public function apiStats()
    {
        return response()->json([
            'usage_stats' => $this->ollama->getUsageStats(7),
            'health' => $this->ollama->isHealthy(),
            'models' => $this->ollama->getModels(),
            'timestamp' => now()->toISOString(),
        ]);
    }
}
This Laravel Ollama integration provides a production-ready foundation for AI-powered web applications, covering everything from basic setup to advanced features such as code generation, content analysis, database logging, and performance monitoring. With proper deployment and security measures in place, you can build sophisticated AI applications that run entirely on your own infrastructure.