LangSmith Self-Hosted: Deploy on Your Infrastructure 2026

Deploy LangSmith on your own server with Docker Compose. Full setup for tracing, evaluation, and LLM observability on private infrastructure.

Problem: LangSmith Cloud Doesn't Work for Your Use Case

You're logging LLM traces to LangSmith Cloud, but your company has data residency requirements, a tight SaaS budget, or you simply can't send production prompts off-premises. The self-hosted option exists — but the setup requires coordinating Postgres, Redis, object storage, and several LangSmith services at once.

You'll learn:

  • How to run LangSmith entirely on your own server with Docker Compose
  • How to configure storage, authentication, and API keys
  • How to point your existing LangChain or LangGraph apps at the self-hosted instance

Time: 30 min | Difficulty: Intermediate


Why This Happens

LangSmith's self-hosted deployment ships as a set of Docker images coordinated by an official docker-compose.yaml. It needs:

  • PostgreSQL 14+ — stores runs, projects, feedback
  • Redis 6+ — queues and caching
  • Object storage — S3-compatible bucket for large payloads (traces with images, long outputs)
  • Five LangSmith services — backend, frontend, platform-backend, worker, playground

Miss any dependency or misconfigure an env variable and the services silently fail to connect. This guide wires them all up correctly the first time.

Symptoms if misconfigured:

  • frontend loads but traces never appear
  • LANGCHAIN_ENDPOINT set correctly but SDK throws 401 Unauthorized
  • Worker container restarts in a loop with redis connection refused
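Before combing through container logs, it can help to confirm the datastores are even reachable. A minimal preflight sketch in Python, assuming the stack's default ports (5432 for Postgres, 6379 for Redis); adjust hosts and ports to match your compose file:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed default ports; override if your .env points at external services
for name, host, port in [("postgres", "localhost", 5432),
                         ("redis", "localhost", 6379)]:
    state = "reachable" if port_open(host, port) else "NOT reachable"
    print(f"{name:9s} {host}:{port}  {state}")
```

If Redis shows as NOT reachable here, the worker's crash loop is expected and the fix is in the datastore, not the worker.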

Prerequisites

  • Docker 24+ and Docker Compose v2 (docker compose not docker-compose)
  • A server or VM with at least 4 vCPU / 16GB RAM / 50GB disk
  • An S3-compatible bucket (AWS S3, MinIO, Cloudflare R2, or Backblaze B2)
  • A LangSmith license key — request one at smith.langchain.com/settings under Self-Hosted

Solution

Step 1: Download the Official Compose File

LangChain ships a pinned release of the Compose config. Don't write your own — use theirs and override via .env.

# Pull the latest stable self-hosted release
curl -O https://raw.githubusercontent.com/langchain-ai/langsmith-sdk/main/python/langsmith/self_hosted/docker-compose.yaml

# Also grab the example env file
curl -O https://raw.githubusercontent.com/langchain-ai/langsmith-sdk/main/python/langsmith/self_hosted/.env.example

cp .env.example .env

Expected output:

docker-compose.yaml  .env

Step 2: Configure the .env File

Open .env and set these required values. Everything else can stay as the default for a first deployment.

# ── License ──────────────────────────────────────────────────────
LANGSMITH_LICENSE_KEY=ls-your-license-key-here

# ── Postgres ─────────────────────────────────────────────────────
POSTGRES_DB=langsmith
POSTGRES_USER=langsmith
POSTGRES_PASSWORD=changeme-use-a-real-secret

# ── Redis ────────────────────────────────────────────────────────
# Default internal Redis works; override only if using external Redis
# REDIS_URL=redis://your-redis:6379

# ── Object Storage (S3-compatible) ───────────────────────────────
# LangSmith stores large trace payloads here — required for production
LANGSMITH_S3_BUCKET=your-bucket-name
LANGSMITH_S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=your-secret

# For MinIO or R2, also set:
# LANGSMITH_S3_ENDPOINT=http://minio:9000

# ── Auth ─────────────────────────────────────────────────────────
# A secret used to sign JWTs — generate with: openssl rand -hex 32
AUTH_TOKEN_SECRET=generate-a-random-hex-string-here

# ── API key for your SDK clients ─────────────────────────────────
# This is the key you'll set in LANGCHAIN_API_KEY on your app servers
LANGSMITH_API_KEY=ls-selfhosted-mykey-here
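A common failure mode is starting the stack with a placeholder still in .env. This hypothetical validator (not part of LangSmith) parses .env-style text and flags missing or obviously unfinished values; the REQUIRED and PLACEHOLDERS lists are assumptions based on the variables above:

```python
REQUIRED = [
    "LANGSMITH_LICENSE_KEY", "POSTGRES_DB", "POSTGRES_USER",
    "POSTGRES_PASSWORD", "LANGSMITH_S3_BUCKET", "AUTH_TOKEN_SECRET",
    "LANGSMITH_API_KEY",
]
# Substrings suggesting an example value was never replaced
PLACEHOLDERS = ("changeme", "generate-a-random", "your-", "AKIA...")

def check_env(text: str) -> list:
    """Return a list of problems found in .env-style text."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    problems = []
    for key in REQUIRED:
        value = values.get(key, "")
        if not value:
            problems.append(f"{key} is missing or empty")
        elif any(p in value for p in PLACEHOLDERS):
            problems.append(f"{key} still looks like a placeholder: {value!r}")
    return problems

# Example: a password left at the shipped default is flagged
print(check_env("POSTGRES_PASSWORD=changeme-use-a-real-secret"))
```

Run it against your real file with `check_env(open(".env").read())` before `docker compose up`.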

If using MinIO locally instead of S3:

# Add a MinIO service to docker-compose.yaml or run it separately
docker run -d \
  --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=langsmith \
  -e MINIO_ROOT_PASSWORD=langsmith123 \
  minio/minio server /data --console-address ":9001"

# Then create the bucket
docker exec minio mc alias set local http://localhost:9000 langsmith langsmith123
docker exec minio mc mb local/langsmith-traces

Step 3: Start the Stack

# Pull images first (saves time vs pull-on-start)
docker compose pull

# Start all services detached
docker compose up -d

# Watch startup — backend and worker take 30–45s to initialize
docker compose logs -f backend worker

Expected output (after ~45 seconds):

backend-1  | INFO:     Application startup complete.
worker-1   | [INFO] Worker ready. Listening for jobs...

If it fails:

  • worker-1 exited with code 1 + redis error → Check REDIS_URL is reachable; by default it uses the internal redis service, which must start first. Run docker compose restart worker after Redis is healthy.
  • backend-1 loops with psycopg2.OperationalError → Postgres hasn't finished initializing. Wait 15 seconds and check with docker compose ps.
  • S3 NoSuchBucket → Bucket name in .env doesn't match what you created.
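Rather than restarting services by hand and re-checking, you can poll the backend until it answers. A small standard-library sketch; the /info path in the usage comment is an assumption, so substitute whatever health endpoint your LangSmith version exposes:

```python
import time
import urllib.error
import urllib.request

def wait_for_http(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until the server answers, or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval):
                return True
        except urllib.error.HTTPError:
            # The server responded, even if with an error status
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False

# Usage (the /info path is an assumption, adjust to your deployment):
#   wait_for_http("http://localhost:80/info", timeout=120)
```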

Step 4: Create Your First User

LangSmith self-hosted ships without a default admin account. Create one via the CLI:

docker compose exec backend python -m langsmith.cli create-user \
  --email admin@yourcompany.com \
  --password changeme \
  --role admin

Then open http://your-server:80 in a browser and log in.

If running on a remote server, forward port 80 locally for the initial setup:

ssh -L 8080:localhost:80 user@your-server
# Then open http://localhost:8080

Step 5: Point Your App at the Self-Hosted Instance

In every application that uses LangChain, LangGraph, or the LangSmith SDK, replace the default Cloud endpoint with your server.

# Set these environment variables on your app server
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT=http://your-server:80
export LANGCHAIN_API_KEY=ls-selfhosted-mykey-here   # matches LANGSMITH_API_KEY in .env
export LANGCHAIN_PROJECT=my-project

Or in Python, configure at runtime:

import os
from langsmith import Client

# Explicitly target self-hosted instance
client = Client(
    api_url="http://your-server:80",
    api_key=os.environ["LANGCHAIN_API_KEY"],
)
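Before running a real chain, it can be worth sanity-checking the client-side configuration, since a bad endpoint or an empty key surfaces later as silently missing traces or a 401. A hypothetical helper (the function name and checks are mine, not part of the SDK):

```python
import os
from urllib.parse import urlparse

def tracing_env_problems(env=None) -> list:
    """Return a list of likely misconfigurations in the tracing env vars."""
    if env is None:
        env = dict(os.environ)
    problems = []
    if env.get("LANGCHAIN_TRACING_V2", "").lower() != "true":
        problems.append("LANGCHAIN_TRACING_V2 is not 'true'; tracing stays off")
    endpoint = env.get("LANGCHAIN_ENDPOINT", "")
    parsed = urlparse(endpoint)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append(f"LANGCHAIN_ENDPOINT does not parse as a URL: {endpoint!r}")
    if not env.get("LANGCHAIN_API_KEY"):
        problems.append("LANGCHAIN_API_KEY is empty; the server will return 401")
    return problems

for problem in tracing_env_problems():
    print("WARN:", problem)
```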

Run a quick trace to confirm it works:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# With env vars set above, this trace goes to your self-hosted instance
llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
chain = prompt | llm

result = chain.invoke({"question": "What is LangSmith?"})
print(result.content)

Verification

# Check that every service is running
docker compose ps

You should see each service in the running (healthy) state:

NAME                           STATUS
langsmith-backend-1            running (healthy)
langsmith-frontend-1           running (healthy)
langsmith-platform-backend-1   running (healthy)
langsmith-worker-1             running (healthy)
langsmith-playground-1         running (healthy)
langsmith-postgres-1           running (healthy)
langsmith-redis-1              running (healthy)

Then open http://your-server:80, navigate to your project, and confirm the trace from Step 5 appears with full input/output recorded.
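You can also confirm traces programmatically instead of through the UI. A sketch built on the SDK's Client.list_runs, assuming your langsmith version supports its project_name and start_time filters; pass a real Client pointed at your server:

```python
from datetime import datetime, timedelta, timezone

def recent_run_count(client, project_name: str, minutes: int = 10) -> int:
    """Count runs logged to `project_name` within the last `minutes` minutes.

    `client` is a langsmith.Client pointed at the self-hosted instance, e.g.
    Client(api_url="http://your-server:80", api_key=...).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=minutes)
    runs = client.list_runs(project_name=project_name, start_time=cutoff)
    return sum(1 for _ in runs)
```

A count of zero right after the Step 5 test run means the trace never arrived, which usually points back at the endpoint or API key.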


Production Hardening

Before sending real traffic, add these three things:

1. Put a reverse proxy in front

Don't expose port 80 directly. Use Nginx or Caddy with TLS:

server {
    listen 443 ssl;
    server_name langsmith.yourcompany.com;

    ssl_certificate     /etc/letsencrypt/live/langsmith.yourcompany.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/langsmith.yourcompany.com/privkey.pem;

    location / {
        proxy_pass http://localhost:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

2. Persist Postgres data outside the container

By default, Postgres data lives in a named Docker volume. For production, bind-mount to a dedicated disk:

# In docker-compose.yaml, under the postgres service
volumes:
  - /data/langsmith/postgres:/var/lib/postgresql/data

3. Set up daily Postgres backups

# Add to crontab: daily dump at 2am
0 2 * * * docker exec langsmith-postgres-1 \
  pg_dump -U langsmith langsmith | gzip \
  > /backups/langsmith-$(date +\%F).sql.gz
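Cron jobs fail silently, so it is worth checking that dumps actually appear. A small monitoring sketch; the /backups path and the 26-hour threshold mirror the cron entry above and are otherwise assumptions:

```python
import glob
import os
import time
from typing import Optional

def newest_backup_age_hours(pattern: str) -> Optional[float]:
    """Age in hours of the newest file matching `pattern`, or None if none exist."""
    files = glob.glob(pattern)
    if not files:
        return None
    newest = max(files, key=os.path.getmtime)
    return (time.time() - os.path.getmtime(newest)) / 3600

# 26h leaves slack for a slow nightly dump
age = newest_backup_age_hours("/backups/langsmith-*.sql.gz")
if age is None:
    print("ALERT: no backups found")
elif age > 26:
    print(f"ALERT: newest backup is {age:.1f}h old")
else:
    print(f"OK: newest backup is {age:.1f}h old")
```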

What You Learned

  • LangSmith self-hosted is a five-service Docker stack — Postgres, Redis, and S3-compatible storage are all required before the services will stay healthy
  • The LANGCHAIN_ENDPOINT and LANGCHAIN_API_KEY env variables are all your app needs to switch from Cloud to self-hosted
  • A reverse proxy with TLS is essential before putting this in front of production traffic

Limitation: Self-hosted LangSmith does not currently include the LangSmith Hub (shared prompt registry) or automated model-based evaluators that call OpenAI — those features remain Cloud-only as of 2026.

Tested on LangSmith self-hosted v0.6.x, Docker Compose v2.24, Ubuntu 24.04