Problem: Prompt Templates Scattered Across Your Codebase
Your prompt strings live in Python files, .env configs, Notion docs, and Slack messages. When a prompt changes, you update it in one place and break it in three others. There's no versioning, no sharing, and no way to test a prompt change without redeploying.
LangSmith Hub solves this. It's a hosted registry for prompt templates — push once, pull anywhere, version everything.
You'll learn:
- How to push a prompt template to LangSmith Hub from Python
- How to pull and use it in a LangChain chain
- How to version prompts and roll back when something breaks
Time: 20 min | Difficulty: Intermediate
Why Prompt Management Breaks Without a Registry
Prompts are code. Changing a system prompt changes model behavior — often as dramatically as changing the model itself. Without versioning:
- You can't reproduce a past run because you don't know which prompt it used
- Teammates overwrite working prompts without knowing what they changed
- A/B testing prompts requires deploying new code, not swapping a config
LangSmith Hub gives each prompt a URI (owner/prompt-name:commit-hash) so any code can reference an exact prompt version.
Symptoms this solves:
- "The bot started giving weird answers last Tuesday" — now you can check what changed
- Copy-pasted prompt strings diverging across three microservices
- No audit trail for prompt edits in production
Setup
Step 1: Install Dependencies
# LangSmith SDK includes Hub support from 0.1.x
pip install langsmith langchain-core langchain-openai
Verify the install:
python -c "import langsmith; print(langsmith.__version__)"
Expected: 0.2.x or higher
Step 2: Authenticate with LangSmith
export LANGSMITH_API_KEY="ls__your_key_here"
export LANGSMITH_TRACING=true # optional but recommended — traces all runs
export OPENAI_API_KEY="sk-..."
Get your API key from smith.langchain.com → Settings → API Keys.
To make these persistent, add them to your .env and load with python-dotenv:
from dotenv import load_dotenv
load_dotenv()
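Before running any of the scripts below, a quick sanity check that the required variables actually loaded can save a confusing auth error later. A minimal sketch (the `check_env` helper is an illustration, not part of any SDK; the variable names match the exports above):

```python
import os

REQUIRED = ["LANGSMITH_API_KEY", "OPENAI_API_KEY"]

def check_env(required=REQUIRED):
    """Return the names of any required env vars that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = check_env()
if missing:
    print(f"warning: missing environment variables: {', '.join(missing)}")
```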
Pushing a Prompt to Hub
Step 3: Define and Push Your First Prompt
from langsmith import Client
from langchain_core.prompts import ChatPromptTemplate
client = Client()
# Define the prompt — input variables become slots callers fill at runtime
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer. Answer in {response_style} style."),
    ("human", "{question}"),
])
# Push to Hub — creates the prompt if it doesn't exist, new commit if it does
url = client.push_prompt(
    "my-org/tech-writer",  # owner/prompt-name — owner must match your Hub username
    object=prompt,
)
print(url)
# https://smith.langchain.com/hub/my-org/tech-writer
If it fails:
- `403 Forbidden` → Your API key doesn't have write access. Check Settings → API Keys → permissions.
- `Owner not found` → Replace `my-org` with your exact LangSmith username or org slug.
Every push creates a new commit. The URL printed includes the commit hash — bookmark it to pin this exact version.
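If you want to capture that hash in a script rather than a bookmark, the final path segment can be pulled out with the stdlib. A sketch under an assumption: the `commit_hash_from_url` helper is hypothetical, and the URL shape shown in the docstring (hash as the last segment) is assumed, so verify it against the URL your push actually prints:

```python
from urllib.parse import urlparse

def commit_hash_from_url(url: str) -> str:
    """Grab the last path segment of a Hub prompt URL.

    Assumes the commit hash is the final segment, e.g.
    https://smith.langchain.com/hub/my-org/tech-writer/abc123de
    """
    return urlparse(url).path.rstrip("/").rsplit("/", 1)[-1]

print(commit_hash_from_url("https://smith.langchain.com/hub/my-org/tech-writer/abc123de"))
# abc123de
```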
Step 4: Add a README to the Prompt (Optional but Useful)
# Push with a description so teammates know what this prompt is for
url = client.push_prompt(
    "my-org/tech-writer",
    object=prompt,
    readme="""
## tech-writer
Formats answers in a specified style. Used by the docs assistant and the support bot.
**Input variables:**
- `question` — the user's question
- `response_style` — e.g., "bullet points", "one sentence", "step-by-step"
**Tested with:** gpt-4o, claude-3-5-sonnet-20241022
""",
)
The README renders on the Hub UI. Anyone who finds the prompt sees exactly what it does and what inputs it expects.
Pulling and Using a Prompt
Step 5: Pull the Prompt in Any Project
from langsmith import Client
from langchain_openai import ChatOpenAI
client = Client()
# Pull latest version
prompt = client.pull_prompt("my-org/tech-writer")
# Pin to a specific commit for reproducibility in production
# prompt = client.pull_prompt("my-org/tech-writer:abc123de")
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm
response = chain.invoke({
    "question": "What is a vector index?",
    "response_style": "one sentence",
})
print(response.content)
Expected output: A single-sentence answer about vector indexes.
To find the commit hash: open the prompt on smith.langchain.com/hub, click "Commits", copy the hash from the version you want to pin.
Step 6: Use Hub Prompts Without Instantiating a Client
The hub.pull() shortcut works when you just need the prompt object without instantiating a client yourself:
from langchain import hub
# Equivalent to client.pull_prompt() — uses LANGSMITH_API_KEY from env
prompt = hub.pull("my-org/tech-writer")
# Pull a pinned version
prompt_v1 = hub.pull("my-org/tech-writer:abc123de")
Use hub.pull() in application code and client.push_prompt() in management scripts. Keep the responsibilities separate.
Versioning and Rollback
Step 7: List Commits and Roll Back
# List all commits for a prompt
commits = client.list_prompt_commits("my-org/tech-writer")
for commit in commits:
    print(commit.commit_hash, commit.created_at, commit.message or "(no message)")
To roll back in production, update the hash in your pull call:
# Roll back to last known good version
prompt = hub.pull("my-org/tech-writer:PREVIOUS_HASH")
No redeployment needed — just update the hash string and restart the process. This is the core value: prompt changes are config changes, not code changes.
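One way to keep that change out of code entirely is to read the reference from an environment variable with a pinned default. A minimal sketch, assuming you control deployment config; the variable name `TECH_WRITER_PROMPT_REF` is an arbitrary choice for illustration:

```python
import os

# Pinned, known-good default; override via env var to roll forward or back
DEFAULT_REF = "my-org/tech-writer:abc123de"

def prompt_ref() -> str:
    """Resolve the prompt reference, preferring an env override."""
    return os.environ.get("TECH_WRITER_PROMPT_REF", DEFAULT_REF)

# Then in application code:
# prompt = hub.pull(prompt_ref())
```

Rolling back becomes `export TECH_WRITER_PROMPT_REF="my-org/tech-writer:LAST_GOOD_HASH"` plus a restart, with no source change at all.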
Sharing Prompts Across a Team
Public prompts (visible to everyone on Hub) work for open-source projects. For internal use, keep prompts private under your org slug and share access via LangSmith's team settings.
Org workflow:
# All team members pull from the same org slug
prompt = hub.pull("acme-corp/customer-support-v2")
Set LANGSMITH_HUB_API_URL if you're on a self-hosted LangSmith instance:
export LANGSMITH_HUB_API_URL="https://your-langsmith.internal/api"
Verification
Run this to confirm your push-pull roundtrip works end to end:
from langsmith import Client
from langchain_core.prompts import ChatPromptTemplate
client = Client()
test_prompt = ChatPromptTemplate.from_messages([
    ("human", "Say hello in {language}."),
])
client.push_prompt("my-org/hello-test", object=test_prompt)
pulled = client.pull_prompt("my-org/hello-test")
assert pulled.input_variables == ["language"], "Input variables did not round-trip correctly"
print("✅ Push/pull verified")
You should see: ✅ Push/pull verified
What You Learned
- `client.push_prompt()` creates a versioned commit on LangSmith Hub — every push is non-destructive
- `hub.pull("owner/name:hash")` pins to an exact version — use this in production, not `hub.pull("owner/name")`, which always fetches latest
- Prompt READMEs are worth writing — they save your teammates the same 10 minutes you spent figuring out the input variables
- Rolling back a broken prompt is a one-line config change, not a deploy
Limitation: LangSmith Hub stores ChatPromptTemplate and PromptTemplate objects. Custom prompt classes that don't serialize to LangChain's schema won't push cleanly — convert them first.
Tested on LangSmith 0.2.x, LangChain Core 0.3.x, Python 3.12, macOS and Ubuntu 24.04