# LangChain
Browse articles on LangChain — tutorials, guides, and in-depth comparisons.
LangChain is one of the most widely used Python frameworks for building LLM-powered applications. Its modular abstractions — chains, retrievers, agents, and memory — let you compose complex AI workflows without reinventing common patterns.
## LangChain Ecosystem
| Tool | Role | When to use |
|---|---|---|
| LangChain | Core framework, chains, agents | Connecting LLMs to data and tools |
| LangGraph | Stateful multi-agent workflows | Complex agent orchestration |
| LangSmith | Observability, eval, monitoring | Debugging and testing in production |
| LlamaIndex | Data indexing, RAG patterns | Document-heavy applications |
| LangServe | Serve chains as REST API | Deploying chains as microservices |
## Quick Start — RAG Pipeline
```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import PGVector
from langchain.chains import RetrievalQA

# Embedding model
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Vector store (PostgreSQL + pgvector)
vectorstore = PGVector(
    connection_string="postgresql://user:pass@localhost/db",
    embedding_function=embeddings,
    collection_name="docs",
)

# RAG chain: retrieve the top 5 chunks, then answer with GPT-4o
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 5}),
)

# RetrievalQA expects its input under the "query" key and
# returns a dict with the answer under "result"
result = qa_chain.invoke({"query": "What is LangGraph used for?"})
answer = result["result"]
```
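The quick start assumes documents are already indexed. Ingestion normally runs a text splitter over raw documents before embedding them. As a rough illustration of the underlying idea — plain Python, not LangChain's actual splitter — a fixed-size splitter with overlap looks like this:

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap, so context that
    straddles a chunk boundary appears in both neighboring chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "LangGraph builds stateful multi-agent workflows on top of LangChain. " * 20
chunks = split_text(doc, chunk_size=100, overlap=20)
```

In LangChain this role is typically played by `RecursiveCharacterTextSplitter`, which tries semantic separators (paragraphs, sentences) before falling back to fixed sizes; the resulting chunks are embedded and written to the store with `vectorstore.add_documents(...)`.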
## LCEL — LangChain Expression Language
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# Compose chains with the | operator
chain = (
    ChatPromptTemplate.from_template("Summarize: {text}")
    | ChatOpenAI(model="gpt-4o")
    | StrOutputParser()
)

result = chain.invoke({"text": "Your document here..."})
```
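The `|` operator works because every LCEL component is a `Runnable` that overloads Python's `__or__` to produce a composed sequence. A toy re-implementation — hypothetical names, not LangChain source — shows the mechanics:

```python
class Runnable:
    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other: "Runnable") -> "Runnable":
        # left | right -> a new Runnable piping left's output into right
        return _Sequence(self, other)

class _Sequence(Runnable):
    def __init__(self, left: Runnable, right: Runnable):
        self.left, self.right = left, right

    def invoke(self, value):
        return self.right.invoke(self.left.invoke(value))

class Lambda(Runnable):
    """Wrap a plain function as a pipeline step."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

chain = Lambda(lambda d: f"Summarize: {d['text']}") | Lambda(str.upper)
chain.invoke({"text": "hello"})  # -> "SUMMARIZE: HELLO"
```

In real LangChain, `RunnableLambda` plays this wrapping role, and anything composed this way also gets `.stream()`, `.batch()`, and async variants for free.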
## Learning Path
- LCEL basics — prompt templates, model invocation, output parsers
- Retrieval chains — document loaders, text splitters, vector stores
- Agents — tool definitions, ReAct agent, tool calling
- LangGraph — stateful workflows, human-in-the-loop, multi-agent
- LangSmith — tracing, evaluation datasets, CI/CD testing
- Production — streaming, async, caching, error handling
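Of the production topics above, response caching is often the cheapest win: identical prompts should not trigger repeat model calls. LangChain ships ready-made caches (e.g. `InMemoryCache` installed via `set_llm_cache`); the wrapper below is a hypothetical stand-in that sketches the same idea:

```python
import hashlib

class CachedLLM:
    """Wrap any callable LLM so repeated prompts hit a dict, not the API."""
    def __init__(self, llm):
        self.llm = llm
        self.cache: dict[str, str] = {}
        self.calls = 0  # count of real model invocations

    def invoke(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.llm(prompt)
        return self.cache[key]

# Stub model for demonstration; swap in a real client in practice.
llm = CachedLLM(lambda p: f"echo: {p}")
llm.invoke("What is LCEL?")
llm.invoke("What is LCEL?")  # second call served from cache
```

Exact-match caching like this only helps with repeated prompts; semantic caches, which key on embedding similarity, trade some precision for a much higher hit rate.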
## Articles
- How to Implement Custom Memory Systems in LangChain for Advanced AI Applications
- How to Implement Async Processing in LangChain Applications
- How to Build Custom LangChain Agents for Specific Domains: Complete Developer Guide
- Discord Bot Development with LangChain and Python: Complete Tutorial for AI-Powered Bots
- LangChain Performance Optimization: Reduce Latency by 60% with These 8 Proven Techniques
- LlamaIndex vs LangChain: Which Framework to Choose in 2025?
- LangChain vs LlamaIndex 2025: Complete Performance Comparison for Building AI Applications
- LangChain 0.3: Reducing sprintf Overhead in AI Agent Prompt Engineering
- GDPR 2025 Compliance for AI Agents: How to Audit Data Privacy in LangChain-Powered Workflows
- AI-Native Apps: Build with OpenAI Codex 2 and LangChain 5
- Unlock the Power of LLMs: A Step-by-Step Guide to Building with LangChain