Problem: Your LangChain Agent Can't See Outside Its Context Window
LangChain agents are powerful — until they need real data. File contents, database records, running services, live APIs: none of it reaches your LLM unless you wire it up manually. Every new data source means a new custom tool, new parsing logic, new error handling.
Model Context Protocol (MCP) solves this with a standard interface. One integration pattern, any number of external servers.
You'll learn:
- How MCP servers expose tools and resources to LangChain agents
- How to connect an MCP server using langchain-mcp-adapters
- How to build an agent that queries a live filesystem and a REST API through MCP
Time: 20 min | Difficulty: Intermediate
Why This Happens
LangChain's tool system requires you to define each capability as a Python function with a schema. That works fine for 2–3 tools. At scale — or when tools change — you're rewriting glue code constantly.
MCP standardizes how servers expose tools, resources, and prompts over a transport layer (stdio or SSE). Your LangChain agent connects once to an MCP client, discovers all available tools automatically, and calls them through a unified interface.
What you get:
- Tool definitions auto-discovered at runtime — no manual schema writing
- Any MCP-compatible server works without code changes on the agent side
- Servers run as separate processes — isolation by default
How MCP Fits Into LangChain
LangChain Agent
│
▼
MCP Client (MultiServerMCPClient)
│
├──stdio──▶ filesystem-server (reads local files)
└──sse────▶ your-api-server (calls REST endpoints)
The agent sees MCP tools as ordinary LangChain BaseTool instances. Everything underneath — transport, serialization, process management — is handled by the adapter layer.
Solution
Step 1: Install Dependencies
# langchain-mcp-adapters bridges LangChain and MCP
pip install langchain-mcp-adapters langchain-openai langgraph
# The reference filesystem MCP server (from Anthropic)
npm install -g @modelcontextprotocol/server-filesystem
Verify the MCP server binary is available:
npx @modelcontextprotocol/server-filesystem
Expected output: a usage message showing that the server takes one or more allowed directories as positional arguments, followed by a non-zero exit.
Step 2: Connect to an MCP Server
MultiServerMCPClient manages connections and exposes tools as a list LangChain can consume.
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient

async def get_mcp_tools():
    client = MultiServerMCPClient(
        {
            "filesystem": {
                "command": "npx",
                "args": [
                    "@modelcontextprotocol/server-filesystem",
                    # Allowed directories are positional arguments;
                    # restrict access to your project directory only
                    "/home/user/project",
                ],
                "transport": "stdio",
            }
        }
    )
    # Returns list[BaseTool] — works with any LangChain agent
    tools = await client.get_tools()
    return tools, client
The transport: stdio option spawns the server as a subprocess. For remote servers, use transport: sse with a url key instead.
Step 3: Build the Agent
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
async def run_agent(query: str):
    tools, client = await get_mcp_tools()
    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    # create_react_agent accepts any list[BaseTool] — MCP tools included
    agent = create_react_agent(llm, tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": query}]}
    )
    return result["messages"][-1].content
In langchain-mcp-adapters 0.1.x, MultiServerMCPClient is not a context manager: each tool call opens a short-lived session to the server and cleans up its subprocess when the call completes. If you need a persistent session, for example to avoid per-call startup cost, use the client.session("filesystem") async context manager instead.
Step 4: Add a Second MCP Server (SSE Transport)
Real agents usually need more than one data source. Here's how to add a remote server alongside the local one:
client = MultiServerMCPClient(
    {
        "filesystem": {
            "command": "npx",
            "args": [
                "@modelcontextprotocol/server-filesystem",
                # Allowed directory as a positional argument
                "/home/user/project",
            ],
            "transport": "stdio",
        },
        "my-api": {
            # Remote MCP server served over SSE
            "url": "http://localhost:8000/mcp/sse",
            "transport": "sse",
        },
    }
)
Tool names come straight from each server (read_file, get_user, and so on); the adapter does not prefix them with the server key, so avoid combining servers that expose tools with identical names.
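As a defensive check, assuming your adapter version does not rename tools for you, you can detect duplicate names across servers before handing the combined tool list to an agent (a sketch; the function name is illustrative):

```python
from collections import Counter

def find_tool_name_collisions(tool_names: list[str]) -> list[str]:
    """Return names that appear more than once in the combined tool list."""
    counts = Counter(tool_names)
    return sorted(name for name, n in counts.items() if n > 1)
```

Run it on `[t.name for t in tools]` right after discovery and fail fast if it returns anything.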
Step 5: Run It
if __name__ == "__main__":
    answer = asyncio.run(
        run_agent("List the Python files in the project root and summarize what each one does.")
    )
    print(answer)
Verification
# Print discovered tools before running the agent
async def list_tools():
    tools, client = await get_mcp_tools()
    for tool in tools:
        print(f"{tool.name}: {tool.description}")

asyncio.run(list_tools())
You should see output like:
read_file: Read the complete contents of a file from the file system.
list_directory: Get a listing of all files and directories in a path.
search_files: Recursively search for files and directories matching a pattern.
If the list is empty, the MCP server failed to start. Run the npx command manually to check for errors.
If it fails:
- Error: Cannot find module '@modelcontextprotocol/server-filesystem' → run npm install -g again, with sudo on Linux
- Transport closed unexpectedly → the allowed-directory argument is missing or the path doesn't exist
- Tool call returned error: Permission denied → the allowed directory you pass must be an absolute path, not a relative one
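The last two failures can be caught before the server is even spawned. A small pre-flight check (a sketch; the function name is hypothetical):

```python
import os

def validate_allowed_dir(path: str) -> str:
    """Fail fast with a clear message before spawning the MCP server."""
    if not os.path.isabs(path):
        raise ValueError(f"Allowed directory must be an absolute path, got: {path}")
    if not os.path.isdir(path):
        raise ValueError(f"Allowed directory does not exist: {path}")
    return path
```

Call it on the path before building the MultiServerMCPClient config so misconfiguration surfaces as a clear Python error instead of a dead transport.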
Building Your Own MCP Server
When you need to expose a custom API or internal service, write a minimal MCP server with the Python SDK:
import asyncio

from mcp import types
from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("my-api-server")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_order_status",
            description="Fetch the current status of an order by ID.",
            inputSchema={
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "The order UUID"}
                },
                "required": ["order_id"],
            },
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_order_status":
        order_id = arguments["order_id"]
        # Replace with your actual API call
        status = fetch_order_from_db(order_id)
        return [types.TextContent(type="text", text=status)]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
Register it in MultiServerMCPClient the same way as any other stdio server:
"my-api": {
    "command": "python",
    "args": ["my_mcp_server.py"],
    "transport": "stdio",
}
Production Considerations
Server lifecycle: MCP servers are spawned per client instantiation. For long-running services (web apps, background workers), keep the client alive across requests rather than reconnecting on every call. Connection setup adds 200–500ms of latency.
Error isolation: If one MCP server crashes, MultiServerMCPClient raises on tool calls to that server but leaves other servers running. Wrap individual tool calls in try/except to degrade gracefully.
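A graceful-degradation wrapper might look like this (a sketch; it assumes the tool objects expose LangChain's async ainvoke method):

```python
async def safe_tool_call(tool, args: dict) -> str:
    """Invoke one MCP tool, converting transport failures into an error
    string so one crashed server doesn't abort the whole agent run."""
    try:
        return await tool.ainvoke(args)
    except Exception as exc:  # MCP transport/server errors surface here
        return f"[tool '{tool.name}' unavailable: {exc}]"
```

Returning an error string rather than raising lets the LLM see the failure and route around it, for example by trying a different tool.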
Security: stdio servers inherit the process environment. Don't pass secrets through environment variables unless you've audited what the server does with them. For untrusted third-party servers, use SSE transport behind a network boundary instead of running them as local subprocesses.
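One mitigation is an explicit environment allowlist. This sketch assumes the stdio connection config accepts an env mapping that is passed to the spawned subprocess in place of the full inherited environment; check your adapter version before relying on it:

```python
# Explicit environment allowlist for a stdio MCP server (sketch).
filesystem_config = {
    "command": "npx",
    "args": ["@modelcontextprotocol/server-filesystem", "/home/user/project"],
    "transport": "stdio",
    # Only these variables reach the subprocess; nothing else leaks through.
    "env": {"PATH": "/usr/bin:/bin"},
}
```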
What You Learned
- MCP gives LangChain agents a standard way to discover and call external tools without per-tool glue code
- MultiServerMCPClient handles transport, process management, and tool discovery
- The same agent code works with filesystem servers, REST APIs, databases — any MCP-compatible server
- Writing your own MCP server takes ~50 lines of Python
Limitation: MCP tool calls are async-only. Synchronous LangChain tool wrappers (Tool) won't work here — use create_react_agent from LangGraph or another async-native agent executor.
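If you do need a one-off call from synchronous code, you can bridge with asyncio.run (a sketch; safe only when no event loop is already running in the calling thread):

```python
import asyncio

def call_tool_sync(tool, args: dict):
    """One-off synchronous bridge to an async-only MCP tool."""
    return asyncio.run(tool.ainvoke(args))
```

Don't use this inside an async framework (FastAPI handlers, Jupyter), where a loop is already running; await the tool directly there.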
Tested on langchain-mcp-adapters 0.1.x, LangChain 0.3.x, LangGraph 0.2.x, Python 3.12, macOS & Ubuntu 24.04