AI Agents
Browse articles on AI Agents — tutorials, guides, and in-depth comparisons.
AI agents are LLM-powered systems that can plan, use tools, and take multi-step actions to complete complex tasks. In 2026, agent frameworks have matured enough for production use — here's how to build reliable ones.
Framework Comparison
| Framework | Best for | Language | Complexity |
|---|---|---|---|
| LangGraph | Complex stateful workflows, human-in-the-loop | Python | High |
| CrewAI | Role-based multi-agent teams | Python | Medium |
| AutoGen | Conversational multi-agent research | Python | Medium |
| n8n | Visual automation + AI nodes | Visual/JS | Low |
| Flowise | No-code RAG chatbots and pipelines | Visual | Low |
| Dify | Full-stack LLM app platform | Visual/API | Low |
Core Agent Architecture
Every agent needs four things:
- LLM backbone — the model that reasons and decides (GPT-4o, Claude 3.5, Llama 3.3)
- Tools — functions the agent can call (web search, code execution, database queries)
- Memory — short-term (conversation history) and long-term (vector store)
- Orchestration — the loop that decides when to call tools vs return a final answer
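The orchestration piece above can be sketched in a few lines of plain Python. This is a toy, not any framework's API: `decide()` is a stub standing in for a real LLM call, and `web_search` is an illustrative tool, but the loop structure — ask the model, either call a tool and record the observation or return the final answer, with a step cap — is the core of every ReAct-style agent.

```python
def web_search(query: str) -> str:
    """Toy tool: a real agent would call a search API here."""
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

def decide(history: list[str]) -> dict:
    """Stand-in for the LLM call: returns either a tool call or a final answer."""
    if not any(msg.startswith("observation:") for msg in history):
        return {"tool": "web_search", "args": {"query": history[0]}}
    return {"answer": f"Based on {len(history)} steps: done."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [task]
    for _ in range(max_steps):       # cap steps so a confused model can't loop forever
        action = decide(history)
        if "answer" in action:       # model chose to finish
            return action["answer"]
        tool = TOOLS[action["tool"]]  # model chose a tool: run it, record the result
        history.append("observation: " + tool(**action["args"]))
    return "step limit reached"

print(run_agent("latest on LangGraph"))  # prints: Based on 2 steps: done.
```

Frameworks like LangGraph and CrewAI replace `decide()` with a real model call and manage `history` as typed state, but the control flow is the same.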
Quick Start with LangGraph
```python
# Requires OPENAI_API_KEY and TAVILY_API_KEY in the environment
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults

llm = ChatOpenAI(model="gpt-4o")
tools = [TavilySearchResults(max_results=3)]  # web search tool

# create_react_agent wires up the think → act → observe loop for you
agent = create_react_agent(llm, tools)

result = agent.invoke({"messages": [("user", "What's the latest on LangGraph?")]})
print(result["messages"][-1].content)
```
Learning Path
- Single-tool ReAct agent — understand the think → act → observe loop
- Multi-tool agent — add web search, code execution, database access
- Memory patterns — conversation buffer, vector store for long-term recall
- Multi-agent systems — supervisor + worker pattern with CrewAI or LangGraph
- Human-in-the-loop — approval gates, interrupt and resume
- Production — streaming, error recovery, observability with LangSmith
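The "memory patterns" step above can be prototyped without any framework. Here is a minimal sketch of the short-term half: a sliding-window conversation buffer that always keeps the system prompt plus the last N turns (the class name `ConversationBuffer` and the message-dict shape are illustrative, though the dicts match the common OpenAI-style chat format). Long-term recall would add a vector store on top.

```python
from collections import deque

class ConversationBuffer:
    """Short-term memory: system prompt + a fixed window of recent turns."""

    def __init__(self, system: str, max_turns: int = 4):
        self.system = system
        self.turns = deque(maxlen=max_turns)  # oldest turns fall off automatically

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def messages(self) -> list[dict]:
        # What you'd actually pass to the LLM on each call
        return [{"role": "system", "content": self.system}, *self.turns]

buf = ConversationBuffer("You are a helpful agent.", max_turns=2)
for i in range(5):
    buf.add("user", f"message {i}")

print(len(buf.messages()))  # prints 3: system prompt + the last 2 turns
```

A windowed buffer like this keeps token costs bounded on long-running agents; the trade-off is that anything outside the window must be recovered from long-term storage.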
- LangGraph vs AutoGen: Multi-Agent Framework for Developers 2026
- LangGraph Tool Node: Integrate Custom Functions Into Agents
- LangGraph Time Travel: Replay and Branch Agent History
- LangGraph Subgraphs: Build Composable Agent Architecture 2026
- LangGraph Studio: Visual Debugger for Agent Graphs
- LangGraph Streaming: Real-Time Token Output to Frontend
- LangGraph State Machine: Complex Branching Logic Guide
- LangGraph ReAct Agent: Tool-Calling from Scratch
- LangGraph Persistence: Checkpointing Long-Running Workflows
- LangGraph Parallel Execution: Fan-Out and Fan-In Patterns
- LangGraph Memory: Short-Term and Long-Term Storage Patterns
- LangGraph Interrupt: Pause and Resume Agent Execution
- LangGraph Human-in-the-Loop: Approval Gates in AI Workflows
- LangGraph Cloud: Managed Deployment for Agent Workflows
- Deploy LangGraph with LangServe and Docker: Production Setup 2026
- LangGraph vs CrewAI: Multi-Agent Performance and Cost in Production 2026
- CrewAI with RAG: Add a Knowledge Base to Your Agent Teams
- CrewAI Replay: Resume Failed Crew Runs Without Restarting
- CrewAI Output Pydantic: Structured Agent Results in 2026
- CrewAI Kickoff Async: Non-Blocking Agent Execution Guide
- CrewAI Hierarchical Process: Manager and Worker Agents
- CrewAI Flows: Event-Driven Agent Orchestration Tutorial 2026
- CrewAI Enterprise: Team Collaboration and Access Control Guide
- CrewAI Custom Tools: Connect Agents to External APIs
- CrewAI Code Interpreter: Execute Python in Agent Workflows
- CrewAI 1.10.1 New Features: What Changed in 2026