Framework Migration Guide
Common migration paths between agent frameworks with side-by-side code examples. Each section shows the before and after pattern, migration effort, and specific steps.
LangChain→LangGraph
LangChain's sequential chains become hard to manage for multi-step, conditional, or cyclical workflows. LangGraph gives you explicit state management and conditional branching.
Before — LangChain
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({"input": "What is the weather in Paris?"})
After — LangGraph
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o")
graph = create_react_agent(llm, tools=tools)
# Invoke as a graph — returns full state
result = graph.invoke({"messages": [("human", "What is the weather in Paris?")]})
print(result["messages"][-1].content)
Migration steps
- Tools transfer directly — LangGraph uses the same Tool interface.
- Replace AgentExecutor.invoke() with graph.invoke() and change the input key from 'input' to 'messages'.
- Add checkpointing with MemorySaver if you need state persistence across calls.
- For complex workflows, define your own StateGraph instead of using create_react_agent.
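The checkpointing step can be sketched as follows — a minimal example, assuming `tools` is the same tool list used above and that your installed LangGraph version exposes `MemorySaver` from `langgraph.checkpoint.memory`:

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
checkpointer = MemorySaver()  # in-memory; swap for a DB-backed saver in production
graph = create_react_agent(llm, tools=tools, checkpointer=checkpointer)

# Each thread_id keeps its own message history across invocations.
config = {"configurable": {"thread_id": "user-123"}}
graph.invoke({"messages": [("human", "What is the weather in Paris?")]}, config)
# The follow-up question sees the earlier turn because state was checkpointed.
graph.invoke({"messages": [("human", "And tomorrow?")]}, config)
```

The `thread_id` is what scopes persistence: two calls with the same id share history, two different ids are isolated conversations.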
AutoGen→OpenAI Agents SDK
Low effort
AutoGen's conversation-based orchestration can be verbose for simpler single-agent tasks. The OpenAI Agents SDK offers a leaner API with native handoffs.
Before — AutoGen
import autogen
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"model": "gpt-4o", "api_key": "..."},
    system_message="You are a helpful assistant.",
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config=False,
)
user_proxy.initiate_chat(assistant, message="Summarize recent AI news.")
After — OpenAI Agents SDK
from agents import Agent, Runner
agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o",
    tools=tools,
)
result = Runner.run_sync(agent, "Summarize recent AI news.")
print(result.final_output)
Migration steps
- Remove UserProxyAgent — the Runner handles orchestration.
- Redefine tools using the @function_tool decorator instead of AutoGen's function_map.
- For multi-agent, replace GroupChatManager with Agent handoffs.
- Tracing is built-in — no need for a separate logging setup.
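A hedged sketch of the tool and handoff steps together; `fetch_headlines` is a hypothetical stand-in for whatever function your AutoGen function_map pointed at:

```python
from agents import Agent, Runner, function_tool

# Ported from an AutoGen function_map entry; fetch_headlines is your
# existing implementation (hypothetical name here).
@function_tool
def get_news(topic: str) -> str:
    """Fetch recent headlines for a topic."""
    return fetch_headlines(topic)

# Handoffs replace GroupChatManager: the triage agent can delegate.
summarizer = Agent(
    name="summarizer",
    instructions="Summarize articles concisely.",
)
triage = Agent(
    name="triage",
    instructions="Answer news questions; hand off to the summarizer for write-ups.",
    tools=[get_news],
    handoffs=[summarizer],
)
result = Runner.run_sync(triage, "Summarize recent AI news.")
print(result.final_output)
```

The handoff appears to the model as an ordinary tool call, so the triage agent decides at runtime whether to delegate.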
LangChain→CrewAI
If your workflow maps naturally to specialized roles (researcher, writer, reviewer), CrewAI's role-playing model is more intuitive than LangChain's chain composition.
Before — LangChain
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import tool
@tool
def search(query: str) -> str:
    """Search the web."""
    return web_search(query)
agent1 = create_openai_tools_agent(llm, [search], research_prompt)
agent2 = create_openai_tools_agent(llm, [], write_prompt)
executor1 = AgentExecutor(agent=agent1, tools=[search])
research = executor1.invoke({"input": "AI trends 2026"})
# Pass research to agent2 manually...
After — CrewAI
from crewai import Agent, Task, Crew
researcher = Agent(
    role="Research Analyst",
    goal="Find accurate and current information.",
    backstory="Expert at web research and synthesis.",
    tools=[search_tool],
)
writer = Agent(
    role="Technical Writer",
    goal="Write clear summaries from research.",
    backstory="Specialist in making complex topics accessible.",
)
task1 = Task(
    description="Research AI trends in 2026.",
    expected_output="A bullet list of key trends.",
    agent=researcher,
)
task2 = Task(
    description="Write a summary from the research.",
    expected_output="A short prose summary.",
    agent=writer,
)
crew = Crew(agents=[researcher, writer], tasks=[task1, task2])
result = crew.kickoff()
Migration steps
- Convert LangChain tools to CrewAI tools using the @tool decorator or BaseTool.
- Map each AgentExecutor to a CrewAI Agent with a role, goal, and backstory.
- Define the workflow as sequential Tasks rather than chained executors.
- CrewAI handles inter-agent context passing automatically within a Crew.
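The tool-conversion step might look like the sketch below; it assumes a recent CrewAI release where the decorator lives in `crewai.tools`, and `web_search` is the same backend function used in the LangChain example:

```python
from crewai.tools import tool

# CrewAI port of the LangChain @tool above; the string becomes the
# tool's display name shown to agents.
@tool("Web Search")
def search_tool(query: str) -> str:
    """Search the web and return raw results."""
    return web_search(query)  # unchanged backend function
```

Pass `search_tool` into the Agent's `tools=[...]` list exactly as in the crew example above; for tools with state or configuration, subclass `BaseTool` instead.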
LangChain→PydanticAI
Low effort
If you are frustrated by LangChain's dynamic typing and want validated, type-safe structured outputs, PydanticAI offers a simpler model with native Pydantic integration.
Before — LangChain
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel
class Summary(BaseModel):
    title: str
    key_points: list[str]
llm = ChatOpenAI(model="gpt-4o")
parser = PydanticOutputParser(pydantic_object=Summary)
chain = prompt | llm | parser
result = chain.invoke({"input": "Explain quantum computing."})
After — PydanticAI
from pydantic import BaseModel
from pydantic_ai import Agent
class Summary(BaseModel):
    title: str
    key_points: list[str]
agent = Agent("openai:gpt-4o", result_type=Summary)
result = agent.run_sync("Explain quantum computing.")
print(result.data.title)       # fully typed
print(result.data.key_points)  # list[str]
Migration steps
- Pass the Pydantic model as result_type (renamed output_type, with result.output replacing result.data, in newer PydanticAI releases) — no parser chain needed.
- Tool definitions move to @agent.tool decorated functions.
- Dependencies (DB connections, API clients) use the dependency injection system instead of closures.
- PydanticAI supports the same LLM providers via its model string syntax.
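The tool and dependency-injection steps can be sketched together — `web_search` is a hypothetical helper, and `Deps` is an illustrative dependency container replacing the closures you would have used in LangChain:

```python
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

# Illustrative dependency container; real code might hold a DB pool
# or an HTTP client instead of a bare key.
@dataclass
class Deps:
    api_key: str

agent = Agent("openai:gpt-4o", deps_type=Deps)

@agent.tool
def search(ctx: RunContext[Deps], query: str) -> str:
    """Search the web using the injected API key."""
    # ctx.deps carries whatever you passed to run_sync below.
    return web_search(query, api_key=ctx.deps.api_key)

result = agent.run_sync("Explain quantum computing.", deps=Deps(api_key="..."))
```

Because dependencies arrive per-run rather than at construction time, the same agent can serve many users with different credentials or connections.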
Vercel AI SDK→Mastra
Medium effort
If you have outgrown Vercel AI SDK's single-agent streaming pattern and need built-in workflows, RAG pipelines, or structured agent orchestration while staying in TypeScript, Mastra gives you a more complete agent framework.
Before — Vercel AI SDK
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { text, toolCalls } = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  prompt: "What is the weather in Paris?",
  tools: {
    weather: {
      description: "Get weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => getWeather(city),
    },
  },
});
After — Mastra
import { Agent } from "@mastra/core";
import { z } from "zod";

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: { provider: "OPEN_AI", name: "gpt-4o" },
  tools: {
    weather: {
      description: "Get weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => getWeather(city),
    },
  },
});
const result = await agent.generate("What is the weather in Paris?");
Migration steps
- Tool definitions are structurally similar — most tools port with minimal changes.
- Replace generateText/streamText calls with agent.generate() or agent.stream().
- Add workflows for multi-step orchestration that previously required manual chaining.
- Mastra includes built-in RAG and evals that Vercel AI SDK does not provide.
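The streaming replacement might look like the sketch below; it assumes `agent.stream()` returns a result whose `textStream` is an async iterable of text chunks, mirroring the AI SDK's shape — check your Mastra version's API before relying on this:

```typescript
import { Agent } from "@mastra/core";

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: { provider: "OPEN_AI", name: "gpt-4o" },
});

// Replaces streamText: consume chunks as they arrive.
const stream = await agent.stream("What is the weather in Paris?");
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```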
LangChain (TS)→Vercel AI SDK
Low effort
If you are building a Next.js or React application and only need streaming tool-calling agents without LangChain's full abstraction layer, the Vercel AI SDK offers a lighter, more idiomatic TypeScript experience.
Before — LangChain (TS)
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4o" });
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);
const agent = await createToolCallingAgent({ llm, tools, prompt });
const executor = new AgentExecutor({ agent, tools });
const result = await executor.invoke({ input: "Summarize AI news." });
After — Vercel AI SDK
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { text } = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  prompt: "Summarize AI news.",
  tools: {
    search: {
      description: "Search the web",
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => webSearch(query),
    },
  },
  maxSteps: 5,
});
Migration steps
- Replace AgentExecutor with generateText or streamText — no chain construction needed.
- Tools use Zod schemas instead of LangChain's StructuredTool class.
- Use maxSteps to control agent loops instead of maxIterations.
- For streaming to React, use the useChat hook instead of manual callback handlers.
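The streaming-to-React step can be sketched as a Next.js route handler — an AI SDK 4-style API is assumed here (older releases named the response helper differently):

```typescript
// app/api/chat/route.ts — server side of a useChat-compatible endpoint.
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  // streamText starts the model call and returns immediately;
  // the response streams tokens to the client as they arrive.
  const result = streamText({
    model: openai("gpt-4o"),
    system: "You are a helpful assistant.",
    messages,
  });
  return result.toDataStreamResponse();
}
```

On the client, `const { messages, input, handleInputChange, handleSubmit } = useChat()` from `@ai-sdk/react` posts to this route and keeps the message list in React state, replacing the manual callback handlers you wired up for LangChain streaming.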