# CrewAI and AutoGen

## Overview

CrewAI and AutoGen are two frameworks focused on multi-agent collaboration. CrewAI adopts a role-playing paradigm, while AutoGen uses a conversable-agent paradigm. Each has its strengths and suits different scenarios.
## CrewAI

### Core Philosophy

CrewAI's design is inspired by real-world team collaboration: each agent plays a specific role, and the agents work together to complete tasks.
```mermaid
graph TD
    subgraph CrewAI
        A[Crew] --> B[Agent 1<br/>Researcher]
        A --> C[Agent 2<br/>Writer]
        A --> D[Agent 3<br/>Reviewer]
        B --> E[Task 1<br/>Research Topic]
        C --> F[Task 2<br/>Write Article]
        D --> G[Task 3<br/>Review Quality]
        E --> F
        F --> G
    end
    B --> B1[Tools: Search/RAG]
    C --> C1[Tools: Writing Aids]
    D --> D1[Tools: Quality Check]
```
### Agent Definition
```python
from crewai import Agent, Task, Crew, Process

# search_tool, scrape_tool, and fact_check_tool are assumed to be
# defined elsewhere (e.g. as crewai_tools instances).
researcher = Agent(
    role="Senior Research Analyst",
    goal="Discover and analyze the latest trends in AI",
    backstory="""You are a senior research analyst at a leading
    tech think tank. You have a knack for identifying emerging
    trends and extracting actionable insights from data.""",
    tools=[search_tool, scrape_tool],
    llm="gpt-4o",
    verbose=True,
    allow_delegation=True,  # may hand subtasks off to other agents
)

writer = Agent(
    role="Technical Content Writer",
    goal="Create compelling and accurate technical content",
    backstory="""You are a skilled technical writer known for
    making complex topics accessible. You excel at transforming
    research into engaging articles.""",
    tools=[],
    llm="gpt-4o",
    verbose=True,
)

reviewer = Agent(
    role="Quality Assurance Editor",
    goal="Ensure content accuracy and quality",
    backstory="""You are a meticulous editor with expertise in
    technical content. You ensure factual accuracy, readability,
    and adherence to style guidelines.""",
    tools=[fact_check_tool],
    llm="gpt-4o",
    verbose=True,
)
```
### Task Definition
```python
research_task = Task(
    description="""Research the latest developments in AI agents.
    Focus on:
    1. New frameworks and tools
    2. Research breakthroughs
    3. Industry adoption trends
    Provide a comprehensive research report with sources.""",
    expected_output="A detailed research report with key findings and sources",
    agent=researcher,
)

writing_task = Task(
    description="""Based on the research report, write a 1500-word
    article about AI agents. The article should be:
    - Engaging and accessible
    - Technically accurate
    - Well-structured with clear sections""",
    expected_output="A polished 1500-word article in markdown format",
    agent=writer,
    context=[research_task],  # receives the research task's output
)

review_task = Task(
    description="""Review the article for:
    1. Factual accuracy
    2. Readability and flow
    3. Technical depth
    4. Grammar and style
    Provide specific feedback and a final revised version.""",
    expected_output="Reviewed and revised article with feedback notes",
    agent=reviewer,
    context=[writing_task],
)
```
### Process Types
```python
# Sequential execution: tasks run in order, each feeding the next
crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, writing_task, review_task],
    process=Process.sequential,
    verbose=True,
)

# Hierarchical execution (requires a manager agent or manager LLM)
crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, writing_task, review_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",
    verbose=True,
)

result = crew.kickoff()
```
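The essence of sequential execution can be illustrated with a framework-agnostic sketch: each task's output becomes the next task's context, much as `context=[research_task]` does above. Everything below is illustrative stub code, not CrewAI internals.

```python
from typing import Callable

def run_sequential(tasks: list[tuple[str, Callable[[str], str]]]) -> str:
    """Run tasks in order, feeding each output into the next as context."""
    context = ""
    for name, task in tasks:
        context = task(context)  # output becomes the next task's context
    return context

# Stub "agents": each just transforms the accumulated context.
pipeline = [
    ("research", lambda ctx: "findings: agents are trending"),
    ("write", lambda ctx: f"article based on [{ctx}]"),
    ("review", lambda ctx: f"approved: {ctx}"),
]

result = run_sequential(pipeline)
print(result)  # approved: article based on [findings: agents are trending]
```

A hierarchical process differs in that a manager decides the order and delegation at runtime instead of following a fixed pipeline.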
### Advanced Features

#### Memory System
```python
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
```

With `memory=True`, CrewAI provides short-term memory (current task), long-term memory (across tasks), and entity memory (knowledge about specific entities).
## AutoGen

### Core Philosophy

AutoGen, developed by Microsoft, is based on the conversable-agent paradigm: agents collaborate by exchanging messages in conversations.
```mermaid
graph TD
    subgraph AutoGen
        A[UserProxy] <-->|Dialogue| B[Assistant]
        B <-->|Dialogue| C[Coder]
        A <-->|Dialogue| C
        D[GroupChat Manager]
        D --> A
        D --> B
        D --> C
    end
```
### ConversableAgent
```python
import autogen

config_list = [{"model": "gpt-4o", "api_key": "..."}]
llm_config = {"config_list": config_list, "temperature": 0}

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="""You are a helpful AI assistant.
    Solve tasks using your coding and language skills.
    When you need to execute code, write it in python code blocks.""",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # fully automated, no human turns
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,
    },
)

user_proxy.initiate_chat(
    assistant,
    message="Write a Python script to analyze the top 10 most popular programming languages.",
)
```
### GroupChat
```python
coder = autogen.AssistantAgent(
    name="Coder",
    system_message="You are a Python expert. Write clean, efficient code.",
    llm_config=llm_config,
)

reviewer = autogen.AssistantAgent(
    name="Reviewer",
    system_message="""You review code for bugs, efficiency, and best practices.
    Be specific in your feedback.""",
    llm_config=llm_config,
)

tester = autogen.AssistantAgent(
    name="Tester",
    system_message="You write comprehensive test cases for the given code.",
    llm_config=llm_config,
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, coder, reviewer, tester],
    messages=[],
    max_round=12,
    speaker_selection_method="auto",  # the manager's LLM picks the next speaker
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
)

user_proxy.initiate_chat(
    manager,
    message="Create a REST API for a todo list application with proper error handling.",
)
```
### Code Execution

A core strength of AutoGen is executing generated code safely:
```python
user_proxy = autogen.UserProxyAgent(
    name="executor",
    code_execution_config={
        "work_dir": "workspace",
        "use_docker": True,    # run generated code inside a Docker container
        "timeout": 60,         # abort executions that exceed 60 seconds
        "last_n_messages": 3,  # scan the last 3 messages for code blocks
    },
)
```

The typical workflow:

1. The Assistant generates code.
2. The UserProxy executes it in Docker.
3. Execution results are fed back to the Assistant.
4. The Assistant revises or continues based on the results.
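This generate-execute-feedback loop can be sketched without AutoGen by putting a stub in place of the LLM. Everything here is illustrative: a real system executes the code in Docker or a subprocess with a timeout, not via `exec`.

```python
import io
import contextlib

def run_code(code: str) -> str:
    """Execute a code string and capture stdout (toy sandbox only)."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})
        return buffer.getvalue()
    except Exception as exc:
        return f"ERROR: {exc}"

def stub_llm(feedback: str) -> str:
    """Stand-in for the Assistant: fix the code if the last run errored."""
    if "ERROR" in feedback:
        return "print(sum(range(10)))"   # corrected version
    return "print(sum(range(10))"        # first draft has a syntax error

# Generate -> execute -> feed results back -> regenerate, until success.
feedback = ""
for _ in range(3):
    code = stub_llm(feedback)
    feedback = run_code(code)
    if not feedback.startswith("ERROR"):
        break
print(feedback.strip())  # 45
```

The first draft fails to parse, the error message flows back to the stub "LLM", and the corrected second attempt succeeds, which is exactly the loop steps 1-4 above describe.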
### Custom Speaker Selection
```python
def custom_speaker_selection(last_speaker, groupchat):
    """Deterministic speaking order: Coder -> Reviewer -> Tester -> done."""
    messages = groupchat.messages
    if last_speaker.name == "Coder":
        return reviewer
    elif last_speaker.name == "Reviewer":
        # The reviewer approves with "LGTM"; otherwise send back to the coder.
        if "LGTM" in messages[-1]["content"]:
            return tester
        return coder
    elif last_speaker.name == "Tester":
        if "all tests passed" in messages[-1]["content"].lower():
            return user_proxy
        return coder
    return coder

groupchat = autogen.GroupChat(
    agents=[user_proxy, coder, reviewer, tester],
    messages=[],
    speaker_selection_method=custom_speaker_selection,
)
```
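One benefit of deterministic routing is that it can be verified without running any agents, by modeling it as a pure function over (speaker, last message) pairs. The function below mirrors the routing rules with hypothetical string names instead of agent objects.

```python
def next_speaker(last_speaker: str, last_message: str) -> str:
    """Pure-function version of the review-loop routing rules."""
    if last_speaker == "Coder":
        return "Reviewer"
    if last_speaker == "Reviewer":
        return "Tester" if "LGTM" in last_message else "Coder"
    if last_speaker == "Tester":
        passed = "all tests passed" in last_message.lower()
        return "UserProxy" if passed else "Coder"
    return "Coder"

assert next_speaker("Coder", "here is the code") == "Reviewer"
assert next_speaker("Reviewer", "LGTM, ship it") == "Tester"
assert next_speaker("Reviewer", "needs work") == "Coder"
assert next_speaker("Tester", "All tests passed!") == "UserProxy"
```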
## CrewAI vs AutoGen Comparison
| Dimension | CrewAI | AutoGen |
|---|---|---|
| Paradigm | Role-playing + task-driven | Conversation-driven |
| Definition | Agent + Task + Crew | Agent + Chat |
| Flow control | Sequential / hierarchical | Free conversation / custom |
| Code execution | Via tools | Native Docker isolation |
| Memory | Built-in multi-layer memory | Via extensions |
| Learning curve | Gentle | Medium |
| Flexibility | Medium | High |
| Best for | Workflows with clear role division | Exploratory / research tasks |
| Community | Rapidly growing | Stable |
| Maintainer | CrewAI Inc. | Microsoft Research |
## When to Choose CrewAI
- Roles and tasks are clearly defined
- Predictable workflow is needed
- Team collaboration analogy is intuitive
- Rapid prototyping
## When to Choose AutoGen
- Flexible conversational interaction is needed
- Involves code generation and execution
- Exploratory / research tasks
- Need human-in-the-loop
## Summary
| Need | Choice | Reason |
|---|---|---|
| Structured team workflow | CrewAI | Clear role and task definition |
| Flexible multi-agent dialogue | AutoGen | Conversational paradigm is more flexible |
| Rapid prototyping | CrewAI | Simpler API |
| Code generation + execution | AutoGen | Docker-isolated code execution |
| Teachable agents | AutoGen | Built-in Teachable Agent |
| Production deployment | CrewAI | More predictable behavior |
Both frameworks are evolving rapidly. Choose based on your specific requirements, and keep an eye on the latest developments in each community.