Lesson 14: Teams
Combine multiple specialized agents into a coordinated team that handles complex tasks.
- Why Teams: When a single agent isn't enough.
- Specialist Agents: Focused expertise, clear roles.
- Team Leader: Routing and coordination logic.
- Context Sharing: How agents pass information between themselves.
- Delegation Patterns: Router, parallel, and sequential workflows.
Why Teams?
A single agent with many tools and instructions gets unwieldy. Instructions conflict, tool lists grow long, and the agent struggles to decide what to do.
Teams solve this by decomposition:
- Each agent specializes in one thing
- A leader decides who handles what
- Specialists can build on each other's work
Think of it like a company: you don't ask the CEO to write code, design the logo, and handle payroll. You have specialists, and management coordinates.
The Team Pattern
User: "Research quantum computing and write a summary"
                  │
                  ▼
           ┌─────────────┐
           │ Team Leader │ ← Analyzes request, plans delegation
           └─────────────┘
                  │
         ┌────────┴────────┐
         ▼                 ▼
   ┌──────────┐       ┌─────────┐
   │Researcher│  ──→  │ Writer  │ ← Sequential: Writer sees Researcher's output
   └──────────┘       └─────────┘
         │                 │
         └────────┬────────┘
                  │
                  ▼
            Final Response
The leader doesn't do the work—it orchestrates. Each specialist contributes their part.
Building a Team
"""
Lesson 14: Teams (Delegation)
Multiple specialized agents coordinated by a team leader. The leader analyzes
requests and delegates to the appropriate member. Members can share context
via share_member_interactions for multi-step tasks.
Run: uv run 14-teams.py
Try: "Write a poem about AI" | "Explain quantum computing" | "Research and summarize blockchain"
Observe in Phoenix (http://localhost:6006):
- Team leader span containing member spans
- Delegation decisions and routing
- Context sharing between sequential member calls
Reset: uv run tools/reset_data.py
"""
import os
from dotenv import load_dotenv
from phoenix.otel import register
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.db.postgres import PostgresDb
from agno.team.team import Team
load_dotenv()
register(project_name="14-teams", auto_instrument=True, batch=True, verbose=True)
db = PostgresDb(db_url="postgresql+psycopg://ai:ai@localhost:5532/ai")
researcher = Agent(
    name="Researcher",
    role="Research topics and provide detailed, factual information.",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
)
writer = Agent(
    name="Writer",
    role="Write creative content: poems, stories, summaries.",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
)
team = Team(
    name="Research & Writing Team",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
    members=[researcher, writer],
    instructions=[
        "Delegate research/explanation tasks to Researcher.",
        "Delegate creative writing tasks to Writer.",
        "For 'research and summarize' tasks, first use Researcher, then Writer.",
    ],
    db=db,
    user_id="demo-user",
    enable_user_memories=True,
    add_history_to_context=True,
    num_history_runs=5,
    share_member_interactions=True,
    markdown=True,
    show_members_responses=True,
)
team.cli_app(stream=True)
Breaking It Down
Specialist agents:
researcher = Agent(
    name="Researcher",
    role="Research topics and provide detailed, factual information.",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
)
writer = Agent(
    name="Writer",
    role="Write creative content: poems, stories, summaries.",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
)
Each specialist has:
- name: How the leader refers to them
- role: What they're good at (the leader uses this to decide delegation)
- model: Can differ per specialist; use a cheaper/faster model for simple tasks and a smarter one for complex ones
Note these are minimal agents. In a real system, you'd add tools, knowledge, or specific instructions to each specialist.
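For example, a specialist with its own instructions might look like the sketch below (FactChecker is a hypothetical agent, not part of this lesson's code):
# Hypothetical specialist with focused instructions (illustration only)
fact_checker = Agent(
    name="FactChecker",
    role="Verify claims and flag anything unsourced or dubious.",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
    instructions=["Reply with a bullet list of verified and unverified claims."],
)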
Team configuration:
team = Team(
    name="Research & Writing Team",
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
    members=[researcher, writer],
    instructions=[
        "Delegate research/explanation tasks to Researcher.",
        "Delegate creative writing tasks to Writer.",
        "For 'research and summarize' tasks, first use Researcher, then Writer.",
    ],
    ...
)
- members: List of available specialists
- instructions: Routing rules for the leader
- model: The leader's model (often the smartest available, since it makes the routing decisions)
Context sharing:
share_member_interactions=True,
This is crucial for multi-step tasks. When enabled, each specialist sees what previous specialists produced. So when you say "research and summarize blockchain":
- Researcher produces detailed findings
- Writer receives those findings as context
- Writer creates a summary based on real content, not imagination
Without this, Writer would have to make things up.
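You can also drive the team from code instead of the CLI to watch this hand-off. A minimal sketch, assuming team.run() returns a response object with a .content attribute:
# Run the team programmatically (sketch; the lesson script uses cli_app)
response = team.run("Research blockchain technology and write a simple summary")

# The final text comes from Writer, which saw Researcher's findings
# because share_member_interactions=True
print(response.content)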
Debug visibility:
show_members_responses=True,
Shows each specialist's response in the CLI output. Useful for understanding delegation flow.
Try It
uv run 14-teams.py
Single specialist tasks:
> Write a poem about AI
[Writer]
Silicon dreams in circuits flow,
A mind that learns, yet doesn't know...
> Explain quantum computing
[Researcher]
Quantum computing uses quantum mechanical phenomena—superposition and
entanglement—to process information. Unlike classical bits...
Multi-step task:
> Research blockchain technology and write a simple summary
[Researcher]
Blockchain is a distributed ledger technology that maintains a continuously
growing list of records (blocks) linked using cryptography. Key concepts:
- Decentralization: No single point of control
- Immutability: Once recorded, data cannot be altered
- Consensus mechanisms: How nodes agree (PoW, PoS)
- Smart contracts: Self-executing code on the blockchain
...
[Writer]
**Blockchain in Plain English**
Imagine a shared notebook that everyone can read but no one can erase.
That's blockchain—a way for people to record information that stays
permanent and visible to all. It powers cryptocurrencies but also supply
chains, voting systems, and more.
The leader routed to Researcher first, then passed those findings to Writer for a friendlier summary.
Delegation Patterns
The leader can route work in different ways:
Router (Single Specialist)
For tasks with clear ownership:
"Write a poem" → Writer
"Explain photosynthesis" → Researcher
One specialist handles the entire request.
Sequential (Chain)
For multi-step workflows:
"Research X and summarize" → Researcher → Writer
Each specialist's output feeds the next.
Parallel (Future)
For independent subtasks that can run simultaneously:
"Compare Python and Rust" → [Researcher: Python] + [Researcher: Rust] → combine
Agno's current Team implementation focuses on sequential delegation. For parallel execution, you'd orchestrate multiple agent calls yourself, as in the sketch below.
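A rough sketch of that do-it-yourself fan-out, assuming the specialists from this lesson and that Agent.arun is available as the async counterpart of run:
import asyncio

async def compare_python_and_rust() -> str:
    # Fan out: two independent research calls run concurrently
    # (assumes arun returns a response object with .content)
    py_task = researcher.arun("Summarize Python's strengths and trade-offs")
    rs_task = researcher.arun("Summarize Rust's strengths and trade-offs")
    py_result, rs_result = await asyncio.gather(py_task, rs_task)

    # Fan in: let Writer combine both findings into one comparison
    combined = writer.run(
        "Write a short comparison based on these findings.\n\n"
        f"Python:\n{py_result.content}\n\nRust:\n{rs_result.content}"
    )
    return combined.content

print(asyncio.run(compare_python_and_rust()))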
Observe in Phoenix
Open http://localhost:6006 and look at traces for 14-teams.
You'll see nested spans:
- Team span: The overall request
- Leader decision: Which specialists to invoke and in what order
- Member spans: Each specialist's execution (nested inside team span)
- LLM calls: Individual model invocations within each member
For "research and summarize" tasks, you'll see:
- Leader → Researcher → (Researcher's LLM call) → Writer → (Writer's LLM call with Researcher's context)
This visibility helps debug routing issues—when the wrong specialist handles something, you can see why.
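You can also pull the same traces into a notebook. A small sketch, assuming a local Phoenix instance and that Client.get_spans_dataframe accepts a project_name filter:
import phoenix as px

# Fetch recorded spans for this lesson's project as a DataFrame (sketch)
client = px.Client(endpoint="http://localhost:6006")
spans = client.get_spans_dataframe(project_name="14-teams")

# Span names show the delegation order: team span, member spans, LLM calls
print(spans["name"].tolist())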
When to Use Teams
Good fits:
- Tasks that naturally decompose (research → write → edit)
- Different expertise areas (code generation vs code review)
- Quality checks (generator + validator)
- Complex workflows with clear handoffs
Overkill for:
- Simple single-purpose agents
- Tasks where one agent with tools suffices
- Low-latency requirements (each delegation adds LLM calls)
Teams add coordination overhead. Use them when the complexity justifies it.
Extending the Pattern
This lesson shows a minimal team. Real systems might add:
Specialists with tools:
researcher = Agent(
    name="Researcher",
    role="Research using web search and a knowledge base.",
    tools=[WebSearchTools()],
    knowledge=knowledge,
    ...
)
Specialists with different models:
# Cheap model for simple tasks
editor = Agent(
    name="Editor",
    model=OpenAIChat(id="gpt-4o-mini"),
    ...
)

# Expensive model for complex reasoning
analyst = Agent(
    name="Analyst",
    model=OpenAIChat(id="gpt-4o"),
    ...
)
Conditional routing:
instructions=[
    "If the user mentions 'urgent', route to FastResponder.",
    "If the task requires accuracy, route to Analyst.",
    "For creative tasks, use Writer.",
]
Key Concepts
| Concept | This Lesson |
|---|---|
| Team | Coordinator for multiple agents |
| Leader | Decides routing and orchestration |
| Member | Specialist agent with focused role |
| Delegation | Leader assigns work to members |
| share_member_interactions | Members see each other's outputs |
What's Next
You've built the complete agent stack:
- Basic agent (Lesson 9): Stateless LLM wrapper
- History (Lesson 10): Session-based conversation memory
- Memory (Lesson 11): Cross-session user facts
- Tools (Lesson 12): Real-world actions
- Knowledge (Lesson 13): RAG from your documents
- Teams (Lesson 14): Multi-agent coordination
From here, explore:
- Adding observability dashboards for production monitoring
- Building more complex team hierarchies
- Combining tools, knowledge, and memory within specialists
- Implementing human-in-the-loop approval workflows
The patterns from these lessons scale to production systems. Start simple, add complexity only when needed, and always trace what's happening in Phoenix.