Lesson 12: Tools
Give your agent the ability to take actions—call APIs, query databases, fetch live data.
- What Tools Are: Functions the LLM can decide to call.
- Native Tools: Python functions defined in your codebase.
- MCP Tools: External processes via Model Context Protocol.
- Tool Calling Flow: How the LLM decides when and what to call.
- Building Custom Tools: Creating your own toolkit.
Why Tools Matter
Without tools, an agent can only generate text based on its training data. It can't:
- Check current stock prices
- Query your database
- Call your internal APIs
- Look up today's weather
Tools bridge the gap between "knows things" and "does things." The LLM decides when a tool is needed based on user intent, generates the call with appropriate arguments, and incorporates the result into its response.
How Tool Calling Works
User: "What's Tesla's stock price?"
1. LLM receives message + list of available tools
2. LLM decides: "I need get_stock_price(symbol='TSLA')"
3. Agent executes the function, gets result: "$248.50"
4. Result goes back to LLM
5. LLM responds: "Tesla (TSLA) is currently trading at $248.50"
The LLM doesn't execute code—it generates a structured request saying which tool to call with what arguments. Your agent handles execution and returns results.
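Concretely, that structured request is just data. In the OpenAI function-calling format it looks roughly like this (illustrative; get_stock_price is the hypothetical tool from the flow above, and the exact shape varies by provider):

```python
# What the LLM emits instead of a text answer (OpenAI-style tool call).
# Note: "arguments" is a JSON-encoded string, not a nested object.
tool_call = {
    "id": "call_abc123",  # lets the tool result be matched back to this request
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "arguments": '{"symbol": "TSLA"}',
    },
}
```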
Two Types of Tools
| Type | Where It Runs | Use Case |
|---|---|---|
| Native | Same Python process | Custom logic, internal APIs, simple integrations |
| MCP | Separate process | Shared tools, different languages, isolation |
Both work the same from the LLM's perspective—it just sees a list of available functions.
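At its simplest, a native tool is just a function with a type-hinted signature and a docstring; Agno also accepts plain functions in tools=[] without a Toolkit wrapper. A minimal sketch (get_weather is a made-up example, not a built-in):

```python
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # A real tool would call a weather API; hardcoded for illustration.
    return f"Sunny, 22°C in {city}"

# agent = Agent(tools=[get_weather])  # the docstring becomes the tool description
```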
Building Custom Tools
You can create your own toolkit by extending Toolkit. Here's one that queries the crm_demo database we set up in Lesson 8:
"""
CRM Toolkit for Agno
Queries the crm_demo PostgreSQL database (customers, orders).
Uses the same database as the MCP server — this is the native-tool approach.
Usage:
from tools.crm_tools import CRMTools
agent = Agent(tools=[CRMTools()])
"""
import json
import psycopg
from agno.tools import Toolkit
DB_URL = "postgresql://ai:ai@localhost:5532/crm_demo"
class CRMTools(Toolkit):
def __init__(self, db_url: str = DB_URL, **kwargs):
self.db_url = db_url
tools = [
self.get_customers,
self.get_customer,
self.get_customer_orders,
self.get_revenue_summary,
]
super().__init__(name="crm_tools", tools=tools, **kwargs)
    def _query(self, sql: str, params: tuple = ()) -> list:
        """Execute a query and return all rows."""
        # Context managers close the cursor and connection even if the query raises.
        with psycopg.connect(self.db_url) as conn:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
def get_customers(self) -> str:
"""List all customers with their company and industry.
Returns:
JSON list of customers [{name, company, industry}, ...].
"""
rows = self._query(
"SELECT name, company, industry FROM customers ORDER BY name"
)
customers = [
{"name": r[0], "company": r[1], "industry": r[2]} for r in rows
]
return json.dumps(customers)
def get_customer(self, name: str) -> str:
"""Look up a customer by name (partial match).
Args:
name: Full or partial customer name.
Returns:
JSON with customer details: name, email, company, industry.
"""
rows = self._query(
"SELECT name, email, company, industry FROM customers WHERE name ILIKE %s",
(f"%{name}%",),
)
if rows:
r = rows[0]
return json.dumps({"name": r[0], "email": r[1], "company": r[2], "industry": r[3]})
return json.dumps({"error": "Customer not found"})
def get_customer_orders(self, name: str) -> str:
"""Get all orders for a customer by name.
Args:
name: Full or partial customer name.
Returns:
JSON list of orders [{product, amount, status, date}, ...].
"""
rows = self._query(
"""
SELECT o.product, o.amount, o.status, o.created_at::text
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.name ILIKE %s
ORDER BY o.created_at
""",
(f"%{name}%",),
)
orders = [
{"product": r[0], "amount": float(r[1]), "status": r[2], "date": r[3]}
for r in rows
]
return json.dumps(orders)
def get_revenue_summary(self) -> str:
"""Get total revenue grouped by order status.
Returns:
JSON list [{status, order_count, total_revenue}, ...].
"""
rows = self._query(
"""
SELECT status, COUNT(*), SUM(amount)
FROM orders
GROUP BY status
ORDER BY status
"""
)
summary = [
{"status": r[0], "order_count": r[1], "total_revenue": float(r[2])}
for r in rows
]
return json.dumps(summary)
Toolkit Pattern
The key elements:
- Extend Toolkit: Base class that handles registration
- Define methods with docstrings: The docstring becomes the tool description the LLM sees
- Type hints on arguments: Tell the LLM what parameters are expected
- Return strings: Results go back to the LLM as text
The LLM uses your docstrings to understand what each tool does and when to use it. Good descriptions = better tool selection.
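Stripped to a skeleton, the pattern is small. Here's a sketch with placeholder names (EchoTools and say_hello are illustrative, not part of the lesson's code):

```python
from agno.tools import Toolkit

class EchoTools(Toolkit):
    def __init__(self, **kwargs):
        # Register every method the LLM should be allowed to call.
        super().__init__(name="echo_tools", tools=[self.say_hello], **kwargs)

    def say_hello(self, name: str) -> str:
        """Greet a person by name."""  # the LLM reads this to pick the tool
        return f"Hello, {name}!"
```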
Native Tools
Tools defined as Python functions in your codebase. Agno provides built-in toolkits (like YFinanceTools for stock data), and you can create your own.
"""
Lesson 12a: Tools (Native)
Agent can call external tools to take real-world actions. The LLM decides when
to call a tool based on user intent; the agent executes it and feeds the result
back so the LLM can incorporate it into its response. Uses YFinance for stock
data and CRMTools for customer queries.
Run: uv run 12-tools-native.py
Try: "Tesla stock price" | "List customers" | "Revenue summary"
Observe in Phoenix (http://localhost:6006):
- LLM requests tool call → tool executes → result returned → LLM responds
- Tool call arguments and responses visible in spans
- Multiple tool calls for comparison queries
Reset: uv run tools/reset_data.py
"""
import os
from dotenv import load_dotenv
from phoenix.otel import register
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.db.postgres import PostgresDb
from agno.tools.yfinance import YFinanceTools
from tools.crm_tools import CRMTools
load_dotenv()
register(project_name="12-tools-native", auto_instrument=True, batch=True, verbose=True)
db = PostgresDb(db_url="postgresql+psycopg://ai:ai@localhost:5532/ai")
agent = Agent(
name="Assistant",
model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
instructions="You are a helpful assistant with access to stock data and a CRM database. Be concise.",
tools=[YFinanceTools(), CRMTools()],
db=db,
user_id="demo-user",
enable_user_memories=True,
add_history_to_context=True,
add_datetime_to_context=True,
num_history_runs=5,
markdown=True,
)
agent.cli_app(stream=True)
What's New
Multiple toolkits:
tools=[YFinanceTools(), CRMTools()],
Pass a list of tool instances. The agent extracts function schemas from all toolkits and sends them to the LLM with each request. The LLM picks the right tool based on user intent—stock questions go to YFinance, customer questions go to CRM.
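What the LLM actually receives per tool is a name, a description, and a JSON Schema for the parameters, all derived from the method signature and docstring. For get_customer it would look roughly like this (an approximation; the exact envelope depends on the model provider):

```python
# Approximate schema generated from get_customer's signature and docstring.
get_customer_schema = {
    "name": "get_customer",
    "description": "Look up a customer by name (partial match).",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "Full or partial customer name.",
            },
        },
        "required": ["name"],
    },
}
```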
Try It
uv run 12-tools-native.py
> What's Tesla's stock price?
Tesla (TSLA) is currently at $248.50, up 2.3% today.
> List all customers
Here are all customers:
- Alice Johnson (Acme Corp, Manufacturing)
- Bob Smith (Globex Inc, Technology)
- Carol White (Initech, Finance)
...
> What's the revenue breakdown?
Revenue by status:
- completed: 18 orders, $67,900
- pending: 5 orders, $28,700
- cancelled: 1 order, $2,500
> Remember that I'm interested in tech stocks
Got it, I'll remember you're interested in tech stocks.
Notice the agent still has memory from Lesson 11—it can remember your preferences while also using tools.
The MCP server below exposes the same CRM data as CRMTools above. This is intentional—same capability, two approaches. Native tools run in-process; MCP tools run as a separate server. Compare the trade-offs as you work through both.
MCP Tools
Model Context Protocol (MCP) lets you run tools as separate processes. The agent connects to an MCP server, discovers available tools, and calls them over the protocol.
Why use MCP?
- Share tools across multiple agents or applications
- Different languages: MCP server can be Python, Node, Rust, etc.
- Isolation: Tool crashes don't crash your agent
- Existing ecosystem: Growing library of MCP servers
MCP Server
First, create a server that exposes tools. Here's one for our CRM database:
"""
MCP Server: CRM queries
Tools:
- get_customer: Get customer by name
- get_customer_orders: Get orders for a customer
- get_all_customers: List all customers
- get_customers_by_industry: Filter customers by industry
- get_orders_by_status: Filter orders by status
- get_revenue_summary: Revenue breakdown by status
"""
import psycopg
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("CRM")
def get_db():
return psycopg.connect("postgresql://ai:ai@localhost:5532/crm_demo")
@mcp.tool()
def get_customer(name: str) -> str:
"""Get customer details by name (partial match)."""
conn = get_db()
cur = conn.cursor()
cur.execute(
"SELECT id, name, email, company, industry FROM customers WHERE name ILIKE %s",
(f"%{name}%",)
)
row = cur.fetchone()
conn.close()
if row:
return f"Customer: {row[1]}, Email: {row[2]}, Company: {row[3]}, Industry: {row[4]}"
return "Customer not found"
@mcp.tool()
def get_customer_orders(name: str) -> str:
"""Get all orders for a customer by name."""
conn = get_db()
cur = conn.cursor()
cur.execute("""
SELECT o.product, o.amount, o.status, o.created_at
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.name ILIKE %s
ORDER BY o.created_at
""", (f"%{name}%",))
rows = cur.fetchall()
conn.close()
if rows:
lines = [f"- {r[0]}: ${r[1]} ({r[2]}, {r[3]})" for r in rows]
        return "Orders:\n" + "\n".join(lines)
return "No orders found"
@mcp.tool()
def get_all_customers() -> str:
"""List all customers."""
conn = get_db()
cur = conn.cursor()
cur.execute("SELECT name, company, industry FROM customers ORDER BY name")
rows = cur.fetchall()
conn.close()
return "\n".join([f"- {r[0]} ({r[1]}, {r[2]})" for r in rows])
@mcp.tool()
def get_customers_by_industry(industry: str) -> str:
"""Get all customers in a specific industry."""
conn = get_db()
cur = conn.cursor()
cur.execute(
"SELECT name, company, email FROM customers WHERE industry ILIKE %s ORDER BY name",
(f"%{industry}%",)
)
rows = cur.fetchall()
conn.close()
if rows:
return "\n".join([f"- {r[0]} ({r[1]}) - {r[2]}" for r in rows])
return "No customers found in that industry"
@mcp.tool()
def get_orders_by_status(status: str) -> str:
"""Get all orders with a specific status (pending, completed, cancelled, refunded)."""
conn = get_db()
cur = conn.cursor()
cur.execute("""
SELECT c.name, o.product, o.amount, o.created_at
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.status ILIKE %s
ORDER BY o.created_at DESC
""", (f"%{status}%",))
rows = cur.fetchall()
conn.close()
if rows:
return "\n".join([f"- {r[0]}: {r[1]} (${r[2]}, {r[3]})" for r in rows])
return "No orders found with that status"
@mcp.tool()
def get_revenue_summary() -> str:
"""Get total revenue summary by status."""
conn = get_db()
cur = conn.cursor()
cur.execute("""
SELECT status, COUNT(*), SUM(amount)
FROM orders
GROUP BY status
ORDER BY status
""")
rows = cur.fetchall()
conn.close()
return "\n".join([f"- {r[0]}: {r[1]} orders, ${r[2]}" for r in rows])
if __name__ == "__main__":
mcp.run()
The @mcp.tool() decorator exposes functions to MCP clients. Same pattern as native tools—docstrings describe functionality, type hints define parameters.
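You can also poke at the server before wiring up an agent. If the official Python SDK's CLI extras are installed (the mcp[cli] package), the MCP Inspector lets you list and call tools interactively; an optional sanity check, assuming that CLI is available in your environment:

uv run mcp dev tools/mcp_crm_server.py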
MCP Client (Agent)
Now connect an agent to the MCP server:
"""
Lesson 12b: Tools (MCP)
Agent connects to an external MCP (Model Context Protocol) server for tools. Same
concept as 12a, but tools are served from a separate process. This enables tool
reuse across agents and languages. Uses CRM database for customer/order data.
Run: uv run 12-tools-mcp.py
Try: "List customers" | "Show pending orders" | "Revenue by customer"
Observe in Phoenix (http://localhost:6006):
- MCP tool discovery and registration
- Tool calls routed to external tools/mcp_crm_server.py
- SQL queries executed against crm_demo database
Reset: uv run tools/reset_data.py
"""
import os
import asyncio
from dotenv import load_dotenv
from phoenix.otel import register
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.db.postgres import PostgresDb
from agno.tools.mcp import MCPTools
load_dotenv()
register(project_name="12-tools-mcp", auto_instrument=True, batch=True, verbose=True)
db = PostgresDb(db_url="postgresql+psycopg://ai:ai@localhost:5532/ai")
async def main():
mcp_tools = MCPTools(command="uv run tools/mcp_crm_server.py")
await mcp_tools.connect()
agent = Agent(
name="CRM Assistant",
model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
instructions="You are a CRM assistant. Help users query customer and order data.",
tools=[mcp_tools],
db=db,
user_id="demo-user",
enable_user_memories=True,
add_history_to_context=True,
num_history_runs=5,
markdown=True,
)
await agent.acli_app(stream=True)
await mcp_tools.close()
asyncio.run(main())
What's Different
MCP connection:
mcp_tools = MCPTools(command="uv run tools/mcp_crm_server.py")
await mcp_tools.connect()
Agno spawns the MCP server as a subprocess and connects via stdio. On connect, it discovers all available tools.
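Under the hood, that conversation is JSON-RPC 2.0 over the subprocess's stdin/stdout. The two methods that matter here are tools/list (discovery) and tools/call (invocation). Illustrative payloads, trimmed of protocol detail:

```python
# Discovery: ask the server what tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: call one tool by name with JSON arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_customer", "arguments": {"name": "Alice"}},
}
```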
Async pattern:
async def main():
...
await agent.acli_app(stream=True)
await mcp_tools.close()
asyncio.run(main())
MCP communication is async, so we use acli_app instead of cli_app.
Try It
uv run 12-tools-mcp.py
> List all customers
Here are all customers:
- Alice Johnson (Acme Corp, Manufacturing)
- Bob Smith (Globex Inc, Technology)
- Carol White (Initech, Finance)
...
> Show pending orders
Pending orders:
- Bob Smith: Support Package ($1200, 2024-09-01)
- David Lee: Professional License ($2000, 2024-09-05)
...
> What's the revenue breakdown?
Revenue by status:
- completed: 18 orders, $67,900
- pending: 5 orders, $28,700
- cancelled: 1 order, $2,500
- refunded: 1 order, $1,200
Observe in Phoenix
Open http://localhost:6006 and look at traces for 12-tools-native or 12-tools-mcp.
You'll see a different trace structure than previous lessons. Here's what a single tool-calling request looks like:
Stock_Assistant.run 3.7s
├── OpenAIChat.invoke_stream 728 tokens 1.1s ← LLM decides to call a tool
├── get_current_stock_price 856ms ← Tool executes
└── OpenAIChat.invoke_stream 762 tokens 1s ← LLM responds with the result
Notice three spans instead of one:
- First LLM call (1.1s): The model receives your question plus the list of available tools. It doesn't answer directly—instead it returns a structured tool call request: `{"symbol": "BTC-USD"}`.
- Tool execution (856ms): Your Python function runs with those arguments. Click this span to see the input (`{"symbol": "BTC-USD"}`) and output (`82631.59`). The tool description and parameters are also visible—this is what the LLM used to decide how to call it.
- Second LLM call (1s): The tool result goes back to the model. Now it has the data it needs and generates the final human-readable response.
This is the core tool-calling loop: LLM decides → tool runs → LLM responds. The model never executes code itself—it only generates the request, your agent handles execution.
For queries like "Compare AAPL and MSFT," you'll see multiple tool calls between the two LLM spans—the model requests both stock lookups at once.
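To make the loop concrete outside any framework, here's a minimal sketch of what an agent runtime does each turn. It assumes an OpenAI-style client and a tool_registry dict mapping tool names to functions; Agno implements this loop for you:

```python
import json

def run_turn(client, model, messages, tool_schemas, tool_registry):
    """One turn of the tool-calling loop: LLM decides -> tools run -> LLM responds."""
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tool_schemas
        )
        msg = response.choices[0].message
        if not msg.tool_calls:           # no tool needed: this is the final answer
            return msg.content
        messages.append(msg)             # keep the tool-call request in history
        for call in msg.tool_calls:      # may be several (e.g. "compare AAPL and MSFT")
            fn = tool_registry[call.function.name]
            result = fn(**json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,  # matches the result to the request
                "content": str(result),
            })
        # Loop: the model sees the results and either answers or calls more tools.
```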
Native vs MCP: When to Use Which
Use Native Tools when:
- Tools are specific to this agent
- You want simplicity (no separate process)
- Performance is critical (no IPC overhead)
- You're prototyping quickly
Use MCP Tools when:
- Multiple agents share the same tools
- Tools are maintained by a different team
- You want language flexibility
- You need process isolation
You can mix both in the same agent—just pass multiple items to tools=[].
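For example, combining the pieces from 12a and 12b (a sketch; mcp_tools must already be connected as shown earlier):

```python
agent = Agent(
    model=OpenAIChat(id=os.getenv("OPENAI_MODEL_ID")),
    # Native toolkits and an MCP connection, side by side.
    tools=[YFinanceTools(), CRMTools(), mcp_tools],
)
```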
Key Concepts
| Concept | This Lesson |
|---|---|
| Tool | Function the LLM can decide to call |
| Native | Python function in your codebase |
| MCP | External process via Model Context Protocol |
| Toolkit | Collection of related tools |
| Tool schema | Description + parameters sent to LLM |
What's Next
Tools let agents take actions. In Lesson 13, we add knowledge—the agent will retrieve information from your documents using RAG and vector search.