Part 2: Agentic AI
Build autonomous AI agents with agno—from stateless bots to multi-agent teams with memory, tools, and knowledge bases.
What You'll Learn
This section takes you from a bare LLM API call to fully autonomous agents. You'll set up the infrastructure stack (PostgreSQL, Qdrant, Arize Phoenix), build a basic stateless agent, then progressively add capabilities: conversation history, persistent cross-session memory, native and MCP tools, RAG-powered knowledge bases, and multi-agent team coordination.
Lesson 8: Setup
Configure the development environment with Docker. Stand up PostgreSQL for persistent storage, Qdrant for vector search, and Arize Phoenix for observability and debugging. Understand why agents need more infrastructure than a simple API key.
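Once the containers are running, a quick way to confirm the stack is reachable is to probe the default local ports. The sketch below uses only the standard library; the ports (5432 for PostgreSQL, 6333 for Qdrant, 6006 for Phoenix) are the usual defaults and will differ if you remap them in your Docker setup.

```python
import socket

# Default local ports for the three services; adjust if your Docker
# configuration maps them differently.
SERVICES = {
    "PostgreSQL":    ("localhost", 5432),
    "Qdrant":        ("localhost", 6333),
    "Arize Phoenix": ("localhost", 6006),
}

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "up" if is_up(host, port) else "DOWN"
        print(f"{name:14} {host}:{port}  {status}")
```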
Lesson 9: Basic Agent
Build the simplest possible agent: system prompt + user input → LLM → output. Learn what agno is and why we're using it, understand stateless design, and see exactly what's missing compared to a production chat system like ChatGPT.
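In agno, that stateless loop fits in a few lines. The sketch below assumes agno's `Agent` and `OpenAIChat` classes and an `OPENAI_API_KEY` in the environment; module paths and parameter names can shift between agno versions, so treat it as illustrative rather than exact.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# A stateless agent: every call is system prompt + user input -> LLM -> output.
# Nothing is remembered between calls.
agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),   # any chat-capable model id
    instructions="You are a concise, helpful assistant.",
    markdown=True,
)

agent.print_response("Explain in one sentence what an AI agent is.")
agent.print_response("What did I just ask you?")  # it has no idea -- stateless
```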
Lesson 10: History
Add session-based conversation history using PostgreSQL. Support both ephemeral sessions (reset on restart) and persistent sessions (survive restarts). Understand context injection—what actually gets sent to the LLM on each turn.
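Independent of any framework, context injection just means rebuilding the message list on every turn: the stored history is prepended to the new user input before the request goes out. A framework-agnostic sketch (the OpenAI-style message format is an assumption about how your model provider expects chat input):

```python
# Session-scoped history: the agent "remembers" only because prior turns
# are re-sent to the LLM on every call. In the lesson this list lives in
# PostgreSQL, keyed by session_id.
history: list[dict] = []

def build_context(system_prompt: str, user_input: str) -> list[dict]:
    """Assemble the exact message list sent to the LLM for this turn."""
    return (
        [{"role": "system", "content": system_prompt}]
        + history                                    # every prior user/assistant turn
        + [{"role": "user", "content": user_input}]
    )

def record_turn(user_input: str, assistant_reply: str) -> None:
    """Persist the turn so the next call can include it."""
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": assistant_reply})
```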
Lesson 11: Memory
Give your agent persistent memory that spans all conversations for a user. Distinguish history (session-scoped) from memory (user-scoped). The LLM automatically identifies and stores memorable facts, which persist across sessions and script restarts.
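The scoping difference is easiest to see as two separate stores: history is keyed by session, memory by user. A rough sketch of that distinction (the keys and structure are illustrative, not agno's actual schema):

```python
# Two stores with different lifetimes and keys.
# History: per-session transcript, reloaded or discarded with the session.
session_history: dict[str, list[str]] = {}   # session_id -> list of turns

# Memory: per-user facts distilled by the LLM ("prefers metric units", ...),
# visible in every session that user ever starts.
user_memory: dict[str, list[str]] = {}       # user_id -> list of remembered facts

def remember(user_id: str, fact: str) -> None:
    """Store a fact the LLM judged worth keeping about this user."""
    user_memory.setdefault(user_id, []).append(fact)

def context_for(user_id: str, session_id: str) -> dict:
    """What gets injected each turn: this session's history plus the user's memories."""
    return {
        "history": session_history.get(session_id, []),
        "memories": user_memory.get(user_id, []),
    }
```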
Lesson 12: Tools
Extend agents with real-world capabilities by defining native Python tools and connecting external services via MCP (Model Context Protocol). Understand the tool-calling flow—how the LLM decides when and what to call.
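A native tool is typically just a typed Python function with a docstring; the framework turns the signature and docstring into a schema the LLM can choose to call. The sketch below assumes agno's `tools=[...]` parameter accepts plain functions, which matches its documented pattern but may vary by version; `get_weather` itself is an invented stub, not a real API.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

def get_weather(city: str) -> str:
    """Return the current weather for a city.

    The docstring and type hints become the tool schema the LLM sees
    when deciding whether (and with what arguments) to call this tool.
    """
    # A real implementation would call a weather API; this is a stub.
    return f"It is 18 degrees C and cloudy in {city}."

agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[get_weather],              # plain functions registered as native tools
    instructions="Use tools when they help answer the question.",
)

# The LLM decides on its own to call get_weather("Berlin"), receives the
# result, and folds it into its reply.
agent.print_response("What's the weather like in Berlin right now?")
```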
Lesson 13: Knowledge
Ground agent responses in your own documents using RAG with Qdrant. Convert PDFs, Word docs, and URLs into chunked, searchable vector embeddings. Compare fixed-size vs semantic chunking strategies for retrieval quality.
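Before anything reaches Qdrant, each document is split into chunks that get embedded individually. A fixed-size splitter is the baseline the lesson compares semantic chunking against; the sizes below are arbitrary defaults chosen for illustration.

```python
def fixed_size_chunks(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character windows with overlap.

    Fast and predictable, but boundaries ignore meaning, so a sentence or
    table can be cut in half -- the weakness semantic chunking addresses by
    splitting on topic shifts instead of character counts.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks

# Each chunk is then embedded and upserted into a Qdrant collection, where
# similarity search retrieves the most relevant chunks at query time.
```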
Lesson 14: Teams
Combine multiple specialized agents into a coordinated team. Build specialist agents with focused roles, wire up a team leader for routing and coordination, and manage context sharing between agents for complex tasks.
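The coordination pattern itself is simple, whatever framework implements it: a leader inspects the request, picks the specialist whose role fits, and merges the results. A framework-agnostic sketch of that routing step follows; the specialist names and keyword rules are invented for illustration, and in agno the leader is itself an LLM reading each member's role description rather than matching keywords.

```python
from typing import Callable

# Specialists: in practice these are full agents with their own model,
# instructions, and tools; here they are stand-in callables.
def research_specialist(task: str) -> str:
    return f"[research] findings for: {task}"

def writing_specialist(task: str) -> str:
    return f"[writing] draft for: {task}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "research": research_specialist,
    "writing": writing_specialist,
}

def team_leader(task: str) -> str:
    """Route the task to a specialist and return its output.

    Keyword matching here just makes the control flow visible; a real
    team leader reasons over member roles to decide who acts.
    """
    wants_research = "find" in task.lower() or "look up" in task.lower()
    member = "research" if wants_research else "writing"
    return SPECIALISTS[member](task)

print(team_leader("Find recent papers on multi-agent coordination"))
print(team_leader("Write a summary paragraph of the findings"))
```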
Why This Matters
These seven lessons transform a simple LLM wrapper into a production-capable agent system. Without history, your agent can't hold a conversation. Without memory, it forgets users between sessions. Without tools, it can't act in the real world. Without knowledge, it hallucinates instead of answering from your documents. Without teams, every complex task falls on a single agent that has to do everything.
Complete Part 1 first—the embedding and RAG foundations are prerequisites for the knowledge lesson here.