Lesson 5: Prompt vs. Context Engineering
- Prompt Engineering: The craft of wording instructions.
- Context Engineering: The system-level discipline of assembling data, tools, and memory.
- Agentic Components: Memory, state management, hybrid RAG, and tools.
Most people know Prompt Engineering: crafting the input text to steer a model. But Context Engineering is the broader discipline: programmatically assembling everything the LLM needs during inference—prompts, retrieved documents, memory, and tools—to deliver accurate responses.
1. Prompt Engineering Techniques
This is part art, part science. Key techniques include:
- Role Assignment: Telling the LLM who to be (e.g., "You are a senior Python developer"). The model adopts the vocabulary and concerns of that persona.
- Few-Shot Examples: "Show, don't just tell." Providing 2-3 input/output pairs helps the model grasp the exact output format (e.g., specific JSON fields).
- Chain of Thought (CoT): Asking the model to "explain your reasoning" or "think step-by-step." This prevents jumping to conclusions on complex tasks.
- Constraint Setting: Explicitly defining boundaries (e.g., "Limit response to 100 words" or "Only use provided context") to prevent tangents.
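The four techniques above compose naturally into a single prompt string. Here is a minimal sketch; the persona, example pairs, and constraints are illustrative placeholders, not a fixed template.

```python
def build_prompt(role: str, examples: list[tuple[str, str]],
                 task: str, constraints: list[str]) -> str:
    """Assemble a prompt using role assignment, few-shot examples,
    chain of thought, and explicit constraints."""
    parts = [f"You are {role}."]                          # role assignment
    for inp, out in examples:                             # few-shot: show, don't tell
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append("Think step-by-step before answering.")  # chain of thought
    parts.extend(f"Constraint: {c}" for c in constraints) # boundary setting
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a senior Python developer",
    examples=[("2 + 2", '{"answer": 4}')],
    task="Compute 3 + 5 and reply in the same JSON format.",
    constraints=["Limit response to 100 words", "Only use provided context"],
)
```

The ordering (role first, task last) is a common convention, not a requirement; what matters is that each technique appears explicitly rather than being implied.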
2. Context Engineering Components
Context engineering builds dynamic, agentic systems that orchestrate the environment.
Memory
- Short-term: Summarizing long conversations to stay within the context window.
- Long-term: Using Vector Databases to retrieve user preferences, past trips, or learned patterns.
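The short-term case can be sketched as a simple eviction loop: once the transcript exceeds a token budget, the oldest turns are collapsed into one summary entry. In this sketch `summarize` is a stand-in for an LLM summarization call (here it just truncates), and tokens are approximated by whitespace splitting.

```python
def summarize(turns: list[str]) -> str:
    # Stand-in for an LLM call; a real system would produce a true summary.
    return "Summary of earlier conversation: " + " | ".join(t[:20] for t in turns)

def fit_to_window(turns: list[str], max_tokens: int = 50) -> list[str]:
    """Evict oldest turns until the transcript fits, replacing them
    with a single compressed summary at the front."""
    token_count = lambda ts: sum(len(t.split()) for t in ts)
    kept, dropped = list(turns), []
    while len(kept) > 1 and token_count(kept) > max_tokens:
        dropped.append(kept.pop(0))            # evict the oldest turn
    if dropped:
        kept.insert(0, summarize(dropped))     # keep its gist, not its text
    return kept
```

Long-term memory follows the same shape, except the evicted material is embedded and written to a vector database instead of (or in addition to) being summarized inline.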
State Management
Tracks where the agent is in a multi-step process (e.g., Flight booked? -> Yes. Hotel booked? -> Pending). This ensures context isn't lost mid-task.
RAG (Retrieval Augmented Generation)
Connects agents to dynamic knowledge sources.
- Uses Hybrid Search: Combines semantic (vector) and keyword matching.
- Contextual Slicing: Instead of dumping an entire "Travel Policy" document into the prompt, it extracts only the relevant sections (e.g., expense limits for Paris) to avoid noise.
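Both ideas can be sketched together: blend a keyword-overlap score with a semantic score, then return only the top-ranked slice instead of the whole document. The semantic scores here are supplied as toy inputs; a real system would use BM25 plus vector embeddings.

```python
def keyword_score(query: str, text: str) -> float:
    """Fraction of query tokens that appear verbatim in the text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def hybrid_search(query: str, sections: list[str],
                  semantic_scores: list[float],
                  alpha: float = 0.5, top_k: int = 1) -> list[str]:
    """Score each section by alpha * semantic + (1 - alpha) * keyword,
    returning only the top_k slices (contextual slicing)."""
    scored = [
        (alpha * semantic_scores[i] + (1 - alpha) * keyword_score(query, s), s)
        for i, s in enumerate(sections)
    ]
    scored.sort(reverse=True)
    return [s for _, s in scored[:top_k]]

policy = [
    "Expense limits for Paris are 200 EUR per hotel night.",
    "Visa requirements for long stays are handled by HR.",
]
relevant = hybrid_search("expense limits Paris", policy, semantic_scores=[0.9, 0.1])
```

Only `relevant` (one section) reaches the prompt; the rest of the "Travel Policy" stays out, which is exactly the noise reduction contextual slicing is for.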
Tools
LLMs can't natively check databases or call APIs. Tools bridge that gap.
- Function: Tools query SQL databases, fetch live prices, or deploy infrastructure on the model's behalf.
- Definition: Context engineering involves defining clear tool descriptions so the LLM knows when and how to use them.
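A minimal sketch of what "defining a tool" looks like, assuming the common OpenAI-style JSON-schema format; the function name, schema fields, and hard-coded price are all illustrative.

```python
import json

def get_flight_price(origin: str, destination: str) -> dict:
    # Stand-in for a live API call; real code would hit a pricing service.
    return {"origin": origin, "destination": destination, "price_usd": 420}

# The description and parameter docs are what the LLM reads to decide
# *when* and *how* to call the tool — they are part of the context.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_flight_price",
        "description": "Fetch the current one-way price between two airports.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA code, e.g. JFK"},
                "destination": {"type": "string", "description": "IATA code, e.g. CDG"},
            },
            "required": ["origin", "destination"],
        },
    },
}]

def dispatch(tool_call: dict) -> dict:
    """Route a model-issued tool call to the matching Python function."""
    registry = {"get_flight_price": get_flight_price}
    fn = registry[tool_call["name"]]
    return fn(**json.loads(tool_call["arguments"]))

quote = dispatch({"name": "get_flight_price",
                  "arguments": '{"origin": "JFK", "destination": "CDG"}'})
```

Vague descriptions are the most common tool-use failure mode: the model can only choose between tools as well as their descriptions distinguish them.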
The Synthesis
- Prompt Engineering gives you better questions.
- Context Engineering gives you better systems.
A final production prompt might be 20% static instructions (the question) and 80% dynamic content (populated from memory, state, and RAG).
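The 20/80 split can be sketched as a short static template whose slots are filled at runtime; the section labels and travel examples below are illustrative.

```python
# Static 20%: the instruction scaffold, written once.
STATIC_INSTRUCTIONS = "You are a travel assistant. Answer using only the context below."

def assemble_context(memory: str, state: str,
                     retrieved: list[str], question: str) -> str:
    """Fill the dynamic 80% from memory, state, and RAG results."""
    dynamic = "\n".join([
        f"[Memory] {memory}",
        f"[State] {state}",
        *(f"[Doc] {d}" for d in retrieved),
    ])
    return f"{STATIC_INSTRUCTIONS}\n\n{dynamic}\n\nQuestion: {question}"

final_prompt = assemble_context(
    memory="User prefers aisle seats and mid-range hotels.",
    state="flight: done, hotel: pending",
    retrieved=["Expense limits for Paris are 200 EUR per hotel night."],
    question="Which hotel fits the company policy?",
)
```

Everything except `STATIC_INSTRUCTIONS` and the question is populated by the systems described above, which is the point: the engineering effort lives in the pipeline, not the wording.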