Lesson 1: AI Buzzword Bingo
In this lesson, we'll cover:
- What AI is.
- How Machine Learning fits inside AI.
- The difference between traditional ML and Deep Learning.
- What Foundation Models are and where LLMs and Generative AI fit.
Let’s start by mapping out the key terms and seeing how they fit together.
Artificial Intelligence (AI)
AI is the master term, the big umbrella. It is about getting machines to do things that usually require human thinking - things like understanding language, recognising images, or making decisions. It’s been around for decades (yes, even before ChatGPT).
Machine Learning (ML)
Going one level deeper, Machine Learning is a branch of AI. Instead of programming every rule by hand, we give the computer data and let it learn patterns on its own. ML has a few main types:
- Supervised learning: train on labelled data (e.g. images tagged with "cat" or "dog").
- Unsupervised learning: find patterns in unlabelled data.
- Reinforcement learning: learn by trial and error, guided by rewards that push the system toward desirable outcomes and penalties that discourage unwanted actions. In a game, for example, scoring points could be a reward and losing a life a penalty.
Not all ML is the same. Some models use simple methods (like linear regression or decision trees), while others rely on more complex structures like neural networks.
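To make supervised learning concrete, here's a minimal sketch using scikit-learn (the library choice and the tiny cat/dog dataset are illustrative assumptions, not part of the lesson). A decision tree learns from a handful of labelled examples and then predicts a label for a new one:

```python
# A minimal supervised-learning sketch with scikit-learn (assumed installed).
from sklearn.tree import DecisionTreeClassifier

# Invented toy data: features are [weight_kg, ear_length_cm], labels are "cat" or "dog".
X = [[4.0, 6.5], [5.2, 7.0], [20.0, 12.0], [25.5, 13.5]]
y = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X, y)  # learn patterns from the labelled examples

print(model.predict([[6.0, 7.2]]))  # predict the label for a new, unseen animal
```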
Deep Learning (DL)
Deep Learning is a subset of ML that uses neural networks with many layers (that’s the "deep" part). It’s especially good with large amounts of unstructured data like images, audio, and natural language.
But remember: deep learning isn’t always the best choice. Traditional ML methods can still be faster, simpler, and easier to explain.
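To give a feel for the "many layers" idea, here's a minimal sketch in PyTorch (the framework and the layer sizes are assumptions made for the example). Each layer transforms the output of the previous one, which is what makes the network "deep":

```python
# A minimal multi-layer ("deep") neural network sketch using PyTorch (assumed installed).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),  # input layer, e.g. a flattened 28x28 image
    nn.ReLU(),
    nn.Linear(256, 64),   # hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. scores for 10 classes
)

dummy_image = torch.randn(1, 784)  # one fake input, just to show the forward pass
scores = model(dummy_image)
print(scores.shape)  # torch.Size([1, 10])
```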
Foundation Models
These are big neural networks trained on huge datasets. Instead of starting from scratch, you take one of these pre-trained models and fine-tune it for your specific task. This saves time and money, and often gives better results.
They’re trained to capture broad knowledge and can be adapted to tasks like translation, summarisation, code generation, or image recognition.
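To show the "don't start from scratch" idea, here's a minimal sketch using the Hugging Face transformers library (the library and the specific checkpoint are assumptions for illustration). It loads a small pre-trained model and uses it as-is; actual fine-tuning would add a further training step on your own labelled data:

```python
# A sketch of reusing a pre-trained model instead of training from scratch,
# using the Hugging Face transformers library (assumed installed).
from transformers import pipeline

# Download a small pre-trained checkpoint and wrap it in a ready-made pipeline.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Reusing a pre-trained model saved us months of work."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```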
Large Language Models (LLMs)
A specific type of foundation model focused on text. Let's break down the term:
- Large: billions of parameters, which give them a richer understanding of language.
- Language: trained to understand and generate human language.
- Model: the actual algorithms and parameters working together.
LLMs are not just text generators; they can act as an interface between people and complex systems. They can translate a natural-language request (e.g. "Find me all invoices over $1,000 from last quarter") into structured actions, such as database queries or API calls, making them useful for search, automation, analysis, and more.
Examples include OpenAI's GPT-5, Anthropic's Claude, and Meta's LLaMA.
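Here's a sketch of that "interface" idea: turning a natural-language request into a structured filter that could drive a database query or API call. The call_llm helper is hypothetical, a stand-in for whichever LLM API you use, and the JSON shape is invented for the example:

```python
# A sketch of an LLM translating natural language into a structured action.
# call_llm() is a hypothetical stand-in for a real LLM API; here it returns a
# canned answer so the example runs end-to-end.
import json

def call_llm(prompt: str) -> str:
    return '{"min_amount": 1000, "period": "last_quarter"}'

request = "Find me all invoices over $1,000 from last quarter"

prompt = (
    "Convert the user's request into a JSON filter with the keys "
    '"min_amount" (number) and "period" (string). Respond with JSON only.\n'
    f"Request: {request}"
)

filters = json.loads(call_llm(prompt))
print(filters)  # {'min_amount': 1000, 'period': 'last_quarter'}
# The structured filter can then drive a database query or an API call.
```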
What are parameters?
Parameters are the numeric values inside a model: the weights and biases. During training, the system adjusts these numbers so its predictions match the training data a bit better each step. After training, the numbers stay fixed and the model uses them to turn input tokens into the next token, over and over, until it forms an answer.
A simple way to think about it is a wall of tiny dials. Each dial controls how much one signal affects another. With enough dials, the model can capture patterns in text such as grammar, style, and facts seen in data. More parameters mean more capacity to learn patterns, but they also raise memory use, latency, and cost. Bigger is not always better; data quality, prompt design, and how you add context often matter more.
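To make the "dials" picture concrete, here's a toy sketch in plain Python (the numbers are invented). The model has just two parameters, a weight and a bias, which get nudged step by step until its predictions fit the data:

```python
# Two "dials" (a weight and a bias) adjusted so predictions fit the data a bit
# better at every step. The data follows y = 2x + 1, so the dials should settle
# near w = 2 and b = 1.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0        # the model's parameters, starting at arbitrary values
learning_rate = 0.05

for step in range(200):
    for x, y in data:
        prediction = w * x + b          # use the current dial settings
        error = prediction - y
        w -= learning_rate * error * x  # nudge each dial to reduce the error
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```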
Other Foundation Model types
- Vision models designed to process and generate images.
- Scientific models trained for tasks such as predicting protein structures.
- Audio models built to generate speech, music, or sound effects.
Generative AI (GenAI)
Finally, Generative AI refers to any AI system that can create new content: text, images, music, code, and more. It draws on the knowledge captured in foundation models to produce outputs that are new in form, even if inspired by existing data.
In simple terms: foundation models provide the intelligence, and generative AI turns that intelligence into something tangible.