Dear learner,
Meta’s new Llama 4 models (Maverick and Scout) introduced three big upgrades: multimodal reasoning with image grounding, million-token context windows, and a Mixture-of-Experts design that’s efficient to serve.
To make these capabilities practical, we partnered with Meta to bring you a new short course taught by Amit Sangani, Meta’s Director of Partner Engineering: Building with Llama 4.
In the course, you will learn to:
- Call Meta’s official Llama API and choose the right model for a task
- Build a 12-language translator chatbot
- Detect objects, draw bounding boxes, and turn a UI screenshot into executable code
- Ask questions over entire books, papers, or GitHub repos without chunking
- Improve system prompts automatically with the Prompt Optimization Tool
- Generate and curate training data using the Synthetic Data Kit
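To give a flavor of the first lesson, here is a minimal sketch of calling a Llama 4 model through an OpenAI-compatible chat-completions endpoint. The base URL, model name, and environment-variable name are illustrative assumptions, not the course's exact values; the course itself walks through Meta's official Llama API.

```python
# Sketch of a chat-completions call to a Llama 4 model.
# Assumptions: an OpenAI-compatible endpoint, an API key in
# LLAMA_API_KEY, and an illustrative model name.
import json
import os
import urllib.request


def build_chat_request(prompt: str,
                       model: str = "Llama-4-Maverick") -> dict:
    """Assemble the JSON payload for a chat-completions request."""
    return {
        "model": model,  # illustrative model name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }


def call_llama(prompt: str,
               base_url: str = "https://api.llama.com/v1") -> str:
    """POST the payload and return the assistant's reply text.

    base_url is a placeholder; substitute your provider's endpoint.
    """
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['LLAMA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same request shape works for the multilingual-chatbot lesson: swap the user message for a translation instruction and iterate over target languages.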
Whether you’re integrating an open model into production or validating a new idea, Llama 4 lets you prototype multimodal, million-token applications quickly—and this course shows you how.
We can’t wait to see what you build with Llama 4!
| Lesson | Video | Code |
|---|---|---|
| Introduction | video | |
| Overview of Llama 4 | video | |
| Quickstart with Llama 4 and API | video | code |
| Image Grounding | video | code |
| Llama 4 Prompt Format | video | code |
| Long-Context Understanding | video | code |
| Prompt Optimization Tool | video | code |
| Synthetic Data Kit | video | code |
| Conclusion | video | |