This workshop will teach participants the core components of GPT-style large language models and how to build one themselves. We will break down key concepts like attention mechanisms, tokenization, and transformer architectures in a hands-on, practical manner. By the end, attendees will have built a basic transformer model from scratch and understand how these systems power modern AI applications.
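As a taste of the from-scratch approach the workshop takes, here is a minimal NumPy sketch of scaled dot-product attention, the core operation behind the attention mechanisms mentioned above. The function name and toy shapes are illustrative assumptions, not the workshop's actual materials.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_q, seq_k) similarity scores
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of value vectors

# Toy example: 3 tokens, 4-dimensional embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query token
```

In a full transformer, this operation is wrapped with learned projection matrices for Q, K, and V and repeated across multiple heads and layers.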
EPFL/Harvard LiGHT
AI Researcher