This course explores Large Language Models (LLMs), from their fundamental principles to cutting-edge research directions. We aim to discuss the design and future of AI systems through lectures and student-led presentations.
The course is structured around topics such as transformer architectures, empirical behaviors, training paradigms, and safety considerations. Students will also explore emerging challenges and the broader implications of AI technologies.
- Introduce fundamental concepts in AI and LLMs.
- Discuss architecture and principles of LLMs, including transformers.
- Explore topics in LLMs and modern AI systems, such as training paradigms (pre-training, post-training, alignment), inference/test-time computation, embeddings/representations, evaluations, capabilities, safety/security (jailbreaking, oversight, hallucinations, uncertainty), and interpretability (circuits).
- Feature student presentations on key research papers and recent breakthroughs.
- Course Syllabus. Note: the official course title is "Sem In Adv Appl Of Stat: Advances In Artificial Intelligence"; however, we will use the unofficial title for our purposes.
- Lecture Notes. Note: these are a work in progress.
- What is AI? Definitions and Goals
- Historical Overview of Artificial Intelligence
- The Challenge of AGI and Feasibility of AI in Daily Tasks
LLM Architectures (Lec 03)
- Input/Output Processing in AI Systems
- Transformer Mechanisms and Attention (see the minimal attention sketch after this list)
- Key Architecture Details: Positional Encoding, Faster Attention
- Variations Across Model Architectures (e.g., GPT, Llama)
- Empirical Behavior: Scaling Laws, Emergence
- Extensions: Vision and Multimodal Language Models
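To make the attention topic above concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention with a causal mask. The shapes, the `causal` flag, and the toy inputs are illustrative assumptions, not the exact formulation of any particular model (real models add multiple heads, batching, and learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating, for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=True):
    """Single-head attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are (seq_len, d) arrays. With causal=True, each position
    may only attend to itself and earlier positions, as in
    decoder-only models like GPT.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (seq_len, seq_len) similarities
    if causal:
        future = np.triu(np.ones(scores.shape, dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)  # mask out future tokens
    return softmax(scores, axis=-1) @ V             # weighted average of values

# Toy usage: 4 tokens, 8-dimensional vectors (sizes chosen for illustration).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```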
- Pre-Training Paradigms
- Post-Training: Fine-Tuning and Instruction Tuning
- Alignment: Reward Learning and Reinforcement Learning from Human Feedback (RLHF)
- Simple and Advanced Sampling Methods (see the sampling sketch after this list)
- Prompting, Chain-of-Thought, and Tree-of-Thought
- Reasoning
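Since sampling methods appear in the list above, the following is a small, self-contained sketch of temperature and top-k sampling from a vector of logits. The function name and default values are hypothetical, chosen for illustration rather than taken from any library:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Sample one token id from raw model logits.

    temperature < 1 sharpens the distribution (closer to greedy decoding);
    top_k, if given, keeps only the k most likely tokens before sampling.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    if top_k is not None:
        cutoff = np.sort(logits)[-top_k]                  # k-th largest logit
        logits = np.where(logits < cutoff, -np.inf, logits)
    probs = np.exp(logits - logits.max())                 # stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy 5-token vocabulary: lowering temperature and shrinking top_k make
# the high-logit tokens dominate the draw.
logits = [2.0, 1.5, 0.3, -1.0, -2.0]
print(sample_next_token(logits, temperature=0.7, top_k=3))
```

Greedy decoding is the limiting case as temperature approaches zero; more advanced schemes (e.g., nucleus/top-p sampling) follow the same pattern of reshaping the distribution before drawing a token.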
- Jailbreaking and Oversight Mechanisms
- Addressing Hallucinations in AI Systems
- Ensuring Robustness and Security
- Embeddings and Representations (see the similarity sketch after this list)
- Transformer Circuits
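As a tiny illustration of the embeddings topic above, the sketch below compares embedding vectors with cosine similarity. The 4-dimensional vectors are invented for the example; real models produce learned embeddings with hundreds to thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between vectors: 1 = same direction, 0 = orthogonal."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings for three words (values made up for illustration).
cat    = [0.9, 0.1, 0.4, 0.0]
kitten = [0.85, 0.2, 0.35, 0.05]
car    = [0.0, 0.8, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # high (~0.99): related concepts
print(cosine_similarity(cat, car))     # low (~0.10): unrelated concepts
```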
After the initial lectures, students will lead presentations on topics of their choice, drawn from recent advances or open research questions in AI.
- Foundations of Large Language Models, U of Michigan, 2024
- Language Modeling from Scratch, Stanford, Spring 2024
- Recent Advances on Foundation Models, U of Waterloo, Winter 2024
- Large Models, U of Toronto, Winter 2025
- Andrej Karpathy's Neural Networks: Zero to Hero video lectures. An entirely code-based, hands-on tutorial on implementing basic autodiff, neural nets, language models, and a GPT-2 mini (124M params).
- The Llama 3 Herd of Models describes the Llama 3 family of "open" LLMs developed by Meta. Possibly the most information-dense public account of how a modern LLM is built.
- The corresponding sections in the Understanding Deep Learning book. See also the associated tutorial posts: LLMs; Transformers 1, 2, 3; Training and fine-tuning; Inference
- Foundations of Large Language Models book