Lectures
- Lecture 1: Introduction
- Lecture 2: Overview of Statistical Learning
  Reading Resource: Chapter 5 of deeplearningbook
- Lecture 3
- Lecture 4: Linear Classification
- Lecture 5: Multilayer Perceptron (Feedforward DNN)
  Reading Resource: Sections 6.1, 6.2, 6.3 of deeplearningbook
- Lecture 8: Backpropagation
  Reading Resource: Section 6.5 of deeplearningbook
- Lecture 9: Optimization
  Reading Resource: Sections 8.1, 8.3, 8.5 of deeplearningbook
- Lab: A Simple Model for MNIST
- Lecture 10: Convolutional Neural Networks
  Reading Resource: Chapter 9 of deeplearningbook
- Lecture 11: Training DNNs
  Reading Resource: Sections 7.12, 8.4, 8.7 of deeplearningbook
- Lecture 12: Word Embeddings
  Reading Resource: Section 6.8 of [Speech and Language Processing. Daniel Jurafsky & James H. Martin.](https://web.stanford.edu/~jurafsky/slp3/)
- Lecture 13: Recurrent Neural Networks
  Reading Resource: Chapter 9 of [Speech and Language Processing. Daniel Jurafsky & James H. Martin.](https://web.stanford.edu/~jurafsky/slp3/)
- Lecture 14: LSTM Networks
  Reading Resource: Chapter 9 of [Speech and Language Processing. Daniel Jurafsky & James H. Martin.](https://web.stanford.edu/~jurafsky/slp3/)
- Lecture 15: Attention and Machine Translation
  Reading Resource: Chapter 13 of [Speech and Language Processing. Daniel Jurafsky & James H. Martin.](https://web.stanford.edu/~jurafsky/slp3/)
- Lecture 16: Transformers
  Reading Resource: Chapter 10 of [Speech and Language Processing. Daniel Jurafsky & James H. Martin.](https://web.stanford.edu/~jurafsky/slp3/)
- Lecture 17: Pretrained Models: BERT, GPT
- Lecture 18: Neural Tangent Kernel
- Lecture 19: Diffusion Models
  Reading Resource: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456)
- Lecture 20: Graph Neural Networks
- Lecture 21: Information Theory of Neural Networks
- Lecture 23: Generative Adversarial Networks
- Review notes