diff --git a/_config.yml b/_config.yml
index c9e599b5..19987220 100644
--- a/_config.yml
+++ b/_config.yml
@@ -19,6 +19,6 @@ kramdown:
   syntax_highlighter: rouge
 
 # links to homeworks
-hw_1_colab: https://cs231n.github.io/assignments/2022/assignment1_colab.zip
-hw_2_colab: https://cs231n.github.io/assignments/2022/assignment2_colab.zip
-hw_3_colab: https://cs231n.github.io/assignments/2022/assignment3_colab.zip
+hw_1_colab: https://cs231n.github.io/assignments/2023/assignment1_colab.zip
+hw_2_colab: https://cs231n.github.io/assignments/2023/assignment2_colab.zip
+hw_3_colab: https://cs231n.github.io/assignments/2023/assignment3_colab.zip
diff --git a/assignments/2023/assignment1.md b/assignments/2023/assignment1.md
new file mode 100644
index 00000000..373a6acc
--- /dev/null
+++ b/assignments/2023/assignment1.md
@@ -0,0 +1,85 @@
+---
+layout: page
+title: Assignment 1
+mathjax: true
+permalink: /assignments2023/assignment1/
+---
+
+This assignment is due on **Friday, April 21 2023** at 11:59pm PST.
+
+Starter code containing Colab notebooks can be [downloaded here]({{site.hw_1_colab}}).
+
+- [Setup](#setup)
+- [Goals](#goals)
+- [Q1: k-Nearest Neighbor classifier](#q1-k-nearest-neighbor-classifier)
+- [Q2: Training a Support Vector Machine](#q2-training-a-support-vector-machine)
+- [Q3: Implement a Softmax classifier](#q3-implement-a-softmax-classifier)
+- [Q4: Two-Layer Neural Network](#q4-two-layer-neural-network)
+- [Q5: Higher Level Representations: Image Features](#q5-higher-level-representations-image-features)
+- [Submitting your work](#submitting-your-work)
+
+### Setup
+
+Please familiarize yourself with the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory) before starting the assignment. You should also watch the Colab walkthrough tutorial below.
+
+**Note**. Ensure you are periodically saving your notebook (`File -> Save`) so that you don't lose your progress if you step away from the assignment and the Colab VM disconnects.
+
+Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to the [submission instructions](#submitting-your-work).
+
+### Goals
+
+In this assignment you will practice putting together a simple image classification pipeline based on the k-Nearest Neighbor or the SVM/Softmax classifier. The goals of this assignment are as follows:
+
+- Understand the basic **Image Classification pipeline** and the data-driven approach (train/predict stages).
+- Understand the train/val/test **splits** and the use of validation data for **hyperparameter tuning**.
+- Develop proficiency in writing efficient **vectorized** code with numpy.
+- Implement and apply a k-Nearest Neighbor (**kNN**) classifier.
+- Implement and apply a Multiclass Support Vector Machine (**SVM**) classifier.
+- Implement and apply a **Softmax** classifier.
+- Implement and apply a **Two layer neural network** classifier.
+- Understand the differences and tradeoffs between these classifiers.
+- Get a basic understanding of performance improvements from using **higher-level representations** as opposed to raw pixels, e.g. color histograms, Histogram of Oriented Gradient (HOG) features, etc.
+
+### Q1: k-Nearest Neighbor classifier
+
+The notebook **knn.ipynb** will walk you through implementing the kNN classifier.
+
+### Q2: Training a Support Vector Machine
+
+The notebook **svm.ipynb** will walk you through implementing the SVM classifier.
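Q1 and Q2 both lean heavily on the vectorized numpy style called out in the goals above. As a rough illustration of that style (a minimal sketch, not taken from the starter code, with hypothetical array names), the pairwise L2 distances that kNN needs can be computed without any Python loops:

```python
import numpy as np

def l2_distances(X_test, X_train):
    """Pairwise Euclidean distances, fully vectorized.

    X_test: (M, D) array of test points; X_train: (N, D) array of training
    points. Returns an (M, N) array of distances, using the expansion
    ||x - y||^2 = ||x||^2 - 2*x.y + ||y||^2.
    """
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)  # (M, 1)
    train_sq = np.sum(X_train ** 2, axis=1)               # (N,)
    cross = X_test @ X_train.T                            # (M, N)
    # Clamp tiny negative values caused by floating-point roundoff.
    return np.sqrt(np.maximum(test_sq - 2.0 * cross + train_sq, 0.0))
```

Broadcasting expands the (M, 1) and (N,) terms across the full (M, N) grid; the same no-explicit-loops pattern recurs throughout the assignment.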
+
+### Q3: Implement a Softmax classifier
+
+The notebook **softmax.ipynb** will walk you through implementing the Softmax classifier.
+
+### Q4: Two-Layer Neural Network
+
+The notebook **two\_layer\_net.ipynb** will walk you through the implementation of a two-layer neural network classifier.
+
+### Q5: Higher Level Representations: Image Features
+
+The notebook **features.ipynb** will examine the improvements gained by using higher-level representations
+as opposed to using raw pixel values.
+
+### Submitting your work
+
+**Important**. Please make sure that the submitted notebooks have been run and the cell outputs are visible.
+
+Once you have completed all notebooks and filled out the necessary code, follow the instructions below to submit your work:
+
+**1.** Open `collect_submission.ipynb` in Colab and execute the notebook cells.
+
+This notebook/script will:
+
+* Generate a zip file of your code (`.py` and `.ipynb`) called `a1_code_submission.zip`.
+* Convert all notebooks into a single PDF file.
+
+If your submission for this step was successful, you should see the following display message:
+
+`### Done! Please submit a1_code_submission.zip and a1_inline_submission.pdf to Gradescope. ###`
+
+**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/527613).
+
+Remember to download `a1_code_submission.zip` and `a1_inline_submission.pdf` locally before submitting to Gradescope.
diff --git a/assignments/2023/assignment1_colab.zip b/assignments/2023/assignment1_colab.zip
new file mode 100644
index 00000000..4b82db41
Binary files /dev/null and b/assignments/2023/assignment1_colab.zip differ
diff --git a/assignments/2023/assignment2.md b/assignments/2023/assignment2.md
new file mode 100644
index 00000000..dec78807
--- /dev/null
+++ b/assignments/2023/assignment2.md
@@ -0,0 +1,94 @@
+---
+layout: page
+title: Assignment 2
+mathjax: true
+permalink: /assignments2023/assignment2/
+---
+
+This assignment is due on **Monday, May 08 2023** at 11:59pm PST.
+
+Starter code containing Colab notebooks can be [downloaded here]({{site.hw_2_colab}}).
+
+- [Setup](#setup)
+- [Goals](#goals)
+- [Q1: Multi-Layer Fully Connected Neural Networks](#q1-multi-layer-fully-connected-neural-networks)
+- [Q2: Batch Normalization](#q2-batch-normalization)
+- [Q3: Dropout](#q3-dropout)
+- [Q4: Convolutional Neural Networks](#q4-convolutional-neural-networks)
+- [Q5: PyTorch on CIFAR-10](#q5-pytorch-on-cifar-10)
+- [Q6: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images](#q6-network-visualization-saliency-maps-class-visualization-and-fooling-images)
+- [Submitting your work](#submitting-your-work)
+
+### Setup
+
+Please familiarize yourself with the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory) before starting the assignment. You should also watch the Colab walkthrough tutorial below.
+
+**Note**. Ensure you are periodically saving your notebook (`File -> Save`) so that you don't lose your progress if you step away from the assignment and the Colab VM disconnects.
+
+While we don't officially support local development, we've added a `requirements.txt` file that you can use to set up a virtual environment.
+
+Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to the [submission instructions](#submitting-your-work).
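The goals below include implementing optimizer update rules by hand (see Q1). As a rough preview, and only a sketch under assumed names rather than the starter code's actual interface, SGD with momentum carries a velocity buffer alongside each parameter:

```python
def sgd_momentum(w, dw, v, learning_rate=1e-2, momentum=0.9):
    """One SGD-with-momentum step (illustrative signature, not the assignment's API).

    w: parameter array, dw: gradient of the loss w.r.t. w, and v: velocity
    buffer with the same shape as w, initialized to zeros before training.
    """
    v = momentum * v - learning_rate * dw  # decay old velocity, add the new gradient step
    w = w + v                              # move parameters along the velocity
    return w, v
```

Vanilla SGD is the momentum=0 special case; rules like RMSProp and Adam follow the same pattern but carry extra per-parameter statistics.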
+
+### Goals
+
+In this assignment you will practice writing backpropagation code, and training Neural Networks and Convolutional Neural Networks. The goals of this assignment are as follows:
+
+- Understand **Neural Networks** and how they are arranged in layered architectures.
+- Understand and be able to implement (vectorized) **backpropagation**.
+- Implement various **update rules** used to optimize Neural Networks.
+- Implement **Batch Normalization** and **Layer Normalization** for training deep networks.
+- Implement **Dropout** to regularize networks.
+- Understand the architecture of **Convolutional Neural Networks** and get practice with training them.
+- Gain experience with a major deep learning framework, such as **TensorFlow** or **PyTorch**.
+- Explore various applications of image gradients, including saliency maps, fooling images, and class visualizations.
+
+### Q1: Multi-Layer Fully Connected Neural Networks
+
+The notebook `FullyConnectedNets.ipynb` will have you implement fully connected
+networks of arbitrary depth. To optimize these models you will implement several
+popular update rules.
+
+### Q2: Batch Normalization
+
+In the notebook `BatchNormalization.ipynb` you will implement batch normalization, and use it to train deep fully connected networks.
+
+### Q3: Dropout
+
+The notebook `Dropout.ipynb` will help you implement dropout and explore its effects on model generalization.
+
+### Q4: Convolutional Neural Networks
+
+In the notebook `ConvolutionalNetworks.ipynb` you will implement several new layers that are commonly used in convolutional networks.
+
+### Q5: PyTorch on CIFAR-10
+
+For this part, you will be working with PyTorch, a popular and powerful deep learning framework.
+
+Open up `PyTorch.ipynb`. There, you will learn how the framework works, culminating in training a convolutional network of your own design on CIFAR-10 to get the best performance you can.
+
+### Q6: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images
+
+The notebook `Network_Visualization.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
+
+### Submitting your work
+
+**Important**. Please make sure that the submitted notebooks have been run and the cell outputs are visible.
+
+Once you have completed all notebooks and filled out the necessary code, follow the instructions below to submit your work:
+
+**1.** Open `collect_submission.ipynb` in Colab and execute the notebook cells.
+
+This notebook/script will:
+
+* Generate a zip file of your code (`.py` and `.ipynb`) called `a2_code_submission.zip`.
+* Convert all notebooks into a single PDF file.
+
+If your submission for this step was successful, you should see the following display message:
+
+`### Done! Please submit a2_code_submission.zip and a2_inline_submission.pdf to Gradescope. ###`
+
+**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/527613).
+
+Remember to download `a2_code_submission.zip` and `a2_inline_submission.pdf` locally before submitting to Gradescope.
diff --git a/assignments/2023/assignment2_colab.zip b/assignments/2023/assignment2_colab.zip
new file mode 100644
index 00000000..a0c92a0e
Binary files /dev/null and b/assignments/2023/assignment2_colab.zip differ
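For a concrete sense of what Q3 above asks for: dropout is usually written in the "inverted" form from the course notes, which scales activations at training time so that test-time inference is left untouched. A minimal sketch (assumed names and conventions, not the notebook's exact interface, with `p` as the keep probability):

```python
import numpy as np

def dropout_forward(x, p=0.5, train=True):
    """Inverted dropout sketch; p is the probability of *keeping* a unit.

    At training time, zero out each unit with probability 1 - p and scale
    the survivors by 1/p so the expected activation matches test time;
    at test time the layer is an identity.
    """
    if not train:
        return x
    mask = (np.random.rand(*x.shape) < p) / p  # boolean mask, rescaled by 1/p
    return x * mask
```

The mask must also be cached for the backward pass, which multiplies the upstream gradient by the same mask.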
diff --git a/index.html b/index.html
index 71b0b486..79297c41 100644
--- a/index.html
+++ b/index.html
@@ -4,7 +4,9 @@
-      These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition. For questions/concerns/bug reports, please submit a pull request directly to our git repo.
+      These notes accompany the Stanford CS class CS231n: Convolutional Neural
+      Networks for Visual Recognition. For questions/concerns/bug reports, please submit a pull request directly to
+      our git repo.
@@ -13,20 +15,21 @@
-      Spring 2022 Assignments
+      Spring 2023 Assignments
+      (To be released) Assignment #2: Fully Connected and Convolutional Nets, Batch Normalization, Dropout, PyTorch &
+      Network Visualization
+      (To be released) Assignment #3: Image Captioning with RNNs and Transformers, Network Visualization,
+      Generative Adversarial Networks, Self-Supervised Contrastive Learning
-      Module 0: Preparation
+      Module 0: Preparation
-      Module 1: Neural Networks
+      Module 1: Neural Networks
-      Image Classification: Data-driven Approach, k-Nearest Neighbor, train/val/test splits
-      L1/L2 distances, hyperparameter search, cross-validation
+      Image Classification: Data-driven Approach, k-Nearest Neighbor, train/val/test splits
+      L1/L2 distances, hyperparameter search, cross-validation
-      Linear classification: Support Vector Machine, Softmax
-      parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo
+      Linear classification: Support Vector Machine, Softmax
+      parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo
-      Optimization: Stochastic Gradient Descent
-      optimization landscapes, local search, learning rate, analytic/numerical gradient
+      Optimization: Stochastic Gradient Descent
+      optimization landscapes, local search, learning rate, analytic/numerical gradient
-      Backpropagation, Intuitions
-      chain rule interpretation, real-valued circuits, patterns in gradient flow
+      Backpropagation, Intuitions
+      chain rule interpretation, real-valued circuits, patterns in gradient flow
-      Neural Networks Part 1: Setting up the Architecture
-      model of a biological neuron, activation functions, neural net architecture, representational power
+      Neural Networks Part 1: Setting up the Architecture
+      model of a biological neuron, activation functions, neural net architecture, representational power
-      Neural Networks Part 2: Setting up the Data and the Loss
-      preprocessing, weight initialization, batch normalization, regularization (L2/dropout), loss functions
+      Neural Networks Part 2: Setting up the Data and the Loss
+      preprocessing, weight initialization, batch normalization, regularization (L2/dropout), loss functions
-      Neural Networks Part 3: Learning and Evaluation
-      gradient checks, sanity checks, babysitting the learning process, momentum (+nesterov), second-order methods, Adagrad/RMSprop, hyperparameter optimization, model ensembles
+      Neural Networks Part 3: Learning and Evaluation
+      gradient checks, sanity checks, babysitting the learning process, momentum (+nesterov), second-order methods,
+      Adagrad/RMSprop, hyperparameter optimization, model ensembles
-      Putting it together: Minimal Neural Network Case Study
-      minimal 2D toy data example
+      Putting it together: Minimal Neural Network Case Study
+      minimal 2D toy data example
-      Module 2: Convolutional Neural Networks
+      Module 2: Convolutional Neural Networks
-      Convolutional Neural Networks: Architectures, Convolution / Pooling Layers
-      layers, spatial arrangement, layer patterns, layer sizing patterns, AlexNet/ZFNet/VGGNet case studies, computational considerations
+      Convolutional Neural Networks: Architectures, Convolution / Pooling Layers
+      layers, spatial arrangement, layer patterns, layer sizing patterns, AlexNet/ZFNet/VGGNet case studies,
+      computational considerations
-      Understanding and Visualizing Convolutional Neural Networks
-      tSNE embeddings, deconvnets, data gradients, fooling ConvNets, human comparisons
+      Understanding and Visualizing Convolutional Neural Networks
+      tSNE embeddings, deconvnets, data gradients, fooling ConvNets, human comparisons
\ No newline at end of file