A-deep-understanding-of-deep-learning

Chapters

Generated by https://github.com/PolunLin/Udemy-copy-chapter

0. Introduction 2 lectures • 17 min
0.0. Using Udemy like a pro 09:25
1. Download all course materials 2 lectures • 8 min
1.0. Downloading and using the code 06:29
1.1. My policy on code-sharing 01:38
2. Concepts in deep learning 5 lectures • 1 hr 16 min
2.0. What is an artificial neural network? 16:02
2.1. How models "learn" 12:26
2.2. The role of DL in science and knowledge 16:43
2.3. Are artificial "neurons" like biological neurons? 13:03
3. About the Python tutorial 1 lecture • 4 min
3.0. Should you watch the Python tutorial? 04:25
4. Math, numpy, PyTorch 19 lectures • 3 hr 21 min
4.0. PyTorch or TensorFlow? 00:44
4.1. Introduction to this section 02:06
4.2. Spectral theories in mathematics 09:16
4.3. Terms and datatypes in math and computers 07:05
4.4. Converting reality to numbers 06:33
4.5. Vector and matrix transpose 06:58
4.6. OMG it's the dot product! 09:45
4.7. Matrix multiplication 15:27
4.8. Softmax 19:26
4.9. Logarithms 08:26
4.10. Entropy and cross-entropy 18:18
4.11. Min/max and argmin/argmax 12:47
4.12. Mean and variance 15:34
4.13. Random sampling and sampling variability 11:18
4.14. Reproducible randomness via seeding 08:37
4.15. The t-test 13:57
4.16. Derivatives: intuition and polynomials 16:39
4.17. Derivatives find minima 08:32
4.18. Derivatives: product and chain rules 10:00
5. Gradient descent 10 lectures • 1 hr 57 min
5.0. Overview of gradient descent 14:15
5.1. What about local minima? 11:56
5.2. Gradient descent in 1D 17:11
5.3. CodeChallenge: unfortunate starting value 11:30
5.4. CodeChallenge: 2D gradient ascent 14:48
5.5. Parametric experiments on g.d. 05:16
5.6. CodeChallenge: fixed vs. dynamic learning rate 18:56
5.7. Vanishing and exploding gradients 15:33
5.8. Tangent: Notebook revision history 06:04
6. ANNs (Artificial Neural Networks) 21 lectures • 5 hr 11 min
6.0. The perceptron and ANN architecture 19:50
6.1. A geometric view of ANNs 13:38
6.2. ANN math part 1 (forward prop) 16:22
6.3. ANN math part 2 (errors, loss, cost) 10:54
6.4. ANN math part 3 (backprop) 12:10
6.5. ANN for regression 24:09
6.6. CodeChallenge: manipulate regression slopes 18:58
6.7. ANN for classifying qwerties 22:23
6.8. Multilayer ANN 23:46
6.9. Linear solutions to linear problems 19:51
6.10. Why multilayer linear models don't exist 08:14
6.11. Multi-output ANN (iris dataset) 06:20
6.12. CodeChallenge: more qwerties! 25:54
6.13. Comparing the number of hidden units 11:56
6.14. Depth vs. breadth: number of parameters 09:59
6.15. Defining models using sequential vs. class 17:25
6.16. Model depth vs. breadth 13:17
6.17. CodeChallenge: convert sequential to class 20:31
6.18. Diversity of ANN visual representations 06:37
6.19. Reflection: Are DL models understandable yet? 00:18
7. Overfitting and cross-validation 8 lectures • 1 hr 48 min
7.0. What is overfitting and is it as bad as they say? 12:28
7.1. Cross-validation 17:13
7.2. Generalization 06:09
7.3. Cross-validation -- manual separation 12:39
7.4. Cross-validation -- scikitlearn 21:01
7.5. Cross-validation -- DataLoader 20:27
7.6. Splitting data into train, devset, test 09:45
7.7. Cross-validation on regression 08:09
8. Regularization 12 lectures • 2 hr 38 min
8.0. Regularization: Concept and methods 13:38
8.1. train() and eval() modes 07:14
8.2. Dropout regularization 21:56
8.3. Dropout regularization in practice 23:13
8.4. Dropout example 2 06:33
8.5. Weight regularization (L1/L2): math 18:25
8.6. L2 regularization in practice 13:24
8.7. L1 regularization in practice 12:22
8.8. Training in mini-batches 11:32
8.9. Batch training in action 10:47
8.10. The importance of equal batch sizes 06:59
8.11. CodeChallenge: Effects of mini-batch size 11:57
9. Metaparameters (activations, optimizers) 24 lectures • 4 hr 52 min
9.0. What are "metaparameters"? 05:02
9.1. The "wine quality" dataset 17:29
9.2. CodeChallenge: Minibatch size in the wine dataset 15:38
9.3. Data normalization 13:12
9.4. The importance of data normalization 09:33
9.5. Batch normalization 13:16
9.6. Batch normalization in practice 07:38
9.7. CodeChallenge: Batch-normalize the qwerties 05:06
9.8. Activation functions 17:59
9.9. Activation functions in PyTorch 12:12
9.10. Activation functions comparison 09:27
9.11. CodeChallenge: Predict sugar 07:48
9.12. Loss functions 17:06
9.13. Loss functions in PyTorch 16:50
9.14. More practice with multioutput ANNs 18:41
9.15. Optimizers (minibatch, momentum) 14:05
9.16. SGD with momentum 18:41
9.17. Optimizers (RMSprop, Adam) 07:46
9.18. Optimizers comparison 15:40
9.19. CodeChallenge: Optimizers and... something 10:17
9.20. CodeChallenge: Adam with L2 regularization 06:57
9.21. Learning rate decay 07:42
9.22. How to pick the right metaparameters 12:15
10. FFNs (Feed-Forward Networks) 12 lectures • 2 hr 15 min
10.0. What are fully-connected and feedforward networks? 04:57
10.1. The MNIST dataset 12:33
10.2. FFN to classify digits 22:20
10.3. CodeChallenge: Binarized MNIST images 05:24
10.4. CodeChallenge: Data normalization 16:16
10.5. Distributions of weights pre- and post-learning 14:48
10.6. CodeChallenge: MNIST and breadth vs. depth 12:35
10.7. CodeChallenge: Optimizers and MNIST 07:06
10.8. Shifted MNIST 08:00
10.9. CodeChallenge: The mystery of the missing 7 11:25
10.10. Universal approximation theorem 10:47
11. More on data 11 lectures • 2 hr 25 min
11.0. Anatomy of a torch dataset and dataloader 17:57
11.1. Data size and network size 16:35
11.2. CodeChallenge: unbalanced data 20:05
11.3. What to do about unbalanced designs? 07:45
11.4. Data oversampling in MNIST 16:30
11.5. Data noise augmentation (with devset+test) 13:16
11.6. Data feature augmentation 19:40
11.7. Getting data into colab 06:05
11.8. Save and load trained models 06:14
11.9. Save the best-performing model 15:18
11.10. Where to find online datasets 05:32
12. Measuring model performance 8 lectures • 1 hr 20 min
12.0. Two perspectives of the world 07:01
12.1. Accuracy, precision, recall, F1 12:39
12.2. APRF in code 06:42
12.3. APRF example 1: wine quality 13:34
12.4. APRF example 2: MNIST 12:01
12.5. CodeChallenge: MNIST with unequal groups 09:14
12.6. Computation time 09:55
12.7. Better performance in test than train? 08:35
13. FFN milestone projects 6 lectures • 1 hr 2 min
13.0. Project 1: A gratuitously complex adding machine 07:05
13.1. Project 1: My solution 11:18
13.2. Project 2: Predicting heart disease 07:14
13.3. Project 3: FFN for missing data interpolation 18:21
13.4. Project 3: My solution 09:35
14. Weight inits and investigations 10 lectures • 2 hr 19 min
14.0. Explanation of weight matrix sizes 11:54
14.1. A surprising demo of weight initializations 15:52
14.2. Theory: Why and how to initialize weights 12:46
14.3. CodeChallenge: Weight variance inits 13:14
14.4. Xavier and Kaiming initializations 15:42
14.5. CodeChallenge: Xavier vs. Kaiming 16:54
14.6. CodeChallenge: Identically random weights 12:40
14.7. Freezing weights during learning 12:58
14.8. Learning-related changes in weights 21:55
14.9. Use default inits or apply your own? 04:36
15. Autoencoders 6 lectures • 1 hr 51 min
15.0. What are autoencoders and what do they do? 11:42
15.1. Denoising MNIST 15:48
15.2. CodeChallenge: How many units? 19:52
15.3. The latent code of MNIST 17:55
15.4. Autoencoder with tied weights 21:57
16. Running models on a GPU 3 lectures • 32 min
16.0. What is a GPU and why use it? 15:07
16.1. Implementation 10:13
16.2. CodeChallenge: Run an experiment on the GPU 06:46
17. Convolution and transformations 12 lectures • 2 hr 56 min
17.0. Convolution: concepts 21:33
17.1. Feature maps and convolution kernels 09:32
17.2. Convolution in code 21:05
17.3. Convolution parameters (stride, padding) 12:14
17.4. The Conv2 class in PyTorch 13:23
17.5. CodeChallenge: Choose the parameters 07:10
17.6. Transpose convolution 13:41
17.7. Max/mean pooling 18:35
17.8. Pooling in PyTorch 13:29
17.9. To pool or to stride? 09:35
17.10. Image transforms 16:47
17.11. Creating and using custom DataLoaders 19:06
18. Understand and design CNNs 16 lectures • 4 hr 10 min
18.0. The canonical CNN architecture 10:47
18.1. CNN to classify MNIST digits 26:06
18.2. CNN on shifted MNIST 08:36
18.3. Classify Gaussian blurs 24:10
18.4. Examine feature map activations 27:50
18.5. CodeChallenge: Softcode internal parameters 16:48
18.6. CodeChallenge: How wide the FC? 11:25
18.7. Do autoencoders clean Gaussians? 17:10
18.8. CodeChallenge: AEs and occluded Gaussians 09:36
18.9. CodeChallenge: Custom loss functions 20:15
18.10. The EMNIST dataset (letter recognition) 16:59
18.11. Dropout in CNNs 24:59
18.12. CodeChallenge: How low can you go? 10:14
18.13. CodeChallenge: Varying number of channels 06:45
18.14. So many possibilities! How to create a CNN? 13:39
19. CNN milestone projects 5 lectures • 40 min
19.0. Project 1: Import and classify CIFAR10 07:15
19.1. Project 1: My solution 12:01
19.2. Project 2: CIFAR-autoencoder 04:51
19.3. Project 3: FMNIST 03:52
20. Transfer learning 8 lectures • 1 hr 46 min
20.0. Transfer learning: What, why, and when? 16:52
20.1. Transfer learning: MNIST -> FMNIST 10:06
20.2. CodeChallenge: letters to numbers 14:06
20.3. Famous CNN architectures 06:46
20.4. Transfer learning with ResNet-18 16:43
20.5. Pretraining with autoencoders 03:41
20.6. CIFAR10 with autoencoder-pretrained model 20:01
21. Style transfer 5 lectures • 58 min
21.0. What is style transfer and how does it work? 04:36
21.1. The style transfer algorithm 12:37
21.2. Transferring the screaming bathtub 10:58
21.3. CodeChallenge: Style transfer with AlexNet 22:16
22. Generative adversarial networks 7 lectures • 1 hr 25 min
22.0. GAN: What, why, and how 17:22
22.1. Linear GAN with MNIST 21:55
22.2. CodeChallenge: Linear GAN with FMNIST 09:50
22.3. CNN GAN with Gaussians 15:06
22.4. CodeChallenge: Gaussians with fewer layers 06:05
22.5. CNN GAN with FMNIST 06:24
22.6. CodeChallenge: CNN GAN with CIFAR 07:51
23. RNNs (Recurrent Neural Networks) (and GRU/LSTM) 9 lectures • 2 hr 48 min
23.0. Leveraging sequences in deep learning 12:53
23.1. How RNNs work 15:14
23.2. The RNN class in PyTorch 17:44
23.3. Predicting alternating sequences 19:30
23.4. CodeChallenge: sine wave extrapolation 24:49
23.5. GRU and LSTM 15:51
23.6. The LSTM and GRU classes 23:08
23.7. Lorem ipsum 13:26
24. Ethics of deep learning 5 lectures • 49 min
24.0. Will AI save us or destroy us? 09:40
24.1. Example case studies 06:39
24.2. Some other possible ethical scenarios 10:35
24.3. Will deep learning take our jobs? 10:27
24.4. Accountability and making ethical AI 11:22
25. Where to go from here? 2 lectures • 24 min
25.0. How to learn topic _X_ in deep learning? 08:08
25.1. How to read academic DL papers 16:00
26. Python intro: Data types 8 lectures • 1 hr 41 min
26.0. How to learn from the Python tutorial 03:25
26.1. Variables 18:14
26.2. Math and printing 18:31
26.3. Lists (1 of 2) 13:31
26.4. Lists (2 of 2) 09:29
26.5. Tuples 07:40
26.6. Booleans 18:19
26.7. Dictionaries 11:51
27. Python intro: Indexing, slicing 2 lectures • 24 min
27.0. Indexing 12:30
27.1. Slicing 11:45
28. Python intro: Functions 8 lectures • 1 hr 41 min
28.0. Inputs and outputs 07:01
28.1. Python libraries (numpy) 14:20
28.2. Python libraries (pandas) 13:57
28.3. Getting help on functions 07:36
28.4. Creating functions 20:27
28.5. Global and local variable scopes 13:20
28.6. Copies and referents of variables 05:45
28.7. Classes and object-oriented programming 18:46
29. Python intro: Flow control 10 lectures • 2 hr 36 min
29.0. If-else statements 15:03
29.1. If-else statements, part 2 16:58
29.2. For loops 17:37
29.3. Continue 12:11
29.4. Initializing variables 07:24
29.5. Single-line loops (list comprehension) 18:01
29.6. while loops 15:25
29.7. Broadcasting in numpy 19:30
29.8. Function error checking and handling 15:41
30. Python intro: Text and plots 7 lectures • 1 hr 42 min
30.0. Printing and string interpolation 17:18
30.1. Plotting dots and lines 12:55
30.2. Subplot geometry 16:10
30.3. Making the graphs look nicer 18:48
30.4. Seaborn 11:08
30.5. Images 17:59
30.6. Export plots in low and high resolution 07:58
31. Bonus section 1 lecture • 1 min
31.0. Bonus content 00:53

Future work

  1. Add the chapter number to each folder name
  2. Add the lecture number to each file name
  3. Add an md file to each folder (a scripted sketch of these steps follows below)
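
These steps could be automated. Below is a minimal Python sketch; it assumes one folder per chapter named exactly after the chapter title, notebooks stored as .ipynb files, and a CHAPTERS list mirroring the course order — all assumptions about the layout, not a description of the current repo.

```python
# Minimal sketch: prepend numbers to folder/file names and add an md file per folder.
# CHAPTERS, ROOT, and the *.ipynb glob are illustrative assumptions; adjust to the repo.
from pathlib import Path

ROOT = Path(".")  # repository root (assumed)

CHAPTERS = [
    "Introduction",
    "Download all course materials",
    "Concepts in deep learning",
    # ... remaining chapter titles, in course order
]

for num, title in enumerate(CHAPTERS):
    folder = ROOT / title
    if not folder.is_dir():
        continue  # skip chapters whose folder is missing or named differently

    # 1. add number in folder name
    numbered = ROOT / f"{num:02d}_{title.replace(' ', '_')}"
    folder.rename(numbered)

    # 2. add number in file name
    for i, nb in enumerate(sorted(numbered.glob("*.ipynb"))):
        nb.rename(numbered / f"{num:02d}.{i:02d}_{nb.name}")

    # 3. add md file in each folder
    (numbered / "README.md").write_text(f"# {num}. {title}\n", encoding="utf-8")
```

Note that running the script twice would double-prefix names, so a real version should first check whether a numeric prefix is already present.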
