
Memory and Machine Learning Course with PyTorch tutorial, for Master 1 Science Cognitive and Applications, University of Lorraine


lethienhoa/ML-Course


Lecturer

Hoa T. Le

Contact me at <first_name>.<last_name>@loria.fr or at my office B213 (Loria) (please make an appointment first).

Final project

Link. Note: the deadline is extended to midnight 27/4.

Overview

The aim of this course is to introduce computational, numerical and distributed memories from a theoretical and epistemological standpoint as well as neural networks and their use in cognitive science. Concerning machine learning, the course will focus on various model learners such as Markov Chains, Reinforcement Learning and Neural Networks.

Target audience

This course is for Master 1 Science Cognitive and Applications (University of Lorraine). It is an introductory course, assuming no prior knowledge of Machine Learning.

Course Organization

  • 30 hours = 10 weekly work sessions of 3 hours
  • Sessions = half lectures / half exercises or practicals
  • Evaluation: individual project
  • The last 2 (maybe 3) work sessions will be reserved for work on the project

20% project

You can choose one of these books, read it (entirely, or at least 5 chapters), and write a one-page summary.

John Tabak's series:

  • Probability and Statistics: The Science of Uncertainty (History of Mathematics)
  • Algebra: Sets, Symbols, and the Language of Thought (History of Mathematics)
  • Geometry: The Language of Space and Form (History of Mathematics)
  • Beyond Geometry: A New Mathematics of Space and Form (History of Mathematics)
  • Numbers: Computers, Philosophers, and the Search for Meaning (History of Mathematics)
  • Mathematics and the Laws of Nature: Developing the Language of Science (History of Mathematics)

Michael Guillen:

  • Five Equations That Changed the World: The Power and Poetry of Mathematics

Ian Stewart:

  • In Pursuit of the Unknown: 17 Equations That Changed the World

Book References

  • Reinforcement Learning: An Introduction. Richard S. Sutton and Andrew G. Barto (1998).
  • Numerical Optimization. Jorge Nocedal and Stephen J. Wright (1999).
  • The Elements of Statistical Learning. Trevor Hastie, Robert Tibshirani and Jerome H. Friedman (2001).
  • Inference in Hidden Markov Models. Olivier Cappé, Eric Moulines and Tobias Rydén (2005).
  • Pattern Recognition and Machine Learning. Christopher M. Bishop (2006).
  • Deep Learning. Ian Goodfellow, Yoshua Bengio and Aaron Courville (2016).

Syllabus

Lecture 1. Introduction about Artificial Intelligence (Slides)

Reading

More Reading

Practical: Learning basic PyTorch (open tutorial)

  • What is PyTorch?
  • Initialization and matrix computation
  • Conversion between PyTorch and NumPy
  • Autograd: automatic differentiation package

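The tutorial topics above can be previewed in a few lines (a minimal sketch, assuming PyTorch and NumPy are installed; all values are illustrative):

```python
import numpy as np
import torch

# Initialization and matrix computation
x = torch.rand(3, 4)                 # uniform random 3x4 tensor
y = torch.ones(4, 2)
z = x @ y                            # matrix product, shape (3, 2)

# Conversion between PyTorch and NumPy (shares memory on CPU)
a = z.numpy()                        # tensor -> ndarray
b = torch.from_numpy(np.eye(3))      # ndarray -> tensor

# Autograd: automatic differentiation
w = torch.tensor([2.0], requires_grad=True)
loss = (w ** 2 + 3 * w).sum()        # d(loss)/dw = 2w + 3 = 7 at w = 2
loss.backward()
print(w.grad)                        # tensor([7.])
```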
Installation instructions:

Lecture 2. Baseline models and Loss functions (Slides)

  • A classification pipeline
  • K-Nearest Neighbors (KNN)
  • Linear Classifier
  • Loss function
  • Regularization
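As a taste of the KNN part of the lecture, a k-nearest-neighbours classifier fits in a few lines of plain Python (the toy data and function name below are illustrative, not from the practical):

```python
def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # squared Euclidean distance to every training point
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train_x]
    # indices of the k closest points, then their labels
    nearest = sorted(range(len(train_x)), key=lambda i: dists[i])[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# toy 2-D data: class 0 near the origin, class 1 near (5, 5)
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, (0.5, 0.5)))  # 0
print(knn_predict(X, y, (5.5, 5.5)))  # 1
```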

Reading

Practical: Training an Image Classifier on CIFAR10 data from scratch (TP 1)

  • Define the network
  • Loss function
  • Backprop
  • Update the weights
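The four steps above map directly onto a PyTorch training loop. A minimal sketch on random data (the network sizes and fake mini-batch are illustrative stand-ins for CIFAR10, not the TP's actual model):

```python
import torch
import torch.nn as nn

# 1. Define the network (a small fully connected classifier)
net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

# 2. Loss function, plus an optimizer to update the weights
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

inputs = torch.randn(8, 32)           # a fake mini-batch
targets = torch.randint(0, 10, (8,))  # fake class labels

losses = []
for step in range(20):
    optimizer.zero_grad()             # clear gradients from the previous step
    loss = criterion(net(inputs), targets)
    loss.backward()                   # 3. Backprop: compute gradients
    optimizer.step()                  # 4. Update the weights
    losses.append(loss.item())

print(losses[0], losses[-1])          # the loss should decrease
```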

Prerequisite: Linear Algebra

Lecture 3-4. Optimization (Slides) (Revision)

  • Linear Least Squares Optimization
    • Cholesky decomposition
    • QR decomposition
  • Iterative methods
    • Steepest gradient descent
    • Momentum, Nesterov
    • Adaptive learning rates (Adagrad, Adadelta, Adam)
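The iterative methods listed above are all available in `torch.optim`. A sketch comparing momentum SGD and Adam on the toy objective f(x) = (x - 3)², where both should converge to the minimiser x = 3 (the learning rates below are illustrative choices, not recommendations):

```python
import torch

def minimise(opt_name, steps=100):
    """Run a torch optimiser on f(x) = (x - 3)^2 starting from x = 0."""
    x = torch.zeros(1, requires_grad=True)
    if opt_name == "sgd_momentum":
        opt = torch.optim.SGD([x], lr=0.1, momentum=0.9)   # momentum
    else:
        opt = torch.optim.Adam([x], lr=0.3)                # adaptive learning rate
    for _ in range(steps):
        opt.zero_grad()
        loss = (x - 3.0) ** 2
        loss.backward()
        opt.step()
    return x.item()

print(minimise("sgd_momentum"))  # close to 3
print(minimise("adam"))          # close to 3
```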

Reading

Practical: Neural Networks for Text (TP 2)

  • Text Classification with Logistic Regression on BOW Sentence representation
  • Text Classification with Word Embeddings
  • N-Gram Language Modeling and Continuous BOW
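The first item can be sketched end to end: logistic regression over a bag-of-words representation is just a linear layer trained with cross-entropy. The toy corpus and hyperparameters below are illustrative, not the TP's data:

```python
import torch
import torch.nn as nn

# Toy corpus: label 1 = positive, 0 = negative
data = [("good great film", 1), ("bad awful film", 0),
        ("great acting", 1), ("awful plot", 0)]
vocab = {w: i for i, w in enumerate({w for text, _ in data for w in text.split()})}

def bow_vector(text):
    """Bag-of-words count vector over the toy vocabulary."""
    v = torch.zeros(len(vocab))
    for w in text.split():
        v[vocab[w]] += 1
    return v

model = nn.Linear(len(vocab), 2)      # logistic regression = linear + softmax
loss_fn = nn.CrossEntropyLoss()       # applies log-softmax internally
opt = torch.optim.SGD(model.parameters(), lr=0.5)

for epoch in range(50):
    for text, label in data:
        opt.zero_grad()
        logits = model(bow_vector(text)).unsqueeze(0)
        loss_fn(logits, torch.tensor([label])).backward()
        opt.step()

print(model(bow_vector("great film")).argmax().item())  # expected: 1
```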

Prerequisite:

Lecture 5. Neural Network (Slides)

  • Feed Forward Neural Network
  • Backpropagation
  • Recurrent Neural Network
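Backpropagation can be checked by hand on a tiny feed-forward network: autograd's gradient should match the one derived with the chain rule. A sketch (the shapes are arbitrary):

```python
import torch

# A one-hidden-layer feed-forward network, written out explicitly
x = torch.tensor([[1.0, 2.0]])
W1 = torch.randn(2, 3, requires_grad=True)
W2 = torch.randn(3, 1, requires_grad=True)

h = torch.tanh(x @ W1)        # hidden layer with tanh activation
y = h @ W2                    # linear output layer
loss = (y ** 2).sum()
loss.backward()               # backpropagation through the graph

# Chain rule for W2: dloss/dy = 2y, so dloss/dW2 = h^T (2y)
manual_grad_W2 = h.t() @ (2 * y)
print(torch.allclose(W2.grad, manual_grad_W2))  # True
```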

Reading

Practical: (TP 3)

More Reading

Lecture 6. Long Short-Term Memory Networks (Slides)

  • Vanishing gradient problem of RNNs
  • Training recurrent networks (activation functions, gradient clipping, initialization, ...)
  • LSTM (Stacked LSTMs, BiLSTM)
  • Sequence-to-Sequence model for Machine Translation
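Stacked and bidirectional LSTMs are both a single call in PyTorch; the main difficulty is keeping the tensor shapes straight. A sketch (all sizes are illustrative):

```python
import torch
import torch.nn as nn

batch, seq_len, dim, hidden = 4, 7, 16, 32

# A 2-layer bidirectional LSTM: "stacked" via num_layers, "Bi" via bidirectional
lstm = nn.LSTM(input_size=dim, hidden_size=hidden,
               num_layers=2, bidirectional=True, batch_first=True)

x = torch.randn(batch, seq_len, dim)
output, (h_n, c_n) = lstm(x)

# output: hidden states for every time step, both directions concatenated
print(output.shape)  # torch.Size([4, 7, 64])
# h_n / c_n: final states, one per layer and direction
print(h_n.shape)     # torch.Size([4, 4, 32])  (num_layers * 2, batch, hidden)
```

In a sequence-to-sequence model, `h_n`/`c_n` from the encoder would initialise the decoder's LSTM state.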

Reading

Practical: (TP 4)

  • Translation with a Sequence to Sequence Network and Attention (from scratch)

More Reading

Lecture 7. Training Neural Networks (Slides)

  • Activation functions
  • Data preprocessing
  • Weight initialization
  • Batch normalization
  • Regularization: Dropout
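Batch normalization and dropout are the two items above whose behaviour changes between training and evaluation, which is why `net.train()` / `net.eval()` matter. A minimal sketch (the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 20),
    nn.BatchNorm1d(20),   # normalises activations over the mini-batch
    nn.ReLU(),
    nn.Dropout(p=0.5),    # randomly zeroes half the activations while training
    nn.Linear(20, 2),
)

x = torch.randn(8, 10)

net.train()               # dropout active, batch-norm uses batch statistics
y_train = net(x)

net.eval()                # dropout off, batch-norm uses running statistics
with torch.no_grad():
    y1, y2 = net(x), net(x)
print(torch.equal(y1, y2))  # True: eval mode is deterministic
```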

Reading

Practical:

Lecture 8. Autoencoders (Slides) (Final project link above)

  • Undercomplete Autoencoders
  • Denoising Autoencoder (DAE)
  • Variational Autoencoder (VAE)
    • Information Theory
    • Shannon Entropy
    • Kullback-Leibler Divergence (Relative Entropy)
    • Approximate Inference
    • Variational Inference
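The Kullback-Leibler term in the VAE objective has a closed form when both distributions are diagonal Gaussians. A sketch checking that closed form against a Monte-Carlo estimate of the definition E_q[log q - log p] (the values of `mu` and `logvar` are arbitrary):

```python
import math
import torch

# KL( N(mu, sigma^2) || N(0, 1) ), summed over dimensions:
#   0.5 * sum( mu^2 + sigma^2 - log sigma^2 - 1 )
mu = torch.tensor([0.5, -1.0])
logvar = torch.tensor([0.2, -0.3])   # log sigma^2

kl_closed = 0.5 * torch.sum(mu ** 2 + logvar.exp() - logvar - 1)

# Monte-Carlo estimate of the same KL, sampling z via the reparameterisation trick
torch.manual_seed(0)
z = mu + (0.5 * logvar).exp() * torch.randn(100000, 2)
log_q = -0.5 * (((z - mu) ** 2) / logvar.exp() + logvar + math.log(2 * math.pi)).sum(-1)
log_p = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(-1)
kl_mc = (log_q - log_p).mean()

print(kl_closed.item(), kl_mc.item())  # the two estimates agree closely
```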

Reading

More Reading

  • (CNN-DCNN) Autoencoder (AE): Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, Lawrence Carin. Deconvolutional Paragraph Representation Learning. NIPS 2017
  • (Sequential) Denoising Autoencoder (DAE): Felix Hill, Kyunghyun Cho, Anna Korhonen. Learning Distributed Representations of Sentences from Unlabelled Data. NAACL-HLT 2016
  • Variational Autoencoder (VAE): Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, Samy Bengio. Generating Sentences from a Continuous Space. CoNLL 2016
  • Adversarial Autoencoder (AAE): Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow. Adversarial Autoencoders. ICLR 2016

Lecture 9. Reinforcement Learning (Slides)

  • Reinforcement Learning problem
  • Inside an RL agent
    • Policy
    • Value function
    • Model
  • Markov Decision Process
    • Markov Process
    • Markov Reward Process
      • Value function
      • Bellman equation for MRP
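The Bellman equation for an MRP, v(s) = R(s) + γ Σ_s' P(s,s') v(s'), can be solved by iterating the backup until it reaches its fixed point. A sketch on a hand-made two-state process (the transition matrix and rewards are arbitrary toy values):

```python
# A tiny Markov Reward Process, solved by fixed-point iteration
P = [[0.9, 0.1],   # transition probabilities between two states
     [0.5, 0.5]]
R = [1.0, 0.0]     # immediate reward in each state
gamma = 0.9

v = [0.0, 0.0]
for _ in range(1000):                       # Bellman backup, iterated to convergence
    v = [R[s] + gamma * sum(P[s][t] * v[t] for t in range(2)) for s in range(2)]

print([round(x, 3) for x in v])  # [8.594, 7.031]
```

The same fixed point could be obtained directly as v = (I - γP)⁻¹ R, which the lecture derives.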

Reading

Lecture 10. Solving Reinforcement Learning problems (1) (Slides)

  • Markov Decision Process
    • Policy
    • Action-value function (Q-function)
    • Bellman equation for MDP
    • Optimal Value function
    • Optimal Policy
  • Dynamic Programming (Model-based)
    • Policy Evaluation - Policy Iteration
    • Value Iteration
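Value iteration applies the Bellman optimality backup, V(s) ← max_a [r(s,a) + γ V(s')], until convergence. A sketch on a hand-made three-state deterministic MDP (states, actions, and rewards are all toy choices):

```python
# States 0..2; state 2 is terminal. Reward 1 on entering the terminal state.
gamma = 0.9
actions = {"stay": 0, "right": 1}

def step(s, a):
    """Deterministic transition: returns (next_state, reward)."""
    if s == 2:
        return s, 0.0        # the terminal state absorbs
    s2 = min(s + actions[a], 2)
    return s2, 1.0 if s2 == 2 else 0.0

V = [0.0, 0.0, 0.0]
for _ in range(100):                         # Bellman optimality backup
    V = [max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in actions)
         for s in range(3)]

print([round(v, 3) for v in V])  # [0.9, 1.0, 0.0]
```

The optimal policy is then read off greedily: in every non-terminal state, the action maximising the backup is "right".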

Reading

Lecture 11. Solving Reinforcement Learning problems (2)

  • Model-free
    • Prediction
      • Monte-Carlo Learning
    • Control
      • On-policy Monte-Carlo Control
      • Off-policy Learning (Q-learning)
  • Value function approximation (e.g. Atari games)
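Tabular Q-learning — the off-policy control method above — can be sketched on a toy chain environment (the chain, hyperparameters, and reward are illustrative choices):

```python
import random

# 4-state chain; reaching state 3 gives reward 1. Off-policy update:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
random.seed(0)
n_states, alpha, gamma, eps = 4, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy behaviour policy; the learning target stays greedy
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = max(s - 1, 0) if a == 1 - 1 else s + 1
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy should be "right" in every non-terminal state
print([Q[s].index(max(Q[s])) for s in range(n_states - 1)])  # [1, 1, 1]
```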

Reading

More Reading
