The rationale behind these notebooks is to demonstrate how deep learning concepts might be implemented in a library.
The only libraries used for calculations are NumPy and CuPy.
neural-networks.ipynb: Contains pure NumPy implementations of basic neural network concepts (see the sketch after this list), such as:
- Activation functions
- Loss functions
- Fully-connected layers (forward and backward passes)
- Convolution/pooling layers (forward and backward passes)
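For orientation, here is a minimal sketch of what pure-NumPy versions of a few of these building blocks can look like; it is illustrative only (the function names and shapes are assumptions, not the notebook's actual code):

```python
import numpy as np

def relu(x):
    # Element-wise ReLU activation.
    return np.maximum(0, x)

def mse_loss(pred, target):
    # Mean-squared-error loss averaged over the batch.
    return np.mean((pred - target) ** 2)

def dense_forward(x, W, b):
    # Fully-connected layer: y = xW + b; cache the input for the backward pass.
    return x @ W + b, x

def dense_backward(dy, cache, W):
    # Backward pass: gradients w.r.t. input, weights, and bias.
    x = cache
    dx = dy @ W.T
    dW = x.T @ dy
    db = dy.sum(axis=0)
    return dx, dW, db

# Tiny usage example: one forward pass on random data.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # batch of 4 samples, 3 features
W = rng.normal(size=(3, 2)) * 0.1  # weights mapping 3 -> 2 features
b = np.zeros(2)
y, cache = dense_forward(x, W, b)
a = relu(y)
print(mse_loss(a, np.zeros_like(a)))
```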
Note: Some notebooks have been GPU-accelerated with CuPy to speed up model training (see the sketch below).
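Because CuPy mirrors the NumPy API, the GPU acceleration can often be reduced to swapping the array module; the snippet below is a hypothetical illustration of that pattern, not the notebooks' actual structure:

```python
# Illustrative NumPy/CuPy swap (assumed pattern, not the repository's code).
try:
    import cupy as xp   # run on the GPU when CuPy is available
except ImportError:
    import numpy as xp  # otherwise fall back to the CPU

def forward(x, W, b):
    # The same array code runs on either backend.
    return xp.maximum(0, x @ W + b)

x = xp.ones((8, 4))
W = xp.ones((4, 2))
b = xp.zeros(2)
print(forward(x, W, b).shape)  # (8, 2)
```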
Due: 18th May 2022
- Move concepts and experiments code into separate notebooks.
- Simplify functions and reduce the use of OOP constructs.
- Upload code for experiments 1, 2, and 3.
- Implement code for:
  - Regularization
  - Batch normalization
  - Dropout (see the sketch after this list)
- Refactor code for convolution.
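As a rough illustration of one of the planned items above, an inverted-dropout forward/backward pass in plain NumPy could look like the following; the names and the drop probability are assumptions, not code from this repository:

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True):
    # Inverted dropout: zero activations with probability p_drop and rescale
    # the survivors so no rescaling is needed at test time.
    if not training or p_drop == 0.0:
        return x, None
    mask = (np.random.rand(*x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask, mask

def dropout_backward(dy, mask):
    # Gradients only flow through the units that were kept.
    return dy if mask is None else dy * mask

x = np.random.randn(4, 5)
out, mask = dropout_forward(x, p_drop=0.5)
dx = dropout_backward(np.ones_like(out), mask)
```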
Due: 25th May 2022
- Implement code for:
  - Class activation map
  - Word embeddings
    - Bag of Words
    - Neural Probabilistic Model
    - Word2vec
      - Continuous Skip-Gram
  - Language models
    - Recurrent Neural Networks
    - Long Short-Term Memory
    - Gated Recurrent Unit
    - Self-attention (see the sketch after this list)
    - Transformer
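For context on the self-attention item, scaled dot-product self-attention reduces to a few NumPy lines; this is an illustrative single-head sketch, not the planned implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over a sequence X
    # of shape (seq_len, d_model).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))          # 6 tokens, model dimension 8
Wq = rng.normal(size=(8, 8)) * 0.1
Wk = rng.normal(size=(8, 8)) * 0.1
Wv = rng.normal(size=(8, 8)) * 0.1
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 8)
```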
Due: 1st June 2022
- Implement code for:
  - Graph embeddings
    - Node2vec
    - [...]
  - Graph convolution networks (see the sketch after this list)
    - Graph filters
    - Graph pooling
  - Graph models
    - Graph embeddings
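For the graph convolution item, a single layer with the commonly used symmetric normalisation can be written in a few NumPy lines; again, this is only an assumed sketch of the technique, not the planned code:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution layer: A_hat = D^-1/2 (A + I) D^-1/2,
    # followed by H' = ReLU(A_hat @ H @ W).
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0, A_norm @ H @ W)

# Tiny 4-node graph, 3 input features per node, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2)) * 0.1
print(gcn_layer(A, H, W).shape)  # (4, 2)
```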