HunJer93/test-neural-network

The purpose of this project is to learn how to build neural and convolutional networks using libraries such as:

  1. Tensorflow: https://www.tensorflow.org/
  2. scikit-learn: https://scikit-learn.org/stable/index.html
  3. Keras: https://keras.io/

The project follows along with Deep Learning A-Z 2024: Neural Networks, AI & ChatGPT: https://www.udemy.com/course/deeplearning/

The project consists of the following parts:

Part 1: Artificial Neural Networks

Readings:

  1. A Neural Network in 13 lines of Python: https://iamtrask.github.io/2015/07/27/python-network-part2/
  2. How the backpropagation algorithm works: http://neuralnetworksanddeeplearning.com/chap2.html
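The core idea behind these readings can be sketched in a few lines of numpy, in the spirit of the "13 lines of Python" post: a two-layer network trained on XOR with plain backpropagation. This is an illustrative sketch, not the course code (the course uses Keras/TensorFlow); the layer sizes, iteration count, and seed are my own choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.standard_normal((2, 8))  # input -> hidden weights
W2 = rng.standard_normal((8, 1))  # hidden -> output weights

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: error signal times the sigmoid derivative at each layer
    out_delta = (y - out) * out * (1 - out)
    h_delta = (out_delta @ W2.T) * h * (1 - h)
    # Full-batch gradient step (learning rate folded in as 1.0)
    W2 += h.T @ out_delta
    W1 += X.T @ h_delta

err = float(np.mean(np.abs(y - out)))
```

The two `*_delta` lines are the whole of backpropagation here: the output error is scaled by the local sigmoid slope, then pushed backward through `W2` to assign blame to the hidden units.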

Part 2: Convolutional Neural Networks

Readings:

  1. Introduction to Convolutional Neural Networks: https://cs.nju.edu.cn/wujx/paper/CNN.pdf
  2. Understanding Convolutional Neural Networks with a Mathematical Model: https://arxiv.org/pdf/1609.04112
  3. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification: https://arxiv.org/pdf/1502.01852
  4. Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition: https://ais.uni-bonn.de/papers/icann2010_maxpool.pdf
  5. 9 Deep Learning Papers You Need To Know: https://adeshpande3.github.io/
  6. A friendly introduction to cross-entropy loss: https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/
  7. Softmax Classification with Cross Entropy: https://peterroelants.github.io/posts/cross-entropy-softmax/
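The three building blocks these readings cover (convolution, the ReLU activation from the rectifiers paper, and max pooling from the pooling-evaluation paper) can be sketched in plain numpy. This is an assumption-laden toy, not the course implementation: the 6x6 image and the `[-1, 1]` kernel (a crude detector for left-to-right increases) are made up for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    # Rectified linear unit: keep positive responses, zero out the rest
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    # Non-overlapping max pooling; trailing rows/cols that don't fit are cropped
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)   # toy "image"
edge_kernel = np.array([[-1.0, 1.0]])              # responds to rising intensity
feature_map = max_pool(relu(conv2d(image, edge_kernel)))
```

Stacking `conv -> relu -> pool` like this is exactly the repeating unit of a CNN: the 6x6 input shrinks to a 3x2 feature map that summarizes where the kernel's pattern occurred.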

Part 3: Recurrent Neural Networks

Readings:

  1. The Unreasonable Effectiveness of Recurrent Neural Networks: https://karpathy.github.io/2015/05/21/rnn-effectiveness/
  2. Visualizing and Understanding Recurrent Networks: https://arxiv.org/pdf/1506.02078
  3. LSTM: A Search Space Odyssey: https://arxiv.org/pdf/1503.04069
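The recurrence at the heart of these readings is a single line: h_t = tanh(W_hh h_{t-1} + W_xh x_t), as described in Karpathy's post. Below is a minimal numpy sketch of that forward pass only (no training, no LSTM gates); the sizes and the 0.1 weight scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_size, input_size = 4, 3
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1  # input -> hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1 # hidden -> hidden

def rnn_forward(xs):
    """Run a sequence through the vanilla RNN, returning every hidden state."""
    h = np.zeros(hidden_size)
    states = []
    for x in xs:
        # The same weights are reused at every step; h carries context forward
        h = np.tanh(W_hh @ h + W_xh @ x)
        states.append(h)
    return states

sequence = [rng.standard_normal(input_size) for _ in range(5)]
states = rnn_forward(sequence)
```

Because `h` feeds back into itself, each state depends on the entire prefix of the sequence; the LSTM readings address how this simple recurrence struggles with long-range dependencies.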

Part 4: Self Organizing Maps

Readings:

  1. Kohonen's Self Organizing Feature Maps: http://www.ai-junkie.com/ann/som/som1.html
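One training step of the Kohonen algorithm from the ai-junkie tutorial can be sketched directly: find the best-matching unit (BMU) for an input, then pull the BMU and its grid neighbours toward that input. The 5x5 grid, learning rate, radius, and Gaussian neighbourhood below are illustrative choices, not the course's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = rng.random((5, 5, 2))  # 5x5 map of 2-d weight vectors

def train_step(x, lr=0.5, radius=1.5):
    # BMU: the node whose weight vector is closest to the input
    dists = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            d = np.hypot(i - bi, j - bj)  # distance measured on the map grid
            if d <= radius:
                # Gaussian neighbourhood: nodes nearer the BMU move more
                theta = np.exp(-d**2 / (2 * radius**2))
                grid[i, j] += lr * theta * (x - grid[i, j])
    return bi, bj

x = np.array([0.9, 0.1])
before = grid.copy()
bi, bj = train_step(x)
```

Repeating this over many inputs (while shrinking `lr` and `radius`) is what makes nearby map nodes end up representing similar inputs, which is the self-organizing property.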

Part 5: Boltzmann Machines

Readings:

  1. Machine Learning Research Group: https://www.robots.ox.ac.uk/~parg/
  2. University of Toronto Department of Statistical Sciences: https://www.statistics.utoronto.ca/
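The workhorse model in this part is the restricted Boltzmann machine (RBM), usually trained with contrastive divergence. The numpy sketch below shows a single CD-1 update for binary units; it is my own illustrative summary of the standard algorithm, not code from the course or the linked groups, and the sizes, learning rate, and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n_visible, n_hidden = 6, 3
W = rng.standard_normal((n_visible, n_hidden)) * 0.1  # visible-hidden weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) weight update (biases omitted)."""
    ph0 = sigmoid(v0 @ W)                         # P(h=1 | v0): positive phase
    h0 = (rng.random(n_hidden) < ph0).astype(float)  # sample hidden states
    v1 = sigmoid(W @ h0)                          # reconstruction probabilities
    ph1 = sigmoid(v1 @ W)                         # hidden probs given v1
    # Positive-phase statistics minus negative-phase (reconstruction) statistics
    return W + lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

v = rng.integers(0, 2, size=n_visible).astype(float)  # one binary training vector
W = cd1_step(W, v)
```

The update nudges the weights so that the data vector `v0` becomes more probable under the model while its one-step reconstruction becomes less so, approximating the intractable likelihood gradient.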

Part 6: AutoEncoders

Readings:

  1. Building Autoencoders in Keras: https://blog.keras.io/building-autoencoders-in-keras.html
  2. Deep learning, sparse autoencoders: https://mccormickml.com/
  3. k-Sparse Autoencoders: https://arxiv.org/pdf/1312.5663
  4. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
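The idea shared by all these readings is to train a network to reproduce its own input through a narrower bottleneck. As a minimal sketch (the Keras post builds the real thing; this is a bare linear autoencoder in numpy with made-up data and hyperparameters), gradient descent on the reconstruction error looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((20, 5))  # toy data: 20 samples, 5 features

k = 2                                  # bottleneck dimension
We = rng.standard_normal((5, k)) * 0.1 # encoder weights
Wd = rng.standard_normal((k, 5)) * 0.1 # decoder weights
lr = 0.005

def reconstruction_error(X, We, Wd):
    return float(np.mean((X - (X @ We) @ Wd) ** 2))

err_before = reconstruction_error(X, We, Wd)
for _ in range(1000):
    Z = X @ We              # encode: compress to k dimensions
    E = X - Z @ Wd          # reconstruction residual
    Wd += lr * Z.T @ E      # descend the MSE gradient for the decoder
    We += lr * X.T @ (E @ Wd.T)  # ...and for the encoder
err_after = reconstruction_error(X, We, Wd)
```

With linear layers this converges toward the same subspace PCA finds; the sparse, k-sparse, and denoising variants in the readings change the picture by constraining the code or corrupting the input rather than by changing this basic encode-decode-reconstruct loop.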

Data sets come from https://www.superdatascience.com/deep-learning
