To understand fast-growing technologies in machine learning, we first need to understand the fundamentals. To that end, I reimplemented fundamental machine learning algorithms from scratch using NumPy, with simple examples in Jupyter notebooks. Although the essential mathematical equations needed to implement each algorithm are included in the notebooks, some detailed concepts are simplified or skipped so that the focus stays on the algorithms rather than the math itself. The core of each algorithm is written as a Python script, which the notebooks import and use for testing.
$ conda create -n my_env python=3.10
$ conda activate my_env
$ pip install -r requirements.txt
Discriminative Models
- K-Nearest Neighbors
- Linear Regression
- Logistic Regression
- Support Vector Machine
- Neural Network
- Gradient Descent
- Regularization
- Optimizers
- Batch Normalization
- Weight Initialization
- Activation Functions
- Decision Trees
- Random Forest
- Bagging
- Boosting
- Pasting
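As a taste of the notebooks' style, here is a minimal sketch of one of the discriminative models above: linear regression fit with batch gradient descent in plain NumPy. This is an illustrative example, not the repo's actual code; the function name and hyperparameters are my own choices.

```python
import numpy as np

def fit_linear_regression(X, y, lr=0.02, n_iters=2000):
    """Fit y ~ X @ w by minimizing mean squared error with gradient descent."""
    X_b = np.c_[np.ones(len(X)), X]      # prepend a bias column of ones
    w = np.zeros(X_b.shape[1])
    for _ in range(n_iters):
        grad = 2 / len(X_b) * X_b.T @ (X_b @ w - y)  # gradient of MSE
        w -= lr * grad
    return w

# Recover y = 1 + 2x from noiseless data
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 1 + 2 * X.ravel()
w = fit_linear_regression(X, y)          # w ≈ [1.0, 2.0]
```

The learning rate must be small enough relative to the largest eigenvalue of the data's Gram matrix, or the updates diverge; the notebooks cover this under Gradient Descent and Optimizers.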
Generative Models
- Naive Bayes
- Gaussian Discriminant Analysis (GDA)
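To show what a generative model looks like in this style, here is a minimal Gaussian Naive Bayes sketch: fit a per-class mean, variance, and prior for each feature, then predict by maximizing the log-posterior. This is an illustrative assumption of the approach, not the repo's actual implementation.

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class feature means, variances, and class priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    """Pick the class with the highest log-likelihood + log-prior."""
    scores = []
    for c, (mu, var, prior) in params.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(log_lik + np.log(prior))
    classes = np.array(list(params.keys()))
    return classes[np.argmax(scores, axis=0)]

# Two well-separated classes
X = np.array([[0.0, 0.0], [0.5, 0.2], [-0.3, 0.1],
              [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
model = fit_gnb(X, y)
preds = predict_gnb(model, X)            # matches y on this separable data
```

The "naive" part is the per-feature independence assumption: the variance is diagonal rather than a full covariance matrix, which is what distinguishes it from GDA.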
Others
- K-Means Clustering
- Dimensionality Reduction
  - Principal Component Analysis
  - Locally Linear Embedding
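For the unsupervised side, a minimal k-means sketch in NumPy: alternate between assigning points to the nearest centroid and moving each centroid to the mean of its points, until the centroids stop changing. Again, this is illustrative and not the repo's actual code.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Cluster X into k groups with Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # init from data
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Two tight, well-separated pairs of points
data = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
cents, labels = kmeans(data, k=2)
```

Note that k-means only finds a local minimum of the within-cluster variance; real implementations typically rerun with several random initializations, which this sketch omits for brevity.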
I mainly referred to CS230 (2018), CS229 (2018), and other great articles on Medium. Each notebook lists its references at the bottom.