- Simple Linear Regression
- Gradient Descent for Simple Linear Regression
- Effect of different learning rate values
- Multiple Linear Regression
- Implementation of Gradient Descent for Multiple Linear Regression using NumPy
- Testing our implementation on the 'insurance.csv' dataset
- The probabilistic approach to linear regression: Maximum Likelihood Estimation
- Polynomial Regression, Bias and Variance
- Lasso Regression (L1 Regularization)
- Lasso for feature selection
- Ridge Regression (L2 Regularization)
- K-fold cross validation
- References
- Log-odds or Logit function
- The mathematical origin of the Sigmoid function
- Properties and identities of the Sigmoid function
- Maximum Likelihood for Logistic Regression: Cross-entropy Loss
- Mathematical derivation of the cross-entropy loss: Gradient Descent
- Implementation of BinaryLogisticRegression using NumPy
- Regularization of Logistic Regression
- References
- Abstract
- Softmax: definition and how it works
- Optimization of the Softmax loss with Gradient Descent (detailed mathematical derivation)
- Implementation of Softmax using NumPy
- Regularization of Softmax via learning rate and max iterations
- Conclusion