This repository contains a Jupyter notebook that implements Linear Regression using Gradient Descent from scratch. The notebook also compares the results against the scikit-learn implementations of Linear, Lasso, and Ridge Regression, using plots to visualize the differences.
- If you are unable to render the notebook on GitHub, you can use this link instead: Notebook Link
## Table of Contents

- [Introduction](#introduction)
- [Gradient Descent](#gradient-descent)
- [Linear Regression](#linear-regression)
- [Comparison with scikit-learn](#comparison-with-scikit-learn)
- [Lasso Regression](#lasso-regression)
- [Ridge Regression](#ridge-regression)
- [Contributing](#contributing)
## Introduction

Linear Regression is a popular statistical model used to establish a relationship between a dependent variable and one or more independent variables. It assumes a linear relationship between the input variables and the output variable.
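For input features $x_1, \dots, x_n$, the model predicts

$$\hat{y} = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + b,$$

where the weights $w_i$ and the bias $b$ are learned from the data.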
## Gradient Descent

Gradient Descent is an optimization algorithm used to minimize the cost function of a machine learning model. In the context of Linear Regression, it iteratively updates the model parameters in the direction of the negative gradient, reducing the difference between the predicted values and the actual values.
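Concretely, for each parameter $\theta$ (a weight or the bias), every iteration applies the update

$$\theta \leftarrow \theta - \alpha \frac{\partial J}{\partial \theta}, \qquad J(w, b) = \frac{1}{n} \sum_{i=1}^{n} \left( w^\top x_i + b - y_i \right)^2,$$

where $\alpha$ is the learning rate and $J$ is the mean squared error cost.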
## Linear Regression

The notebook provides an implementation of Linear Regression using Gradient Descent from scratch. It demonstrates how to train the model, make predictions, and evaluate performance using metrics such as mean squared error.
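A minimal sketch of what such an implementation can look like (the class name, hyperparameters, and defaults below are illustrative, not necessarily the notebook's exact code):

```python
import numpy as np

class LinearRegressionGD:
    """Linear Regression trained with batch gradient descent."""

    def __init__(self, lr=0.01, n_iters=1000):
        self.lr = lr            # learning rate
        self.n_iters = n_iters  # number of gradient descent iterations

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.w = np.zeros(n_features)
        self.b = 0.0
        for _ in range(self.n_iters):
            error = X @ self.w + self.b - y                      # residuals
            self.w -= self.lr * (2 / n_samples) * (X.T @ error)  # dMSE/dw
            self.b -= self.lr * (2 / n_samples) * error.sum()    # dMSE/db
        return self

    def predict(self, X):
        return X @ self.w + self.b

def mean_squared_error(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)
```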
## Comparison with scikit-learn

The notebook compares the results of the custom implementation with the scikit-learn implementations of Linear, Lasso, and Ridge Regression, plotting graphs to visualize and compare the performance of the different models.
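A rough sketch of how such a comparison can be set up, reusing the `LinearRegressionGD` class from the previous section on toy one-dimensional data (the notebook's actual data and plots will differ):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Lasso, Ridge

# Toy data: y = 3x + 2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 2, size=100)

models = {
    "Custom GD": LinearRegressionGD().fit(X, y),
    "LinearRegression": LinearRegression().fit(X, y),
    "Lasso": Lasso(alpha=1.0).fit(X, y),
    "Ridge": Ridge(alpha=1.0).fit(X, y),
}

# Plot each model's fitted line over the data
x_line = np.linspace(0, 10, 100).reshape(-1, 1)
plt.scatter(X, y, s=10, alpha=0.5, label="data")
for name, model in models.items():
    plt.plot(x_line, model.predict(x_line), label=name)
plt.legend()
plt.show()
```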
## Lasso Regression

Lasso Regression is a regularization technique used to prevent overfitting in Linear Regression models. It adds an L1 penalty term (the sum of the absolute values of the weights) to the cost function, which can drive some weights exactly to zero and thus encourages the model to use fewer features.
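For illustration, a small self-contained example with scikit-learn's `Lasso` (the data and the `alpha` value here are made up for this sketch):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: only the first of five features actually matters
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 4.0 * X[:, 0] + rng.normal(0, 0.5, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
print(lasso.coef_)  # the L1 penalty drives the irrelevant weights to (near) zero
```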
## Ridge Regression

Ridge Regression is another regularization technique used to prevent overfitting. It adds an L2 penalty term (the sum of the squared weights) to the cost function, which shrinks the weights toward zero without eliminating features entirely.
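A comparable sketch with scikit-learn's `Ridge`, again on made-up data, contrasting its shrunken weights with those of ordinary least squares:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, -0.5]) + rng.normal(0, 0.5, size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# The L2 penalty shrinks the Ridge weights toward zero relative to OLS
print("OLS:  ", ols.coef_)
print("Ridge:", ridge.coef_)
```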
## Contributing

Contributions are welcome! If you find any issues or have suggestions for improvement, please feel free to open an issue or submit a pull request.