Use autoencoder feature extraction to improve classification accuracy with gradient boosting models
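A minimal sketch of the pattern this project describes, assuming synthetic data, a small Keras autoencoder, and LightGBM as the gradient boosting classifier; the bottleneck size and hyperparameters are illustrative, not taken from the repository:

```python
# Train an autoencoder, use its encoder as a feature extractor, then feed the
# latent features (alongside the raw ones) to a gradient boosting classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from lightgbm import LGBMClassifier
from tensorflow import keras

X, y = make_classification(n_samples=5000, n_features=40, n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Autoencoder: 40 -> 8 -> 40 (the bottleneck size of 8 is an arbitrary choice)
inputs = keras.Input(shape=(X.shape[1],))
encoded = keras.layers.Dense(8, activation="relu")(inputs)
decoded = keras.layers.Dense(X.shape[1], activation="linear")(encoded)
autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=64, verbose=0)

# Concatenate the learned latent features with the original features
X_train_aug = np.hstack([X_train, encoder.predict(X_train, verbose=0)])
X_test_aug = np.hstack([X_test, encoder.predict(X_test, verbose=0)])

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05)
clf.fit(X_train_aug, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_aug)))
```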
Evaluation and Implementation of various Machine Learning models for creating a "Banking/Financial Transaction Fraud Prevention System"
Model that uses 10 different algorithms to predict the revenue of a movie before its release
An NHL expected goals (xG) model built with light gradient boosting.
Multivariate deep time series forecasting ensemble models
Project work related to various hackathons
Comparison of ensemble learning methods on diabetes disease classification with various datasets
A model built on the RAVDESS dataset for speech emotion recognition, reaching 85.59% validation accuracy
How to do a simple end-to-end machine learning classification project using the telco churn dataset
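A hedged sketch of such an end-to-end pipeline: the CSV file name and column names (customerID, TotalCharges, Churn) follow the common Telco Customer Churn dataset layout and are assumptions here, not details confirmed by the repository:

```python
# Simple end-to-end churn classification: load, clean, encode, train, evaluate.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from lightgbm import LGBMClassifier

df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")  # assumed file name
df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
df = df.drop(columns=["customerID"]).dropna()

y = (df.pop("Churn") == "Yes").astype(int)
X = pd.get_dummies(df, drop_first=True)  # one-hot encode categorical columns

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)
model = LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```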
A project to develop a machine learning model that accurately predicts car prices from various features. We explored multiple models, including K-Nearest Neighbors, Decision Tree, CatBoost, and LightGBM.
A binary classification task performed with machine learning in Python. The dataset's target distribution is heavily imbalanced, so model performance was evaluated with the F1 score.
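A small sketch of one common way to handle this setup, using LightGBM's scale_pos_weight to up-weight the rare positive class and scoring with F1; the imbalance ratio and model settings are illustrative assumptions:

```python
# Imbalanced binary classification evaluated with the F1 score.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=20000, n_features=30, weights=[0.97, 0.03], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25, random_state=1)

# scale_pos_weight up-weights the rare positive class during training
ratio = (y_train == 0).sum() / (y_train == 1).sum()
clf = LGBMClassifier(n_estimators=400, scale_pos_weight=ratio)
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```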
Forest Cover Type Classifier predicts forest cover types using UCI Covertype data. Built with Python, pandas, scikit-learn, XGBoost, and LightGBM. Evaluates KNN, Decision Tree, Gradient Boosting, Random Forest, and XGBoost, with Random Forest achieving ~96% accuracy.
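A rough sketch of that kind of comparison for two of the listed models, using scikit-learn's built-in Covertype loader; the subsample sizes and hyperparameters are assumptions chosen only to keep the run short:

```python
# Compare Random Forest and LightGBM on the UCI Covertype data.
from sklearn.datasets import fetch_covtype
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from lightgbm import LGBMClassifier

X, y = fetch_covtype(return_X_y=True)  # downloads the dataset on first use
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=50000, test_size=10000, stratify=y, random_state=0
)

for name, model in [
    ("Random Forest", RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)),
    ("LightGBM", LGBMClassifier(n_estimators=300, random_state=0)),
]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```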
Spaceship Titanic project.