Summary
It's pretty cool that I can define my own loss function and gradient for LightGBM, and then use the tree or dart base learners to optimize my loss function.
It'd be really cool if I could specify my own base learner, perhaps in the form of an sklearn class with a fit method, a predict method, and support for sample weights.
It'd really open up a whole new world of possibilities to be able to use the LightGBM algorithm with a wider range of base learners.
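To make the idea concrete, here is a rough sketch of what such a base learner could look like: a plain Python class with `fit`/`predict` and `sample_weight` support. Note that the `base_learner` argument in the last line is hypothetical; nothing like it exists in LightGBM's API today.

```python
import numpy as np

class LinearBaseLearner:
    """Hypothetical base learner: weighted least-squares linear model.

    In a gradient-boosting loop, ``y`` would be the negative gradient of
    the loss, and ``sample_weight`` could carry the Hessian for
    Newton-style boosting.
    """

    def fit(self, X, y, sample_weight=None):
        w = np.ones(len(y)) if sample_weight is None else np.asarray(sample_weight)
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add an intercept column
        sw = np.sqrt(w)
        # Weighted least squares: scale rows by sqrt(weight) and solve.
        self.coef_, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return Xb @ self.coef_

# Hypothetical usage; `base_learner` is NOT a real LightGBM parameter:
# model = lgb.LGBMRegressor(base_learner=LinearBaseLearner())
```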
Motivation
Custom objectives / custom loss functions are really useful, but I want to take it one step further and also customize the base learner used by LightGBM.
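For reference, the custom-objective half already works today: in the sklearn interface you can pass any callable that returns the gradient and Hessian of the loss with respect to the raw predictions, e.g. for half squared error:

```python
import numpy as np
import lightgbm as lgb

def half_squared_error(y_true, y_pred):
    """Custom objective for L = 0.5 * (y_pred - y_true)**2."""
    grad = y_pred - y_true       # dL/dy_pred
    hess = np.ones_like(y_pred)  # d2L/dy_pred2 (constant)
    return grad, hess

model = lgb.LGBMRegressor(objective=half_squared_error)
```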
Description
XGBoost supports tree-based base learners as well as linear base learners. As far as I can tell, LightGBM only supports tree-based base learners.
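In XGBoost, switching to linear base learners is a one-parameter change:

```python
import xgboost as xgb

linear_model = xgb.XGBRegressor(booster="gblinear")  # linear base learners
tree_model = xgb.XGBRegressor(booster="gbtree")      # trees (the default)
```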
It'd be really cool to be able to use linear base learners with LightGBM.
It would be even cooler if I could specify my own base learners, and use LightGBM as a platform for doing my own research into different forms of boosting.
References
- sklearn.ensemble.AdaBoostClassifier allows you to specify an arbitrary base learner via its base_estimator parameter
- I made the same feature request on XGBoost, but it doesn't seem like they're keen to work on it.