
[Feature request] Arbitrary base learner #3180

Closed

Description

Summary

It's pretty cool that I can define my own loss function and gradient for LightGBM, and then use the linear, tree, or dart base learners to optimize it.

It'd be really cool if I could specify my own base learner, perhaps as an sklearn-style class with a fit method, a predict method, and support for sample weights.
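For illustration, the interface might look something like the following sketch; the `BaseLearner` protocol and its signatures here are hypothetical, not part of any existing LightGBM API:

```python
from typing import Optional, Protocol

import numpy as np


class BaseLearner(Protocol):
    """Hypothetical pluggable base-learner interface (not an existing LightGBM API).

    The booster would call fit() once per round on the current
    pseudo-residuals (negative gradients), then predict() to add this
    learner's contribution to the raw ensemble scores.
    """

    def fit(
        self,
        X: np.ndarray,
        y: np.ndarray,
        sample_weight: Optional[np.ndarray] = None,
    ) -> "BaseLearner":
        """Fit to targets y (the pseudo-residuals), with optional per-row weights."""
        ...

    def predict(self, X: np.ndarray) -> np.ndarray:
        """Return this learner's raw-score contribution for each row of X."""
        ...
```

Most sklearn regressors already satisfy this shape, which is what makes the sklearn-style convention attractive here.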

Being able to plug a wider range of base learners into the LightGBM algorithm would open up a whole new world of possibilities.

Motivation

Custom objectives / custom loss functions are really useful. But I want to take it one step further, and also customize the base learner used by LightGBM.

Description

XGBoost supports both tree-based and linear base learners. As far as I can tell, LightGBM only supports tree-based base learners.

It'd be really cool to be able to use linear base learners with LightGBM.

It would be even cooler if I could specify my own base learners, and use LightGBM as a platform for doing my own research into different forms of boosting.
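To make the idea concrete, here is a minimal from-scratch boosting loop for squared error that accepts any sklearn-style regressor as the base learner; this is a toy sketch of the mechanism, not LightGBM code, and `n_rounds` and `learning_rate` are illustrative parameters:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge


def boost(X, y, base_learner, n_rounds=100, learning_rate=0.1):
    """Toy gradient boosting for squared error with a pluggable base learner.

    Each round fits a fresh clone of base_learner to the negative gradient
    of the loss, which for squared error is just the residuals y - pred.
    """
    pred = np.full(len(y), y.mean())  # constant initial model
    learners = []
    for _ in range(n_rounds):
        residuals = y - pred  # negative gradient of 0.5 * (y - pred)^2
        learner = clone(base_learner).fit(X, residuals)
        pred += learning_rate * learner.predict(X)
        learners.append(learner)
    return learners, pred


# Example: boosting with a linear base learner instead of trees.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)
learners, pred = boost(X, y, Ridge(alpha=1.0))
print("training MSE:", np.mean((y - pred) ** 2))
```

Swapping `Ridge` for any other regressor with the same fit/predict shape is exactly the kind of experimentation this feature would enable inside LightGBM itself.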


