[FEA] Ability to train larger datasets from managed memory (at least for RF) #3538

Open
@teju85

Description

Is your feature request related to a problem? Please describe.
If the dataset is too large to fit in GPU memory, we should provide an option for users to train the RF model while keeping the dataset entirely in managed memory.

This could be a more general request covering all algorithms in cuML; however, to begin with, we could limit ourselves to the RF algorithm.
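
For illustration, a minimal sketch of what the user-facing workflow might look like, assuming the dataset is allocated through RMM's managed (unified) memory support. This is only the requested usage pattern, not an existing capability: today cuML RF may still stage the data in device memory internally, and the exact allocator hook (e.g. `rmm.rmm_cupy_allocator`) varies across RMM versions.

```python
import cupy as cp
import rmm
from cuml.ensemble import RandomForestClassifier

# Re-initialize RMM so that allocations come from managed (unified) memory.
rmm.reinitialize(managed_memory=True)

# Route CuPy allocations through RMM so the dataset below lives in managed
# memory (allocator path may differ depending on the RMM version).
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)

# Hypothetical dataset, sized larger than what device memory alone could hold.
n_rows, n_cols = 10_000_000, 32
X = cp.random.random((n_rows, n_cols), dtype=cp.float32)
y = (cp.random.random(n_rows) > 0.5).astype(cp.int32)

# The feature request: fit() should work (and perform reasonably) even though
# X and y are backed by managed memory rather than plain device memory.
clf = RandomForestClassifier(n_estimators=100, max_depth=16)
clf.fit(X, y)
```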

Additional context
We should certainly study the performance implications of this and document any that are observed.
