
Ability to limit AutoML resource using (amount of parallel threads)  #6061

Open
@80LevelElf

Description

@80LevelElf

The current AutoML is hard to use when you are training a lot of models at the same time in the cloud.
If you have a fixed number of pods in your Kubernetes cluster, it doesn't matter how many AutoML experiments you run at the same time: whether it is 4 experiments or 1, they use 100% of the CPU (which breaks health checks and so on).

The low-level trainer API (e.g. FastForestBinaryTrainer) has options such as NumberOfThreads, plus other trainer-specific options, that you can use to limit the workload.
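For reference, the per-trainer throttling described above looks roughly like this (a minimal sketch using the FastForestBinaryTrainer.Options API; the column names are placeholders):

```csharp
// Sketch: capping CPU use on a single low-level trainer via its Options.
// Assumes the Microsoft.ML and Microsoft.ML.FastTree NuGet packages.
using Microsoft.ML;
using Microsoft.ML.Trainers.FastTree;

var mlContext = new MLContext();

// NumberOfThreads limits how many worker threads this trainer uses;
// AutoML currently exposes no equivalent knob at the experiment level.
var trainer = mlContext.BinaryClassification.Trainers.FastForest(
    new FastForestBinaryTrainer.Options
    {
        LabelColumnName = "Label",
        FeatureColumnName = "Features",
        NumberOfThreads = 2 // e.g. leave the rest of the pod's CPUs free
    });
```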

Would it be possible to add something like this to the AutoML API?

But the most elegant solution would be some sort of smart property like ResourceUsingRatio, ranging from 0.0 to 1.0.
ResourceUsingRatio = 1.0 would mean the experiment uses the maximum resources (mainly CPU) it needs or the machine has.
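In the simplest case, such a ratio could just scale the machine's logical core count down to a thread budget (a hypothetical sketch; neither the property nor this helper exists in AutoML today):

```csharp
using System;

static class ResourceBudget
{
    // Hypothetical helper: map a 0.0–1.0 resource ratio to a thread count.
    public static int ThreadsForRatio(double resourceUsingRatio)
    {
        if (resourceUsingRatio <= 0.0 || resourceUsingRatio > 1.0)
            throw new ArgumentOutOfRangeException(nameof(resourceUsingRatio));

        // Always leave at least one thread so the experiment can progress.
        return Math.Max(1, (int)Math.Round(
            resourceUsingRatio * Environment.ProcessorCount));
    }
}

// e.g. on a 16-core pod, ResourceBudget.ThreadsForRatio(0.5) yields 8
```

The returned value could then be passed through to each trainer's NumberOfThreads option, so one experiment-level setting throttles the whole sweep.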

Metadata


Labels

AutoML.NET (Automating various steps of the machine learning process), enhancement (New feature or request)
