Description
The current AutoML API is hard to use when you train many models at the same time in the cloud.
If you run pods in a Kubernetes cluster, it doesn't matter how many AutoML experiments you execute concurrently: whether it's 4 experiments or 1, they consume 100% of the CPU (which breaks health checks and so on).
The low-level trainer API (for example FastForestBinaryTrainer) has options such as NumberOfThreads and other trainer-specific settings you can use to limit the workload, as in the sketch below.
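For context, a minimal sketch of what I mean by the low-level API (the trainer, options type, and NumberOfThreads option are from the public ML.NET surface; column names are placeholders):

```csharp
using Microsoft.ML;
using Microsoft.ML.Trainers.FastTree;

var mlContext = new MLContext();

// The low-level trainer API lets you cap the thread count per trainer.
var trainer = mlContext.BinaryClassification.Trainers.FastForest(
    new FastForestBinaryTrainer.Options
    {
        LabelColumnName = "Label",
        FeatureColumnName = "Features",
        NumberOfThreads = 1   // keep this trainer from saturating every core
    });
```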
Would it be possible to add something like this to the AutoML API?
Even better would be a single smart property like ResourceUsingRatio that ranges from 0.0 to 1.0.
ResourceUsingRatio = 1.0 would mean the experiment uses the maximum resources (mainly CPU) it needs or that the machine has.
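A purely hypothetical sketch of how the proposal could look. BinaryExperimentSettings, MaxExperimentTimeInSeconds, and CreateBinaryClassificationExperiment exist in Microsoft.ML.AutoML today; ResourceUsingRatio does not and is shown only as the suggested addition (commented out so the snippet compiles):

```csharp
using Microsoft.ML;
using Microsoft.ML.AutoML;

var mlContext = new MLContext();

var settings = new BinaryExperimentSettings
{
    MaxExperimentTimeInSeconds = 600,
    // ResourceUsingRatio = 0.5   // proposed: use roughly half of the available CPU
};

var experiment = mlContext.Auto().CreateBinaryClassificationExperiment(settings);
```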