Concurrency=1 takes the longest to run on any hardware that is not specialized for single-request workloads.
We would like to keep concurrency=1 supported by the system (and low concurrency in general, say between 1 and 8), but that may require us to tweak the constraints.
One proposal is to run lower-concurrency submissions on a sampled subset of the original dataset, distributed similarly to the full dataset. Comparisons would then only be made between the lower-concurrency points on the Pareto frontier.
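A minimal sketch of how such a subset could be drawn, assuming each dataset item carries a `category` field whose distribution we want to preserve (the field name and schema are hypothetical). A fixed seed makes the subset identical across runs, so lower-concurrency submissions all see the same data:

```python
import random
from collections import defaultdict

def stratified_subset(dataset, fraction, seed=0):
    """Draw a fixed, reproducible subset whose per-category mix
    matches the full dataset, so low-concurrency runs stay comparable."""
    rng = random.Random(seed)  # fixed seed: every run sees the same subset
    by_category = defaultdict(list)
    for item in dataset:
        by_category[item["category"]].append(item)
    subset = []
    for items in by_category.values():
        # sample proportionally from each category, at least one item each
        k = max(1, round(len(items) * fraction))
        subset.extend(rng.sample(items, k))
    return subset

# Example: 1000 items with an 80/20 category split; a 10% subset keeps that ratio.
data = [{"category": "long" if i % 5 == 0 else "short", "id": i}
        for i in range(1000)]
subset = stratified_subset(data, 0.10)
```

The seeded RNG also means the subset can be published ahead of time, or regenerated on demand, without storing it separately.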
Alternatively, we could cap the execution time, but then only a random subset of the dataset gets processed within the limit, which makes comparisons unfair - even between lower-concurrency submissions, since each would process a different random subset.