Description
When running Ray on machines with different types of GPU accelerators, the fractional GPU placement strategy is not suitable. Instead, allow specifying an amount of GPU memory, for example in MB.
Additionally, the current code is not accelerator-agnostic and requires boilerplate to translate a memory requirement into a fractional GPU request, even when all accelerators on a given machine are the same type (see the sketch after the proposed API below). The proposed API could look like this:
```python
@ray.remote(gpu_mem=20 * 1024**2)  # request ~20 MB of GPU memory (proposed parameter)
def some_fn():
    return True
```
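
For reference, below is a minimal sketch of the kind of boilerplate currently needed to express a memory-based request as a GPU fraction. The helper name `gpu_fraction_for` and the use of `pynvml` are illustrative assumptions, not part of Ray's API; the computed fraction is only meaningful when every GPU in the cluster has the same total memory, which is exactly the limitation this request is about.

```python
import ray
import pynvml


def gpu_fraction_for(mem_bytes: int) -> float:
    """Translate a GPU memory requirement into a fractional num_gpus value
    based on the total memory of the local GPU (index 0)."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        total = pynvml.nvmlDeviceGetMemoryInfo(handle).total
    finally:
        pynvml.nvmlShutdown()
    return mem_bytes / total


# NOTE: the fraction is computed from the GPU visible where the decorator is
# evaluated, so it is only correct if every GPU in the cluster has the same
# total memory.
@ray.remote(num_gpus=gpu_fraction_for(20 * 1024**2))  # ~20 MB of GPU memory
def some_fn():
    return True
```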
Use case
This applies both to making Ray remote code more portable and to improving GPU utilization for various applications when GPU types are inconsistent across a cluster.