
[Core] Remote placement using gpu memory #26929

Open
@fostiropoulos

Description

When running Ray on machines with different types of GPU accelerators, the fractional GPU placement strategy is not suitable. Instead, allow specifying the amount of GPU memory, for example in MB.

Additionally, the code is not accelerator-agnostic and requires boilerplate to determine the fractional GPU amount to use, even when all accelerators on a given machine are identical.

@ray.remote(gpu_mem="20MB")
def some_fn():
    return True
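
For comparison, the kind of per-machine boilerplate this would replace might look like the following minimal sketch (assuming NVIDIA GPUs and the pynvml bindings; gpu_fraction_for_mb is an illustrative helper, not an existing Ray API):

import pynvml
import ray

def gpu_fraction_for_mb(required_mb: float) -> float:
    # Query the total memory of the first local GPU and convert the
    # requirement into a num_gpus fraction. Assumes all GPUs on this
    # machine are identical; the result is not portable to machines
    # with a different accelerator type.
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    total_mb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**2
    pynvml.nvmlShutdown()
    return required_mb / total_mb

# e.g. 20 MB is 20/16384 of a 16 GB V100 but 20/40960 of a 40 GB A100,
# so the fraction has to be recomputed for each accelerator type.
@ray.remote(num_gpus=gpu_fraction_for_mb(20))
def some_fn():
    return True

A gpu_mem resource would let the scheduler do this conversion per node instead of hard-coding a fraction on the driver.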

Use case

This applies both to making Ray remote code more portable and to improving GPU utilization for various applications when GPU types are inconsistent across a cluster.

Metadata

Labels

P2: Important issue, but not time-critical
core: Issues that should be addressed in Ray Core
core-scheduler
enhancement: Request for new feature and/or capability
