Description
Discussed in https://github.com/orgs/PySlurm/discussions/314
Originally posted by robgics August 25, 2023
I'm using pyslurm.Nodes.load() to get a list of all nodes, which works fine, and I can print many of their attributes. However, I cannot yet get an accurate measure of used GRES (GPUs). I started 3 jobs, each requesting 2 GPUs. I can see from squeue that the jobs are running, and if I run "scontrol show node" on a node where one of the jobs started, the AllocTRES output shows that the 2 GPUs are in use. Running "scontrol --details show node <nodename>" additionally reports GresUsed.
However, when I inspect the nodes from pyslurm, node.allocated_gres is an empty dict. I note that in the code itself, allocated_gres is backed by "self.info.gres_used", so if that is the same gres_used shown in the scontrol output, then something is wrong with how pyslurm retrieves that value.
I also notice that tres_configured and tres_alloc are commented out in the Node class definition.
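For reference, a minimal sketch of what I'm running (assuming the newer pyslurm Nodes collection API; the expected-value comment reflects what AllocTRES reports, not what pyslurm actually returns here). This must run on a host that can reach slurmctld:

```python
import pyslurm

# Load all nodes from the controller (pyslurm Nodes collection API).
nodes = pyslurm.Nodes.load()

for name, node in nodes.items():
    # On a node with 2 GPUs allocated, scontrol's AllocTRES shows gres/gpu=2,
    # so something like {"gpu": 2} would be expected here -- but
    # allocated_gres currently comes back as an empty dict.
    print(name, node.allocated_gres)
```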
Thanks for the help.