
Conversation

@sailorvii

During the allocate state, the function checkNodeGPUSharingPredicateAndScore already adds the pod's used memory/cores to the device structure. To avoid latency in updating the pod's allocated info, we then called addResource to add the pod to the pod map, but that call also adds the used memory/cores a second time, double-counting them. To address this, we introduce a new function that only adds the pod to the map.

What type of PR is this?

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #4809

Special notes for your reviewer:

Does this PR introduce a user-facing change?


@gemini-code-assist

Summary of Changes

Hello @sailorvii, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the GPU resource allocation logic to resolve a latency issue encountered during the pod allocation state. Previously, the addResource function would immediately update a GPU's used memory and core when a pod was added, leading to delays in updating the pod's allocated information. To mitigate this, a new function, addToPodMap, has been introduced. This function solely focuses on adding the pod to the GPU device map without modifying resource usage, thereby preventing premature updates and improving the responsiveness of the allocation process.

Highlights

  • New Function for Pod Mapping: A new function, addToPodMap, has been introduced in pkg/scheduler/api/devices/nvidia/vgpu/device_info.go to specifically handle adding pods to the GPU device map.
  • Separation of Resource Updates: The addToPodMap function ensures that a pod is added to the GPU device map without immediately updating its UsedMem and UsedCore fields, separating this action from resource usage tracking.
  • Latency Mitigation: This change addresses a latency issue where the previous addResource function would prematurely update used memory/core during allocation, causing delays in updating pod allocated information.
  • Optimized Allocation Process: The Allocate function now utilizes addToPodMap instead of addResource for initial pod mapping, preventing premature resource updates and improving the responsiveness of the allocation process.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@volcano-sh-bot volcano-sh-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Dec 24, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes a GPU resource double-counting bug by introducing a new function, addToPodMap, to handle resource accounting during the allocation phase. The core logic of the fix is sound. My review includes a couple of suggestions to improve the implementation of the new function for better readability, maintainability, and to correct a minor inconsistency in an error message.

func (gs *GPUDevices) addToPodMap(annotations map[string]string, pod *v1.Pod) {
    ids, ok := annotations[AssignedIDsAnnotations]
    if !ok {
        klog.Errorf("pod %s has no annotation volcano.sh/devices-to-allocate", pod.Name)


medium

The error message refers to the annotation volcano.sh/devices-to-allocate (AssignedIDsToAllocateAnnotations), but the code checks for AssignedIDsAnnotations (volcano.sh/vgpu-ids-new). This can be confusing when debugging. To improve clarity, the error message should refer to the annotation key that is actually being checked by using the constant.

Suggested change
klog.Errorf("pod %s has no annotation volcano.sh/devices-to-allocate", pod.Name)
klog.Errorf("pod %s has no annotation %s", pod.Name, AssignedIDsAnnotations)

Comment on lines 191 to 205
    for index, gsdevice := range gs.Device {
        if strings.Contains(deviceused.UUID, gsdevice.UUID) {
            podUID := string(pod.UID)
            _, ok := gsdevice.PodMap[podUID]
            if !ok {
                gsdevice.PodMap[podUID] = &GPUUsage{
                    UsedMem:  0,
                    UsedCore: 0,
                }
            }

            gsdevice.PodMap[podUID].UsedMem += deviceused.Usedmem
            gsdevice.PodMap[podUID].UsedCore += deviceused.Usedcores
        }
    }


medium

This loop can be improved for readability and to remove the unused index variable. The logic for initializing and updating gsdevice.PodMap can also be simplified. For a minor performance improvement, podUID could be calculated once outside these nested loops.

Additionally, there is significant code duplication with the addResource function. It would be beneficial to consider refactoring this into a shared helper function in a follow-up PR to improve maintainability.

            for _, gsdevice := range gs.Device {
                if strings.Contains(deviceused.UUID, gsdevice.UUID) {
                    podUID := string(pod.UID)
                    usage, ok := gsdevice.PodMap[podUID]
                    if !ok {
                        usage = &GPUUsage{}
                        gsdevice.PodMap[podUID] = usage
                    }

                    usage.UsedMem += deviceused.Usedmem
                    usage.UsedCore += deviceused.Usedcores
                }
            }

}

gsdevice.PodMap[podUID].UsedMem += deviceused.Usedmem
gsdevice.PodMap[podUID].UsedCore += deviceused.Usedcores
Contributor


why not copy gs.AddPodMetrics(index, string(pod.UID), pod.Name) from addPod here?

Author


The AddResource routine will do this.

@JesseStutler
Member

Hi @sailorvii, this PR has code-building failures; that's why all CIs failed. Please fix it.

@sailorvii sailorvii force-pushed the master branch 2 times, most recently from e00ec13 to e4540a0 Compare December 30, 2025 03:01
@volcano-sh-bot volcano-sh-bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Dec 30, 2025
@volcano-sh-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign kingeasternsun for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@volcano-sh-bot volcano-sh-bot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Dec 30, 2025
During the allocate state, the function checkNodeGPUSharingPredicateAndScore already adds the pod's used memory/cores to the device structure. To avoid latency in updating the pod's allocated info, we then called addResource to add the pod to the pod map, but that call also adds the used memory/cores a second time, double-counting them. To address this, we introduce a new function that only adds the pod to the map.

Signed-off-by: chenw66 <chenw66@chinaunicom.cn>

Labels

size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
