
feat: Support decode chunk PD serving mode #944


Open
zhhangBian wants to merge 14 commits into main

Conversation

zhhangBian

  • Add a new arg pd_chunk_size to decide the chunk size; 0 means no chunking (a hedged sketch of registering the flag follows this list).
  • Support chunked decode.
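
Editorial note: the following is a minimal, hedged sketch of how such a flag is typically registered with argparse in lightllm/server/api_cli.py. Only the flag name --pd_chunk_size and its default of 0 come from this PR; the parser variable, help text, and example usage are assumptions, not the actual diff.

    # Hedged sketch: registering the new flag. The parser setup and help text are
    # assumptions; only --pd_chunk_size and default=0 come from the PR description.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--pd_chunk_size",
        type=int,
        default=0,
        help="Chunk size for decode in PD serving mode; 0 disables chunking.",
    )

    args = parser.parse_args(["--pd_chunk_size", "256"])
    print(args.pd_chunk_size)  # -> 256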


@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @zhhangBian, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces support for chunked decoding in the PD serving mode, allowing the LightLLM server to generate and return tokens in configurable batches. This enhancement aims to provide more granular control over token streaming, potentially improving responsiveness and resource utilization for long generation sequences. The changes involve adding a new configuration parameter and a significant refactor of the token generation loop within the PD master's request handling.

Highlights

  • New Configuration Option: A new command-line argument, --pd_chunk_size, has been added to lightllm/server/api_cli.py. This integer argument, defaulting to 0 (meaning no chunking), allows users to specify the desired chunk size for token generation in the PD (Prefill/Decode disaggregated) serving mode.
  • Chunked Decoding Logic Implementation: The core logic for chunked decoding has been implemented and refactored within lightllm/server/httpserver_for_pd_master/manager.py. The fetch_stream function now iteratively generates tokens in chunks, dynamically adjusting sampling_params.max_new_tokens for each chunk and carefully managing the finish_status to differentiate between chunk completion and overall request completion (see the sketch after this list).
  • Example Server and Test Scripts: New shell scripts (server_d.sh, server_master.sh, server_p.sh, test.sh) have been added to provide clear examples of how to set up and run the LightLLM server in different modes, including the PD master mode with the new --pd_chunk_size argument, and how to use a benchmark client against it.
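
To make the chunked-decode idea concrete, here is a small, self-contained sketch of the loop described in the second highlight above. Everything in it (fake_decode, chunked_decode, SamplingParams, the toy EOS rule) is an illustrative assumption; the real logic lives in fetch_stream in lightllm/server/httpserver_for_pd_master/manager.py, and its actual names and finish handling may differ.

    # Hedged, self-contained sketch of chunked decoding; all names are assumptions.
    from dataclasses import dataclass
    from typing import Iterator, List, Tuple


    @dataclass
    class SamplingParams:
        max_new_tokens: int


    def fake_decode(prompt_ids: List[int], params: SamplingParams) -> Iterator[Tuple[int, bool]]:
        """Stand-in for one decode round: yields up to max_new_tokens (token_id, eos) pairs."""
        next_id = (prompt_ids[-1] + 1) if prompt_ids else 0
        for i in range(params.max_new_tokens):
            token_id = next_id + i
            eos = token_id >= 20  # pretend token id 20 is the EOS token
            yield token_id, eos
            if eos:
                return


    def chunked_decode(prompt_ids: List[int], max_new_tokens: int, pd_chunk_size: int) -> List[int]:
        """Generate up to max_new_tokens tokens in rounds of at most pd_chunk_size (0 = no chunking)."""
        out: List[int] = []
        while len(out) < max_new_tokens:
            budget = max_new_tokens - len(out)
            chunk = budget if pd_chunk_size <= 0 else min(budget, pd_chunk_size)
            finished = False
            for token_id, eos in fake_decode(prompt_ids + out, SamplingParams(max_new_tokens=chunk)):
                out.append(token_id)
                # Only a real finish (EOS) ends generation; merely exhausting the
                # per-chunk budget just triggers another round with the grown prompt.
                finished = eos
            if finished:
                break
        return out


    if __name__ == "__main__":
        # Expect token ids 3..20: generation stops at the toy EOS token.
        print(chunked_decode(prompt_ids=[0, 1, 2], max_new_tokens=64, pd_chunk_size=8))

The per-round cap mirrors how the PR reportedly resets sampling_params.max_new_tokens for each chunk while tracking an overall budget and distinguishing "request really finished" from "chunk exhausted".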

@gemini-code-assist (bot) left a comment

Code Review

The pull request introduces support for a chunked-decode PD serving mode by adding a new argument, pd_chunk_size, and modifying the fetch_stream function to handle chunking. The changes seem reasonable, but there are a couple of areas where additional checks and simplification could improve the code's robustness and readability.

Comment on lines +261 to +262
for sub_req_id, request_output, metadata, finish_status in token_list:
    current_prompt_ids.append(metadata.get("id"))

Severity: medium

Consider adding a check to ensure metadata.get("id") is not None before appending it to current_prompt_ids. This will prevent potential None values from being added to the list, which could lead to unexpected behavior later on. A medium severity is assigned because while the code might work in most cases, the absence of this check introduces a risk of runtime errors.

    if metadata.get("id") is not None:
        current_prompt_ids.append(metadata.get("id"))
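
As an editorial aside: if such a guard were adopted, a variant that looks the id up once avoids calling metadata.get twice (illustrative only, not part of the PR or of the bot's suggestion):

    # Illustrative variant of the suggested guard; not part of the PR.
    metadata = {"id": 42}            # stand-in for the real per-token metadata dict
    current_prompt_ids: list = []

    token_id = metadata.get("id")    # single lookup
    if token_id is not None:
        current_prompt_ids.append(token_id)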

@zhhangBian (Author)

I think the code itself already ensures metadata.get("id") is not None, so this fix is not necessary.

@zhhangBian (Author)

I will remove the unnecessary files and changes later if the code review is all right.

The core changes are in the following two files:

  • lightllm/server/api_cli.py
  • lightllm/server/httpserver_for_pd_master/manager.py
