[VertexAI] Support cancellation in GenerateContent #1239
Conversation
Hello @a-maurice, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request focuses on adding cancellation support to the `GenerateContent` methods within the VertexAI library. The primary change involves introducing a `CancellationToken` parameter to the `GenerateContentAsync` and `GenerateContentStreamAsync` methods in both the `Chat.cs` and `GenerativeModel.cs` files. Additionally, the original methods that accepted a variable number of `ModelContent` parameters (`params ModelContent[] content`) have been replaced with single `ModelContent content` and `IEnumerable<ModelContent> content` overloads, both of which now accept an optional `CancellationToken`. This enhancement allows users to cancel long-running content generation tasks, improving the responsiveness and control of their applications. The test app was also updated to use the new signatures.
Highlights
- Cancellation Support: Introduces `CancellationToken` to `GenerateContentAsync` and `GenerateContentStreamAsync` methods, allowing users to cancel long-running operations (a streaming sketch follows this list).
- Method Signature Changes: Replaces `params ModelContent[] content` overloads with `ModelContent content` and `IEnumerable<ModelContent> content` overloads, both accepting an optional `CancellationToken`.
- Internal Method Updates: Updates internal methods to accept and pass the `CancellationToken` to the underlying HTTP client calls.
- Test App Updates: Updates the test app to use the new method signatures.
Changelog
- `vertexai/src/Chat.cs`
  - Added `using System.Runtime.CompilerServices;` and `using System.Threading;` to support cancellation tokens.
  - Modified `SendMessageAsync` and `SendMessageStreamAsync` methods to accept a `CancellationToken`.
  - Removed the `params ModelContent[] content` overloads of `SendMessageAsync` and `SendMessageStreamAsync`.
  - Updated internal methods `SendMessageAsyncInternal` and `SendMessageStreamAsyncInternal` to accept and pass the `CancellationToken`.
- `vertexai/src/GenerativeModel.cs`
  - Added `using System.Runtime.CompilerServices;` and `using System.Threading;` to support cancellation tokens.
  - Modified `GenerateContentAsync`, `GenerateContentStreamAsync`, and `CountTokensAsync` methods to accept a `CancellationToken`.
  - Removed the `params ModelContent[] content` overloads of `GenerateContentAsync`, `GenerateContentStreamAsync`, and `CountTokensAsync`.
  - Updated internal methods `GenerateContentAsyncInternal`, `GenerateContentStreamAsyncInternal`, and `CountTokensAsyncInternal` to accept and pass the `CancellationToken`.
  - The `SendAsync` calls in the internal methods now include the cancellation token (a simplified sketch of this plumbing follows the changelog).
- `vertexai/testapp/Assets/Firebase/Sample/VertexAI/UIHandlerAutomated.cs`
  - Updated calls to `model.GenerateContentAsync` to use the `new ModelContent[] { ... }` syntax.
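The `SendAsync` bullet above is where cancellation ultimately takes effect. Below is a simplified, hypothetical sketch of what that plumbing typically looks like; the type, method, and field names (`GenerativeModelSketch`, `_httpClient`, the request-building details) are illustrative and not taken from the actual `GenerativeModel.cs`:

```csharp
// Hypothetical shape of an internal method that forwards the token to the HTTP call.
// Names and request details here are illustrative, not the SDK's.
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

internal class GenerativeModelSketch
{
    private readonly HttpClient _httpClient = new HttpClient();

    private async Task<string> GenerateContentAsyncInternal(
        string requestJson, CancellationToken cancellationToken)
    {
        // Placeholder endpoint; the real URL is built from the model and project config.
        var request = new HttpRequestMessage(
            HttpMethod.Post, "https://example.invalid/v1/models/gemini:generateContent")
        {
            Content = new StringContent(requestJson)
        };

        // Passing the token here lets callers abort the in-flight HTTP request.
        var response = await _httpClient.SendAsync(request, cancellationToken);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

Forwarding the token into `HttpClient.SendAsync` is what allows a cancelled caller to abort the request while it is still in flight rather than waiting for the full response.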
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
A token's call,
A task finds its release,
Control's gentle hand.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces cancellation token support to the GenerateContent calls, enhancing the flexibility and responsiveness of the VertexAI SDK. The changes involve modifying the method signatures to accept an optional CancellationToken parameter and passing it down to the underlying API calls. Overall, the changes seem well-structured and address the intended functionality.
Summary of Findings
- Inconsistent documentation of the 'content' parameter: The documentation for the `content` parameter in `Chat.cs` and `GenerativeModel.cs` is inconsistent. Some methods describe it as 'The input given to the model as a prompt,' while others say 'The input(s) given to the model as a prompt.' This inconsistency should be resolved for clarity.
- Missing cancellation token in `SendMessageAsyncInternal` in `Chat.cs`: The `SendMessageAsyncInternal` method in `Chat.cs` does not pass the `cancellationToken` to the `generativeModel.GenerateContentAsync` method, which could lead to un-cancellable operations (a sketch of the pass-through fix follows this list).
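To illustrate the second finding, the fix is a straight pass-through of the token. The sketch below is hypothetical; the member names and chat-history bookkeeping in the real `Chat.cs` will differ, and the `GenerateContentResponse` type name is assumed:

```csharp
// Hypothetical sketch of the suggested fix: forward the caller's CancellationToken
// from SendMessageAsyncInternal into GenerateContentAsync instead of dropping it.
// Type and member names are illustrative, not the actual Chat.cs implementation.
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Firebase.VertexAI;

internal class ChatSketch
{
    private readonly GenerativeModel generativeModel;
    private readonly List<ModelContent> history = new List<ModelContent>();

    internal ChatSketch(GenerativeModel model) { generativeModel = model; }

    private async Task<GenerateContentResponse> SendMessageAsyncInternal(
        ModelContent requestContent, CancellationToken cancellationToken)
    {
        history.Add(requestContent);
        // The key change: pass cancellationToken through so cancelling the caller's
        // token also cancels the underlying GenerateContent request.
        return await generativeModel.GenerateContentAsync(history, cancellationToken);
    }
}
```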
Merge Readiness
The pull request introduces a valuable feature by adding cancellation token support. However, there are a couple of issues that need to be addressed before merging. Specifically, the documentation for the `content` parameter should be consistent across all methods, and the `cancellationToken` should be passed to the `generativeModel.GenerateContentAsync` method in `SendMessageAsyncInternal` in `Chat.cs`. I am unable to approve this pull request, and recommend that it not be merged until these issues are addressed (at a minimum), and that others review and approve this code before merging.
Description
Support providing a CancellationToken to the various GenerateContent calls. Removes the versions with params, instead supporting a single input or a collection of inputs, along with an optional cancellation token.
Testing
Running the tests locally.
Type of Change
Place an `x` in the applicable box: