
Conversation

@alkatrivedi
Contributor

No description provided.

@alkatrivedi alkatrivedi requested review from a team as code owners November 24, 2025 08:49
@product-auto-label product-auto-label bot added size: xs Pull request size is extra small. api: spanner Issues related to the googleapis/nodejs-spanner API. labels Nov 24, 2025
@gemini-code-assist

Summary of Changes

Hello @alkatrivedi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses flakiness in a metric test by refining the assertion logic. The change ensures that the test accurately reflects the expected relationship between attempt and operation latencies, improving the reliability of the test suite and preventing intermittent failures.

Highlights

  • Metric Test Fix: Corrected an assertion in test/metrics/metrics.ts to compare attemptLatency against operationLatency instead of MIN_LATENCY, resolving a flaky test scenario by aligning the assertion with the test's intent.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a flaky test by correctly asserting that attemptLatency should be approximately equal to operationLatency when there is only a single attempt. The change is correct and improves test stability.

I've identified a similar issue in another test within the same file. The test 'should have correct latency values in metrics except AFE when AFE Server timing is disabled' in test/metrics/metrics.ts has the same logic on line 713 that was just fixed. It still compares attemptLatency with MIN_LATENCY. To prevent future flakiness, this should also be updated to compare against operationLatency.

@alkatrivedi alkatrivedi marked this pull request as draft November 24, 2025 08:52
@alkatrivedi alkatrivedi force-pushed the fix-flaky-metric-test branch from 59941d3 to 24a2d5b Compare November 24, 2025 09:02
@alkatrivedi alkatrivedi added the owlbot:run Add this label to trigger the Owlbot post processor. label Nov 24, 2025
@alkatrivedi alkatrivedi marked this pull request as ready for review November 24, 2025 09:48
@gcf-owl-bot gcf-owl-bot bot removed the owlbot:run Add this label to trigger the Owlbot post processor. label Nov 24, 2025
      );
      // Since we only have one attempt, the attempt latency should be fairly close to the operation latency
-     assertApprox(MIN_LATENCY, attemptLatency, 30);
+     assertApprox(operationLatency, attemptLatency, 30);
Contributor


There are 2 test cases which use this MIN_LATENCY; both need to be corrected.

Contributor Author


done


      // check that the latency matches up with the measured elapsed time within 10ms
-     assertApprox(elapsedTime, totalOperationLatency, 10);
+     assertApprox(elapsedTime, totalOperationLatency, 50);
Contributor


This doesn't seem right to me. A unit test should not take this long; ideally the elapsed time should always be close to totalOperationLatency.
In your testing, which do you see as larger: totalOperationLatency or the elapsed time?
Can you add logs and see what operationLatency values come out for the different methods versus the total latency?

We need to see, for this test case, whether createSession is called when database.run is called or before that. Maybe that is why we see this discrepancy now with the multiplexed session.

Contributor


Check what "resourceMetrics" contains: does it include createSession or not, and how does it differ from a regular session?

Contributor Author


In your testing what behavior do you see, totalOperationLatency is bigger or elapsed time ?

elapsedTime is larger than totalOperationLatency. Ref: https://github.com/googleapis/nodejs-spanner/actions/runs/19628416394/job/56202180201

@alkatrivedi alkatrivedi force-pushed the fix-flaky-metric-test branch from 24a2d5b to dde6277 Compare November 24, 2025 10:17
@alkatrivedi alkatrivedi added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Nov 24, 2025
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Nov 24, 2025
@alkatrivedi alkatrivedi force-pushed the fix-flaky-metric-test branch from dde6277 to b02dc21 Compare November 24, 2025 10:52
@alkatrivedi alkatrivedi force-pushed the fix-flaky-metric-test branch from b02dc21 to 2548c4d Compare November 24, 2025 11:11
@alkatrivedi alkatrivedi added kokoro:force-run Add this label to force Kokoro to re-run the tests. owlbot:run Add this label to trigger the Owlbot post processor. labels Nov 24, 2025
@gcf-owl-bot gcf-owl-bot bot removed the owlbot:run Add this label to trigger the Owlbot post processor. label Nov 24, 2025
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Nov 24, 2025
@alkatrivedi alkatrivedi merged commit e169cc5 into main Nov 24, 2025
23 of 24 checks passed
@alkatrivedi alkatrivedi deleted the fix-flaky-metric-test branch November 24, 2025 11:51