fix(bigquery): make additional errors retriable: tcp timeout and http2 client connection lost#13269
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Code Review
This pull request provides a valuable fix for an important resilience issue in the BigQuery client. The detailed description accurately identifies that `dial tcp: i/o timeout` errors were not being automatically retried. The implemented change, which adds a check for timeout errors using the `Timeout()` method in the `retryableError` function, is the correct and modern approach in Go for handling such transient network issues. This small, targeted modification will significantly improve the client's robustness and user experience by handling common network timeouts automatically. The change is well implemented and I have no further suggestions. It is ready for merging.
@MartinSahlen can you sign the CLA?
Yes, I did, shortly after submitting the PR, and I also retriggered the test, which has already passed.
Thanks for the PR and the detailed writeup! One minor request: could you add a test case for this to `TestRetryableErrors` in `bigquery_test.go`? It looks like https://pkg.go.dev/net#DNSError gives us an easy-to-use example error for this case.
Thanks! I can give it a go. First I need to understand the test structure and how to set up some mock errors/responses; I'll give a shout if I get stuck.
Hi @shollyman, so I added a test. However, in the meantime we have observed one more error, "http2: client connection lost", which I decided to also add to this PR along with a corresponding test. Perhaps the PR title should change to reflect this when squash-merging.
Hi @shollyman and @alvarowolfx, any next steps here? Or any idea of a timeline on your end?
By the way, the storage client handles this through an extension point that lets the user supply a custom method to determine whether a given error is retryable, in addition to the built-in logic: https://github.com/googleapis/google-cloud-go/blob/storage/v1.57.1/storage/storage.go#L2528. I'm not sure what the overall SDK strategy is, but for bigquery it might be nice to add some of these errors we have seen in the wild (as this change accomplishes), as well as future-proofing with an extension point that lets users easily add their own retry cases.
Apologies, I've been OOO and playing catch-up. Taking another look now.
Thanks again for the contribution!
No worries! That being said, I think @joshk0's suggestion might be one to consider. We don't see these errors in the Python client, most likely because it represents 95% (or more) of users' interaction with BigQuery APIs, and as such it has more robust error handling. Until we see "everything" and can make all such errors retryable, giving users some way to manually tell the library what should be retriable seems like a good stop-gap.
PR created by the Librarian CLI to initialize a release. Merging this PR will auto-trigger a release.

Librarian Version: v0.8.0
Language Image: us-central1-docker.pkg.dev/cloud-sdk-librarian-prod/images-prod/librarian-go@sha256:01189c9771ac4150742aed38eb52e19a008018889066002742034b7f82db070f

<details><summary>bigquery: 1.73.0</summary>

## [1.73.0](bigquery/v1.72.0...bigquery/v1.73.0) (2026-02-04)

### Features

* add Stored Procedure Sharing support for analyticshub listings (PiperOrigin-RevId: 827828462) ([185951b](185951b3))
* add tags support for Pub/Sub subscriptions (PiperOrigin-RevId: 827828462) ([185951b](185951b3))
* Support picosecond timestamp precision in BigQuery Storage API (PiperOrigin-RevId: 829486853) ([185951b](185951b3))
* add timestamp precision support to schema (#13421) ([52020af](52020af5))
* transition format options (#13422) ([59efe32](59efe323))

### Bug Fixes

* make additional errors retriable: tcp timeout and http2 client connection lost (#13269) ([466d309](466d309d))
* roundtrip readonly fields (#13370) ([9e84705](9e847052))

### Documentation

* change comment indicating `enable_gemini_in_bigquery` field for BigQuery Reservation Assignments is deprecated (PiperOrigin-RevId: 850121797) ([35d7578](35d75787))

</details>
Description
The `cloud.google.com/go/bigquery` client does not automatically retry API calls that fail with a `dial tcp: i/o timeout` error. This type of error is a common transient network failure, especially in distributed cloud environments, and often occurs when initiating a connection.

The underlying Go error wrapper correctly identifies this as a retryable error (as seen by `retryable: true` in the error message), but the BigQuery client's internal retry predicate fails to catch it, immediately propagating the error to the user. This forces developers to build their own retry wrappers around the client, a concern that should ideally be handled by the library's built-in resilience mechanisms.
Expected Behavior
When an API call (such as `jobs.insert` or `jobs.query`) fails with a `dial tcp: i/o timeout`, the client library should recognize this as a transient, retryable error and automatically retry the operation using its built-in exponential backoff strategy.

Actual Behavior
The API call fails immediately and returns the i/o timeout error directly to the caller. No retry is attempted by the library.
The full error message is similar to the following:
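(The original message was not preserved in this excerpt. Illustratively only, a dial-timeout error from the Go networking stack has roughly this shape, with placeholder project, host, and IP values:)

```
Post "https://bigquery.googleapis.com/bigquery/v2/projects/<project>/jobs": dial tcp <ip>:443: i/o timeout
```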
Code Snippet
The issue can be observed with any standard API call that initiates a network request. For example, when using a Loader to start a job:
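A minimal sketch of such a call site, with placeholder project, dataset, table, and GCS URI values (this requires real GCP credentials and resources to run, so it is illustrative only):

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()

	// Placeholder identifiers; substitute real values.
	client, err := bigquery.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Load a CSV file from GCS into a table.
	gcsRef := bigquery.NewGCSReference("gs://my-bucket/data.csv")
	loader := client.Dataset("my_dataset").Table("my_table").LoaderFrom(gcsRef)

	// Run issues a jobs.insert call; before this fix, a transient
	// "dial tcp: i/o timeout" here surfaced directly to the caller
	// instead of being retried.
	job, err := loader.Run(ctx)
	if err != nil {
		log.Fatalf("loader.Run: %v", err)
	}
	status, err := job.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if status.Err() != nil {
		log.Fatal(status.Err())
	}
}
```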
Additional Context & Analysis
The root cause appears to be in the library's internal `retryableError` predicate. This function does not check for errors that satisfy the `net.Error` `Timeout()` method.

The current implementation checks for `interface{ Temporary() bool }`. However, a `dial tcp: i/o timeout` is a `net.Error` where `Timeout()` returns `true`, but `Temporary()` may not. The `Temporary()` method was deprecated in Go 1.18 because its definition was ambiguous and ill-defined; most errors that were once "temporary" are now more accurately classified as timeouts.

Because the library's predicate relies on this deprecated method and omits a check for the `Timeout()` method, it fails to identify one of the most common types of transient network errors.

The proposed fix in this PR is to update the `retryableError` predicate to also include a check for timeout errors. Adding this check will improve the client's resilience and align its behavior with the expectation that transient network timeouts are handled automatically.
Hoping for positive feedback on this one and that we can get it merged quickly. Cheers!