How to reduce the retry count for python client? #962
Comments
This was fixed in the upstream code generator: #780 (comment); ref #652. The latest version of this client was generated from openapi-generator 3.3.4, which didn't include the fix yet. It will be included when we do a release with a newer version.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is there any update?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is this going to get fixed in the next release? I'm currently using /remove-lifecycle rotten
The upstream fix was included in openapi-generator 4.0.0 (OpenAPITools/openapi-generator#2460). Currently this repo is still using openapi-generator 3.3.4. We could either evaluate and use the newer version, or cherry-pick the fix to this repo. cc @palnabarun /help
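For context on what the fix controls: the generated REST client hands its retry setting to urllib3, which tracks a retry budget and raises MaxRetryError once that budget is spent. A minimal sketch of that urllib3 mechanism (the snippet exercises urllib3 directly; the `retries` attribute on the generated Configuration, added by the openapi-generator 4.0.0 fix, is what would feed the `total` value here):

```python
# Sketch of the urllib3 machinery behind the fix: the client's retry
# setting becomes a urllib3 Retry budget; once the budget is exhausted,
# urllib3 raises MaxRetryError instead of attempting the request again.
from urllib3.exceptions import ConnectTimeoutError, MaxRetryError
from urllib3.util.retry import Retry

policy = Retry(total=1)  # the value a `retries` setting would carry
err = ConnectTimeoutError(None, "connect timed out")

# First connection failure: one attempt left, so increment() returns
# a new policy with the budget reduced to zero.
policy = policy.increment(method="GET", url="/api/v1/pods", error=err)
print(policy.total)  # 0

# Second failure: the budget is exhausted and urllib3 gives up.
try:
    policy.increment(method="GET", url="/api/v1/pods", error=err)
except MaxRetryError:
    print("gave up after a single retry")
```

With the default of `Retry(total=3)` the same dead-cluster request is attempted three extra times before the error surfaces, which is the delay the reporters here want to shorten.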
@roycaihw: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@roycaihw I evaluated the latest version of the openapi-generator. It looks promising; I've faced some problems with a validator so far (kubernetes-client/gen#145). There are a lot of new features, fixes, and some breaking changes, so we should upgrade the generator in the next release.
Thanks @tomplus. I think after we release the 11.0.0 stable version, we can start evaluating the latest version of the openapi-generator in 12.0.0a1. cc @scottilee
I am also interested in this feature.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Now that release 11.0.0 is out, has there been any work done on upgrading the version of openapi-generator used? /remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Release 12.0.0 is out now. Has this been looked into yet? I'm still interested in this getting fixed. /remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I'm still interested in seeing this resolved. /remove-lifecycle stale
Like most API clients, I need to control timeouts and retries. /remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
I'd still like to see a fix for this.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Please don't close yet, unless it's already fixed? /remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Still needed.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
I'd still like this.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any update on this?
@devopstales: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any update on this? /reopen
@tathagatk22: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened (please include outputs or screenshots):
I have a kubeconfig file for a dead/deleted/inaccessible cluster.
I am trying to access the cluster using kubernetes-client-python.
By default the client retries 3 times, and after 3 retries it throws an exception.
Is there any way to reduce the count?
Example Code
What you expected to happen:
I need to be able to configure the retry count, whether the k8s cluster is accessible or not.
How to reproduce it (as minimally and precisely as possible):
Delete the Kubernetes cluster, then try to access it using kubernetes-client-python.
By default it will retry 3 times.
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version): Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
  Unable to connect to the server: dial tcp 149.129.128.208:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
- OS: Windows 10
- Python version (python --version): Python 2.7.14
- Python client version (pip list | grep kubernetes): kubernetes 10.0.1