Kopf stops receiving namespace events #232
Just to keep this issue ticking along since the project move: I'm on Kubernetes v1.18.6, using Kopf v0.27, and have observed this issue.
MarkusH added a commit to crate/crate-operator that referenced this issue on Nov 2, 2020:
Every now and then, the operator might get stuck when watching for events on the K8s API. By setting some timeouts this can be mitigated. Refs nolar/kopf#232. Refs https://kopf.readthedocs.io/en/latest/configuration/#api-timeouts
mergify bot pushed the same commit to crate/crate-operator on Nov 2, 2020.
Expected Behavior
Kopf should actively receive all namespace events.
Actual Behavior
Kopf receives events for a while and then stops receiving events. Neither the create, update, nor delete event handlers are triggered, nor do the events show up in the raw event handler.
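For context, here is a minimal sketch (not the reporter's actual code, the handler names are hypothetical) of the kind of handlers involved, assuming an operator watching the built-in v1 namespaces resource:

import kopf

# Hypothetical handler names; only the decorators matter for this issue.
@kopf.on.create('', 'v1', 'namespaces')
def ns_created(name, **_):
    print(f"namespace created: {name}")

@kopf.on.update('', 'v1', 'namespaces')
def ns_updated(name, **_):
    print(f"namespace updated: {name}")

@kopf.on.delete('', 'v1', 'namespaces')
def ns_deleted(name, **_):
    print(f"namespace deleted: {name}")

@kopf.on.event('', 'v1', 'namespaces')
def ns_raw_event(event, **_):
    # The low-level raw watch events also stop arriving when the issue occurs.
    print(f"raw event: {event['type']}")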
Steps to Reproduce the Problem
Specifications
@logicfox Can you please add the Kopf version too?
pip freeze | grep kopf or kopf --version
Sure
Maybe a duplicate of #204 #142 (not certain though).
@logicfox Can you please try it with kopf>=0.23rc2? Specifically, kopf==0.23rc1 switches all the I/O internally to asyncio+aiohttp (#227). This already solved some issues with synchronous sockets freezing in some cases, and maybe solves the other issues with similar symptoms. Please be aware of the massive changes in this RC (see the 0.23rc1 and, optionally, 0.23rc2 release notes) if you have a pre-existing operator that could be affected: in theory, it should be fully backward compatible and safe, but who knows what can break in practice.
@nolar Sorry, I couldn't test this earlier. But it looks like the problem is still there in the master branch: watch seems to freeze after a while. I'm going to test this with the raw Kubernetes Python client to see if it's an issue with my cluster.

We experienced the same issue until we upgraded Kubernetes to 1.15.10 in AKS. In addition, I changed the Kopf version from 0.25 to 0.26. Regarding the situation before the upgrade: I noticed that events for CRDs were still being received.
Not sure if this is related: on eks@0.15 and kopf@0.26 I tried: [...]
Results in: [...]
In the same prompt, kubectl get namespaces works.
Upgraded Kopf to 0.27rc5 and got it working with: [...]
By default, timeoutSeconds for the watch session is not set, neither in Kopf (https://github.com/nolar/kopf/blob/master/kopf/structs/configuration.py#L68) nor in the Kubernetes API (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/); as a result, the session might get stuck forever. Setting watching.server_timeout to some value might help here. It is important to set server_timeout to a value less than watching.client_timeout (which is the aiohttp session's global timeout).
import kopf

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    # Ask the API server to end each watch session after 300 seconds.
    settings.watching.server_timeout = 300
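As a follow-up sketch (not from the original thread), the same startup hook could set both of the watching timeouts mentioned above, keeping server_timeout below client_timeout; the numeric values here are arbitrary examples:

import kopf

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    # Ask the API server to close each watch stream after 5 minutes...
    settings.watching.server_timeout = 5 * 60
    # ...and give up on the client side slightly later, so a silently dead
    # connection cannot hang the watcher forever.
    settings.watching.client_timeout = 5 * 60 + 30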
I think it is not only the watching session that might get stuck, since other calls do not have a default timeout configured either. I've proposed setting timeouts globally per aiohttp session in #377, but it looks like it is not possible to override the settings in the way proposed in that patch, so it has to be updated.
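For illustration only (this is not Kopf's API; the values and URL are placeholder assumptions): the idea in #377 amounts to attaching one default timeout to the whole aiohttp session, so that every API call, not only the watch requests, is bounded:

import asyncio
import aiohttp

# Sketch of a session-wide default timeout in plain aiohttp; the values and
# URL below are illustrative assumptions, not Kopf's actual defaults.
TIMEOUT = aiohttp.ClientTimeout(
    total=None,       # no hard cap on the whole request (watches are long-lived)
    sock_connect=30,  # fail fast if the TCP connection cannot be established
    sock_read=300,    # fail if the server goes silent for 5 minutes
)

async def main():
    async with aiohttp.ClientSession(timeout=TIMEOUT) as session:
        # Placeholder URL; a real operator would talk to the cluster's API server.
        async with session.get("https://example.com/api/v1/namespaces") as resp:
            print(resp.status)

asyncio.run(main())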