Streaming issues when watching ConfigMaps in .allNamespaces #31
@iabudiab Thanks for the beautiful implementation of CRDs. Sadly, the issue from @MyIgel affects those as well:

```swift
let gvrTenants = GroupVersionResource(group: "example.com", version: "v1", resource: "tenants")
guard let watchTenants = try client?.for(TenantResource.self, gvr: gvrTenants).watch(in: .allNamespaces) else {
    throw CRDReaderError.initialize(reason: "Watch .allNamespaces", in: gvrTenants)
}
let streamTenants = watchTenants.start()
for try await event in streamTenants {
    [...]
}
```

I think the root cause could be the same. I'm wondering why this doesn't happen with Deployments and Pods, but does with ConfigMaps and CRDs. In addition to @MyIgel's post: it is not always the case that there is exactly one item before the expected one. Hope this info helps to figure it out.
Can someone else reproduce it? Looking at the code on one side and the Kubernetes event stream on the other, I am unable to explain what happens or why. I see the issue on different clusters. It looks like a buffer problem, but I can't find any problematic code at first glance.
@MyIgel @petershaw Hey there 👋 Thanks very much for the interest and for reporting bugs and contributing! Very much appreciated 🙏

Back to the issue at hand: I could reproduce (though not consistently) the behavior you've described above on some clusters, and I'm now trying to find and classify the underlying cause. This "flaky" behavior is more apparent on Linux than on, e.g., k3d running on a Mac. If I'm not mistaken, this started after updating to k8s v1.22.x and got worse with 1.24.x. I'll report back here once I have more info.
Wow, what a wild-goose chase this was! This had nothing to do with the k8s version or the async client etc. The problem was in DataStreamer#L69:

```diff
- let line = streamingBuffer.withUnsafeReadableBytes { raw in
-     raw.firstIndex(of: UInt8(0x0A))
+ let lines = streamingBuffer.withUnsafeReadableBytes { raw in
+     raw.lastIndex(of: UInt8(0x0A))
  }
```

It used to read only the first line of data and would emit the next line upon receiving the next buffer. So if we had two lines in the first buffer and two in the second, only the first line of the first buffer would be emitted initially, then the second line of the first buffer would be emitted upon receiving the next one, and so on. It happened with small ConfigMaps and small CRDs, and would happen with any resource small enough to fit in one server response. I think this should be fixed now. @MyIgel @petershaw I would appreciate it if you could test it on your end 😉 I'll keep testing on my side too.
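To make the one-event lag easier to see, here is a minimal, self-contained sketch (plain Swift on `String`, not SwiftkubeClient's actual `DataStreamer`, which works on byte buffers) of a line splitter that, like the buggy version, emits only the text up to the first newline per incoming chunk:

```swift
// Minimal illustration of the described buffering bug, not the real code.
// drainAll = false reproduces the bug (one complete line per chunk),
// drainAll = true models the fix (drain every complete line in the buffer).
struct LineSplitter {
    var buffer = ""
    let drainAll: Bool

    mutating func feed(_ chunk: String) -> [String] {
        buffer += chunk
        var lines: [String] = []
        repeat {
            guard let idx = buffer.firstIndex(of: "\n") else { break }
            lines.append(String(buffer[..<idx]))
            buffer.removeSubrange(...idx)   // drop the line and its newline
        } while drainAll
        return lines
    }
}

var buggy = LineSplitter(drainAll: false)
print(buggy.feed("event1\nevent2\n"))  // ["event1"] — "event2" stays stuck in the buffer
print(buggy.feed("event3\n"))          // ["event2"] — always one event behind

var fixed = LineSplitter(drainAll: true)
print(fixed.feed("event1\nevent2\n"))  // ["event1", "event2"]
print(fixed.feed("event3\n"))          // ["event3"]
```

With `drainAll = false`, every chunk that carries more than one line leaves its tail in the buffer, so the consumer always sees "other events before the triggering one", exactly as reported.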
Wow!
Hey @iabudiab,
yep, works fine for me now.
When watching ConfigMaps using .allNamespaces, I see other events before the triggering one. This happens only with ConfigMaps in all namespaces, not in a single namespace or, for example, with Deployments in all namespaces.
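For reference, the shape of the watch loop involved is roughly the following (a sketch only, assuming the built-in `configMaps` client exposes the same `watch(in:)`/`start()` API as the generic CRD client; the actual reproduction is in the attached scripts):

```swift
// Sketch of the ConfigMap watch, mirroring the CRD example above;
// names and error handling are simplified, not taken from the real script.
func watchAllConfigMaps(using client: KubernetesClient) async throws {
    let task = try client.configMaps.watch(in: .allNamespaces)
    for try await event in task.start() {
        // With 0.14.0, stale events show up here before the one
        // that actually triggered the change.
        print(event.type, event.resource.metadata?.name ?? "<unnamed>")
    }
}
```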
Tested on k8s: v1.27.1
Package version: 0.14.0
These are the scripts I used to show the problem:
example-1.yml
example-2.yml
It would be great to get some feedback on whether this is a problem with swiftkube or an implementation issue on my side.