Streaming issues when watching ConfigMaps in .allNamespaces #31

Closed
MyIgel opened this issue Apr 27, 2023 · 7 comments


MyIgel commented Apr 27, 2023

When watching ConfigMaps using .allNamespaces, I see other events before the triggering one. This happens only with ConfigMaps in all namespaces, not in a single namespace or, for example, with Deployments in all namespaces.

Tested on k8s v1.27.1
Package version 0.14.0

This is the script I used to demonstrate the problem:

import Foundation
import SwiftkubeClient

@main
class KubeEvent {
    let kubeClient: KubernetesClient? = KubernetesClient()

    deinit {
        try? kubeClient?.syncShutdown()
    }

    static func main() {
        let m = KubeEvent()
        try? m.listen()

        RunLoop.main.run()
    }

    func listen() throws {

        /// Stream all deployments in all namespaces
        /// Works
        /*
        when running
        kubectl apply -f example-1.yml
        kubectl delete -f example-1.yml

        Deployments in all namespaces: ADDED test
        Deployments in all namespaces: MODIFIED test
        Deployments in all namespaces: MODIFIED test
        Deployments in all namespaces: MODIFIED test
        Deployments in all namespaces: MODIFIED test
        Deployments in all namespaces: DELETED test
        */
        Task {
            let task = try kubeClient!.appsV1.deployments.watch(in: .allNamespaces)
            for try await item in task.start() {
                if let name = item.resource.name {
                    print("Deployments in all namespaces: \(item.type.rawValue) \(name)")
                }
            }
        }

        /// Stream all configmaps in one namespace
        /// Works
        Task {
            let task = try kubeClient!.configMaps.watch(in: .default)
            for try await item in task.start() {
                if let name = item.resource.name {
                    print("Configmaps in default namespace: \(item.type.rawValue) \(name)")
                }
            }
        }

        /// Stream all configmaps in all namespaces
        /// Has streaming issues
        /*
        when running
        kubectl apply -f example-2.yml
        kubectl delete -f example-2.yml

        Configmaps in default namespace: ADDED foo
        Configmaps in all namespaces: ADDED kube-proxy
        Configmaps in default namespace: DELETED foo
        Configmaps in all namespaces: ADDED kube-root-ca.crt
        */
        Task {
            let task = try kubeClient!.configMaps.watch(in: .allNamespaces)
            for try await item in task.start() {
                if let name = item.resource.name {
                    /// This shows the problem
                    print("Configmaps in all namespaces: \(item.type.rawValue) \(name)")
                }
            }
        }
    }
}

example-1.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: nginx
        imagePullPolicy: IfNotPresent

example-2.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: foo
  namespace: test

It would be great to get some feedback on whether this is a problem with swiftkube or an implementation issue on my side.


petershaw commented Apr 27, 2023

@iabudiab Thanks for the beautiful implementation of CRDs. Sadly, the issue @MyIgel reported affects those as well:

let gvrTenants = GroupVersionResource(group: "example.com", version: "v1", resource: "tenants")
guard let watchTenants = try client?.for(TenantResource.self, gvr: gvrTenants).watch(in: .allNamespaces) else {
    throw CRDReaderError.initialize(reason: "Watch .allNamespaces", in: gvrTenants)
}
let streamTenants = watchTenants.start()
for try await event in streamTenants {
    [...]
}

I think the root cause could be the same. I wonder why this does not happen with Deployments and Pods, but does with ConfigMaps and CRDs.

In addition to @MyIgel's post: it is not always the case that there is exactly one item before the expected one.
ADDED foo is not guaranteed in my test runs. With CRDs it may or may not appear. If I add a resource, there may be several outputs (4 to 7) containing just some old ConfigMaps. On a clean, small test cluster I can reproduce it, but on a production environment with ~40 namespaces it does not seem to be so predictable.

I hope this info helps to figure it out.
Thanks a lot.

@petershaw

Can someone else reproduce it? Looking at the code on one side and the Kubernetes event stream on the other, I am unable to explain what happens and why. I see the issue on different clusters. It looks like a buffering problem, but I can't find any problematic code at first glance.
Maybe @iabudiab can have a look and give us a hint?


iabudiab commented May 3, 2023

@MyIgel @petershaw Hey there 👋

Thanks very much for the interest and for reporting bugs and contributing! Very much appreciated 🙏

The last couple of months I've had much less time than usual due to some family projects, so I've been less active here; sorry for that. However, the initial phase is almost finished and I should be getting more free time soon 🤞

Back to the issue at hand:

I could reproduce the behavior you've described above on some clusters (though not consistently), and now I'm trying to find and classify the underlying cause.

This "flaky" behavior is more apparent on Linux than on e.g. k3d running on Mac. If I'm not mistaken this started after updating/using k8s v1.22.x and got worse with 1.24.x

I'll report back here once I have more info.

@iabudiab

Wow, what a wild-goose chase this was!

This had nothing to do with the k8s version, the async client, etc.

The problem was in DataStreamer#L69:

-  let line = streamingBuffer.withUnsafeReadableBytes { raw in
-      raw.firstIndex(of: UInt8(0x0A))
+  let lines = streamingBuffer.withUnsafeReadableBytes { raw in
+      raw.lastIndex(of: UInt8(0x0A))
}

It used to read only the first line of data and would emit the next line only upon receiving the next buffer. So if we had two lines in the first buffer and two in the second, only the first line of the first buffer would be emitted initially; the second line of the first buffer would then be emitted upon receiving the next buffer, and so on.

It happened with small ConfigMaps and small CRDs, and it would happen with any resource small enough to fit in a single server response.
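
To illustrate the framing logic, here is a minimal sketch (not the actual DataStreamer code; the function name and the emit callback are hypothetical) of draining every complete newline-delimited line from a SwiftNIO ByteBuffer as soon as a chunk arrives, keeping only a trailing partial line for the next read:

import NIOCore

/// Emits every complete line currently readable in `buffer`.
/// An incomplete trailing line stays in the buffer until more bytes arrive.
func drainCompleteLines(from buffer: inout ByteBuffer, emit: (String) -> Void) {
    // Keep going as long as the readable bytes still contain a newline.
    while let newlineIndex = buffer.readableBytesView.firstIndex(of: UInt8(0x0A)) {
        // ByteBufferView indices are absolute buffer indices, so subtract the reader index.
        let lineLength = newlineIndex - buffer.readerIndex
        guard let line = buffer.readString(length: lineLength) else { break }
        // Skip the newline delimiter itself.
        buffer.moveReaderIndex(forwardBy: 1)
        if !line.isEmpty {
            emit(line)
        }
    }
}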

I think this should be fixed now.

@MyIgel @petershaw I would appreciate it if you could test it on your end 😉

I'll be testing further too.
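
For anyone wanting to try the fix before a tagged release, a Swift Package Manager manifest along these lines should work. The repository URL and the assumption that the fix sits on the main branch are mine; adjust them to wherever the fix actually lives:

// swift-tools-version:5.7
import PackageDescription

let package = Package(
    name: "kube-event-test",
    platforms: [.macOS(.v12)],
    dependencies: [
        // Branch dependency instead of a version requirement, to pick up unreleased changes.
        .package(url: "https://github.com/swiftkube/client.git", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "kube-event-test",
            dependencies: [.product(name: "SwiftkubeClient", package: "client")]
        )
    ]
)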

@petershaw

Wow!
Thanks a lot, amazing work. I will test the release next week. The explanation sounds promising.

@petershaw

Hey @iabudiab,
sorry to bother again...
In a single .namespace("test") it works perfectly; in .allNamespaces I still have the same issue with ConfigMaps.
Note: watching Pods in all namespaces works brilliantly, I only have problems with ConfigMaps.

@petershaw

Yep, works fine for me now.
Thanks a lot @iabudiab
