Description
Hello, this is more of a support question than a bug (I hope so).
I'm using client-go in a small app that calls the cluster API and checks for nodes in NotReady state (similar to what the cloud-provider code does). I've run into a problem: after a pod (a guest-cluster Kubernetes master VM) is hard-killed, I'm left with a dead TCP connection in my pod. client-go keeps trying to reuse it for the next 10-17 minutes, until the TCP connection is finally dropped.
To work around this (without digging into low-level networking), I tried reinitializing the client for every check I do. But I found that even then the client still reuses the same dead TCP connection. Is that possible?
I expected that reinitializing the client would open a brand new TCP connection.
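For context, this is the kind of mitigation I was hoping would work: a minimal sketch that sets a per-request timeout on rest.Config so that calls over a dead connection fail fast instead of hanging until the kernel drops the TCP connection. It assumes in-cluster config; the exact field and constructor names may vary with the client-go version.

```go
package main

import (
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient builds a fresh clientset with a client-side request timeout,
// so a request stuck on a dead TCP connection errors out after 10 seconds
// rather than waiting for the OS-level TCP timeout.
func newClient() (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	config.Timeout = 10 * time.Second
	return kubernetes.NewForConfig(config)
}
```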
Details:
Here is how I initialize the client. The MonitorNode function is called every 30 seconds; a rough sketch follows.
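The original snippet isn't reproduced here, but it looks roughly like the following. The use of in-cluster config and the NotReady check are assumptions based on the description above, and the List signature depends on the client-go version (newer versions also take a context.Context).

```go
package main

import (
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Run the node check every 30 seconds.
	for range time.Tick(30 * time.Second) {
		MonitorNode(clientset)
	}
}

// MonitorNode lists cluster nodes and reports any that are NotReady.
func MonitorNode(clientset *kubernetes.Clientset) {
	nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		log.Printf("listing nodes failed: %v", err)
		return
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == "Ready" && cond.Status != "True" {
				log.Printf("node %s is NotReady", node.Name)
			}
		}
	}
}
```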
Logs:
netstat shows that the connection to 172.31.53.156 is broken.