
HNC: TLS handshake error from X.X.X.X:YYYY: EOF #49

Closed
vikas027 opened this issue Jun 18, 2021 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@vikas027

Environment

  • EKS v1.20.6
  • HNC 0.8.0

Problem

The HNC controller pod (hnc-controller-manager-xxxxxx) works fine, but it logs a lot of errors like these:

{"level":"error","ts":1623959262.9720078,"msg":"http: TLS handshake error from 10.50.114.121:33364: EOF"}
{"level":"error","ts":1623959262.9722204,"msg":"http: TLS handshake error from 10.50.114.121:33362: EOF"}
{"level":"error","ts":1623959262.973659,"msg":"http: TLS handshake error from 10.50.114.121:33360: EOF"}
{"level":"error","ts":1623961441.4681904,"msg":"http: TLS handshake error from 10.50.114.121:49344: EOF"}
{"level":"error","ts":1623966001.330808,"msg":"http: TLS handshake error from 10.50.114.121:54676: EOF"}
{"level":"error","ts":1623966658.7782638,"msg":"http: TLS handshake error from 10.50.165.218:38060: EOF"}
{"level":"error","ts":1623966658.7784226,"msg":"http: TLS handshake error from 10.50.165.218:38058: EOF"}
{"level":"error","ts":1623967363.0862052,"msg":"http: TLS handshake error from 10.50.114.121:36362: EOF"}

Similar Issues

@adrianludwin adrianludwin transferred this issue from kubernetes-retired/multi-tenancy Jun 18, 2021
@adrianludwin
Contributor

Thanks @vikas027. Turning these into human-readable dates, I get:

Thu Jun 17 2021 19:47:42 (x3)
Thu Jun 17 2021 20:24:01
Thu Jun 17 2021 21:40:01
Thu Jun 17 2021 21:50:58 (x2)
Thu Jun 17 2021 22:02:43

So maybe about one or two events an hour, with each "event" sometimes showing a couple of messages.
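(For reference, that conversion is just the Unix epoch seconds from the ts field; with GNU date, which may format the output slightly differently depending on version and locale:)

$ date -u -d @1623959262
Thu Jun 17 19:47:42 UTC 2021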

Is this correlated to any other event with HNC? E.g. the pod restarting or anything?

The "EOF" got me suspicious that a connection was just getting dropped, and I found this SO comment suggesting that TCP keepalive will fix it. I wonder if it's a similar issue, given that the reporter only noticed it when the connection had been idle for a long time. I could imagine HNC going idle for a while too if it has nothing much to do.

Another possibility is raising the MTU.
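(If you want to experiment, the node-level knobs would look something like this; the values are illustrative, this needs root on the node, and I don't know whether or how Bottlerocket exposes these settings:)

$ sysctl -w net.ipv4.tcp_keepalive_time=600    # start keepalive probes after 10 min idle
$ sysctl -w net.ipv4.tcp_keepalive_intvl=60    # then probe every 60s
$ sysctl -w net.ipv4.tcp_keepalive_probes=5    # give up after 5 failed probes
$ ip link set dev eth0 mtu 9001                # raise MTU; interface name is a guess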

@vikas027 I don't know much (well, anything) about EKS, is it possible to modify any of these params on your cluster and see if it helps?

Failing that, a lot of these messages are created by controller-runtime, so it would be tricky for us to filter them out.
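(Though you can at least drop the noise at viewing time with something like:)

$ kubectl logs -n hnc-system deploy/hnc-controller-manager -c manager | grep -v 'TLS handshake error'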

@vikas027
Author

Hey @adrianludwin ,
I am using Bottlerocket for the EKS worker nodes, where things are pretty locked down and optimized for running Kubernetes workloads. I have a lot of applications running and am not seeing these errors anywhere else, so I'm not sure this is an EKS issue.

Is there a way to increase the verbosity (debug) of the HNC logs?

@adrianludwin
Contributor

Hmm, no idea. I take it the problem persists?

Can you attach logs (double-check there's no information in there that you wouldn't want to leak, e.g. namespace names, or just rename them all to "ns1", "ns2", etc.) as well as the YAML of the pod? Also, if you could figure out what's at the IP addresses mentioned in the errors, that would be helpful. The errors seem to oscillate between two addresses (10.50.165.218 and 10.50.114.121) without ever repeating a port, which is odd.
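(Nothing HNC-specific is needed for the IP lookup; standard kubectl should do, e.g.:)

$ kubectl get pods -A -o wide | grep -E '10.50.165.218|10.50.114.121'
$ kubectl get endpoints -A | grep -E '10.50.165.218|10.50.114.121'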

Finally, if you could confirm whether or not pod restarts correlate with these messages, that might be another clue. Perhaps that's included in your logs.

@vikas027
Author

Hello @adrianludwin ,
I have dug further into those IPs; they belong to the default kubernetes Endpoints object that Kubernetes creates in the default namespace. Please note, I have two clusters behaving exactly the same, and neither of them has any workloads running in the default namespace.

Not sure where else I can look :)
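(For reference, the Endpoints object below can be dumped with:)

$ kubectl get endpoints kubernetes -n default -o yaml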

---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    endpointslice.kubernetes.io/skip-mirror: "true"
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.50.165.218
  - ip: 10.50.114.121
  ports:
  - name: https
    port: 443
    protocol: TCP

Here are some recent logs

{"level":"error","ts":1624797171.1220329,"msg":"http: TLS handshake error from 10.50.114.121:43166: EOF"}
{"level":"error","ts":1624798611.290382,"msg":"http: TLS handshake error from 10.50.114.121:55270: EOF"}
{"level":"error","ts":1624798611.2905447,"msg":"http: TLS handshake error from 10.50.114.121:55268: EOF"}
{"level":"error","ts":1624799871.090236,"msg":"http: TLS handshake error from 10.50.114.121:37874: EOF"}
{"level":"error","ts":1624799871.0998201,"msg":"http: TLS handshake error from 10.50.114.121:37878: EOF"}
{"level":"error","ts":1624802571.0674863,"msg":"http: TLS handshake error from 10.50.114.121:60824: EOF"}
{"level":"error","ts":1624802931.1300197,"msg":"http: TLS handshake error from 10.50.114.121:35658: EOF"}
{"level":"error","ts":1624804699.2879634,"msg":"http: TLS handshake error from 10.50.165.218:43884: EOF"}
{"level":"info","ts":1624805372.6467125,"logger":"cert-rotation","msg":"Ensuring CA cert","name":"hnc-validating-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration","name":"hnc-validating-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration"}
{"level":"error","ts":1624806491.543539,"msg":"http: TLS handshake error from 10.50.165.218:57936: EOF"}
{"level":"error","ts":1624812291.0658677,"msg":"http: TLS handshake error from 10.50.114.121:59380: EOF"}
{"level":"error","ts":1624812291.3288004,"msg":"http: TLS handshake error from 10.50.114.121:59388: EOF"}
{"level":"error","ts":1624814029.6276171,"msg":"http: TLS handshake error from 10.50.114.121:46364: EOF"}
{"level":"error","ts":1624814091.317666,"msg":"http: TLS handshake error from 10.50.114.121:46876: EOF"}
{"level":"info","ts":1624815480.4014397,"logger":"cert-rotation","msg":"Ensuring CA cert","name":"hnc-validating-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration","name":"hnc-validating-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration"}
{"level":"error","ts":1624815622.1466093,"msg":"http: TLS handshake error from 10.50.114.121:59614: EOF"}
{"level":"error","ts":1624817871.229798,"msg":"http: TLS handshake error from 10.50.114.121:50308: EOF"}
{"level":"error","ts":1624818411.2999148,"msg":"http: TLS handshake error from 10.50.114.121:54924: EOF"}
{"level":"error","ts":1624819131.3163505,"msg":"http: TLS handshake error from 10.50.114.121:60852: EOF"}
{"level":"error","ts":1624820751.2582116,"msg":"http: TLS handshake error from 10.50.114.121:46284: EOF"}
{"level":"error","ts":1624820751.3118231,"msg":"http: TLS handshake error from 10.50.114.121:46288: EOF"}
{"level":"error","ts":1624820751.400276,"msg":"http: TLS handshake error from 10.50.114.121:46292: EOF"}
{"level":"error","ts":1624820751.400447,"msg":"http: TLS handshake error from 10.50.114.121:46294: EOF"}
{"level":"error","ts":1624822526.5999544,"msg":"http: TLS handshake error from 10.50.165.218:44264: EOF"}
{"level":"error","ts":1624825791.5147407,"msg":"http: TLS handshake error from 10.50.114.121:60974: EOF"}
{"level":"error","ts":1624825971.2210906,"msg":"http: TLS handshake error from 10.50.114.121:34206: EOF"}
{"level":"error","ts":1624825971.4480453,"msg":"http: TLS handshake error from 10.50.114.121:34222: EOF"}
{"level":"error","ts":1624826040.053663,"msg":"http: TLS handshake error from 10.50.114.121:34792: EOF"}
{"level":"error","ts":1624826619.948627,"msg":"http: TLS handshake error from 10.50.165.218:48310: EOF"}
{"level":"error","ts":1624829031.4745123,"msg":"http: TLS handshake error from 10.50.114.121:60460: EOF"}
{"level":"error","ts":1624829031.599868,"msg":"http: TLS handshake error from 10.50.114.121:60466: EOF"}
{"level":"error","ts":1624829571.524453,"msg":"http: TLS handshake error from 10.50.114.121:36800: EOF"}
{"level":"error","ts":1624829751.2505043,"msg":"http: TLS handshake error from 10.50.114.121:38314: EOF"}
{"level":"error","ts":1624830291.499894,"msg":"http: TLS handshake error from 10.50.114.121:42838: EOF"}
{"level":"error","ts":1624831191.427693,"msg":"http: TLS handshake error from 10.50.114.121:50540: EOF"}
{"level":"error","ts":1624832631.4050016,"msg":"http: TLS handshake error from 10.50.114.121:34386: EOF"}
{"level":"error","ts":1624833105.3434877,"msg":"http: TLS handshake error from 10.50.114.121:38356: EOF"}
{"level":"error","ts":1624834824.3399158,"msg":"http: TLS handshake error from 10.50.114.121:52850: EOF"}
{"level":"error","ts":1624834824.3400981,"msg":"http: TLS handshake error from 10.50.114.121:52856: EOF"}

And here is my pod configuration:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    app.kubernetes.io/managed-by: argocd
    kubernetes.io/psp: eks.privileged
  labels:
    app: hnc
    control-plane: controller-manager
    pod-template-hash: 5c8bd48cb
    type: platform
  name: hnc-controller-manager-5c8bd48cb-rzjpq
  namespace: hnc-system
spec:
  containers:
  - args:
    - --webhook-server-port=9443
    - --metrics-addr=:8080
    - --max-reconciles=10
    - --apiserver-qps-throttle=50
    - --enable-internal-cert-management
    - --cert-restart-on-secret-refresh
    - --excluded-namespace=kube-system
    - --excluded-namespace=kube-public
    - --excluded-namespace=hnc-system
    - --excluded-namespace=kube-node-lease
    command:
    - /manager
    image: gcr.io/k8s-staging-multitenancy/hnc-manager:v0.8.0
    name: manager
    ports:
    - containerPort: 9443
      name: webhook-server
    resources:
      limits:
        cpu: 100m
        memory: 300Mi
      requests:
        memory: 150Mi
    volumeMounts:
    - mountPath: /tmp/k8s-webhook-server/serving-certs
      name: cert
      readOnly: true
  - args:
    - --upstream=http://127.0.0.1:8080/
    - --secure-listen-address=0.0.0.0:8443
    - --logtostderr=true
    - --v=10
    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
    name: kube-rbac-proxy
    ports:
    - containerPort: 8443
      name: https
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: default
  terminationGracePeriodSeconds: 10
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: cert
    secret:
      secretName: hnc-webhook-server-cert

@adrianludwin
Contributor

adrianludwin commented Jun 28, 2021 via email

@vikas027
Author

> Those are the complete (not filtered) logs from those times?

Yes, those are unfiltered logs. No other logs in that period.

> Can you add the Status from your pod? Also, the complete yaml for the Services and Endpoints in the hnc-system namespace.

❯ k get po -o wide
NAME                                     READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
hnc-controller-manager-5c8bd48cb-rzjpq   2/2     Running   0          5d3h   10.51.161.249   ip-10-51-168-218.ap-southeast-2.compute.internal   <none>           <none>

❯ k get po hnc-controller-manager-5c8bd48cb-rzjpq -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    app.kubernetes.io/managed-by: argocd
    kubernetes.io/psp: eks.privileged
  creationTimestamp: "2021-06-23T00:29:16Z"
  generateName: hnc-controller-manager-5c8bd48cb-
  labels:
    app: hnc
    control-plane: controller-manager
    pod-template-hash: 5c8bd48cb
    type: platform
  name: hnc-controller-manager-5c8bd48cb-rzjpq
  namespace: hnc-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: hnc-controller-manager-5c8bd48cb
    uid: 44e8dd68-55fa-46d9-a53b-753bb16cf073
  resourceVersion: "21376230"
  uid: d73958ae-98be-4653-bd06-69ab38ae474d
spec:
  containers:
  - args:
    - --webhook-server-port=9443
    - --metrics-addr=:8080
    - --max-reconciles=10
    - --apiserver-qps-throttle=50
    - --enable-internal-cert-management
    - --cert-restart-on-secret-refresh
    - --excluded-namespace=kube-system
    - --excluded-namespace=kube-public
    - --excluded-namespace=hnc-system
    - --excluded-namespace=kube-node-lease
    command:
    - /manager
    image: gcr.io/k8s-staging-multitenancy/hnc-manager:v0.8.0
    imagePullPolicy: IfNotPresent
    name: manager
    ports:
    - containerPort: 9443
      name: webhook-server
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 300Mi
      requests:
        cpu: 100m
        memory: 150Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp/k8s-webhook-server/serving-certs
      name: cert
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lq64d
      readOnly: true
  - args:
    - --upstream=http://127.0.0.1:8080/
    - --secure-listen-address=0.0.0.0:8443
    - --logtostderr=true
    - --v=10
    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
    imagePullPolicy: IfNotPresent
    name: kube-rbac-proxy
    ports:
    - containerPort: 8443
      name: https
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lq64d
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-51-168-218.ap-southeast-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 10
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: cert
    secret:
      defaultMode: 420
      secretName: hnc-webhook-server-cert
  - name: default-token-lq64d
    secret:
      defaultMode: 420
      secretName: default-token-lq64d
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-06-23T00:29:16Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-06-23T00:29:31Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-06-23T00:29:31Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-06-23T00:29:16Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://7321aad826392ea799c38ba6a1086a66ae5461c0cbbfddcfbd713fbcd402e6eb
    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
    imageID: gcr.io/kubebuilder/kube-rbac-proxy@sha256:297896d96b827bbcb1abd696da1b2d81cab88359ac34cce0e8281f266b4e08de
    lastState: {}
    name: kube-rbac-proxy
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-06-23T00:29:31Z"
  - containerID: containerd://caf4523846d863459efaa7ea880c2c9d2a626616f25526855d573c86d836a483
    image: gcr.io/k8s-staging-multitenancy/hnc-manager:v0.8.0
    imageID: gcr.io/k8s-staging-multitenancy/hnc-manager@sha256:4a0c82d8ee6c0872628298e4486bd414b903a890f296cd5d825b9124dc7e913e
    lastState: {}
    name: manager
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-06-23T00:29:26Z"
  hostIP: 10.51.168.218
  phase: Running
  podIP: 10.51.161.249
  podIPs:
  - ip: 10.51.161.249
  qosClass: Burstable
  startTime: "2021-06-23T00:29:16Z"

❯ k get svc -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      app.kubernetes.io/managed-by: argocd
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"app.kubernetes.io/managed-by":"argocd","prometheus.io/port":"8443","prometheus.io/scheme":"https","prometheus.io/scrape":"true"},"labels":{"app":"hnc","app.kubernetes.io/instance":"hnc","control-plane":"controller-manager","type":"platform"},"name":"hnc-controller-manager-metrics-service","namespace":"hnc-system"},"spec":{"ports":[{"name":"https","port":8443,"targetPort":"https"}],"selector":{"app":"hnc","control-plane":"controller-manager","type":"platform"}}}
      prometheus.io/port: "8443"
      prometheus.io/scheme: https
      prometheus.io/scrape: "true"
    creationTimestamp: "2021-06-17T05:43:04Z"
    labels:
      app: hnc
      app.kubernetes.io/instance: hnc
      control-plane: controller-manager
      type: platform
    name: hnc-controller-manager-metrics-service
    namespace: hnc-system
    resourceVersion: "15816402"
    uid: b304ff85-f20c-44b1-a1bd-1f8baa4a6d9f
  spec:
    clusterIP: 172.20.235.195
    clusterIPs:
    - 172.20.235.195
    ports:
    - name: https
      port: 8443
      protocol: TCP
      targetPort: https
    selector:
      app: hnc
      control-plane: controller-manager
      type: platform
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      app.kubernetes.io/managed-by: argocd
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"app.kubernetes.io/managed-by":"argocd"},"labels":{"app":"hnc","app.kubernetes.io/instance":"hnc","type":"platform"},"name":"hnc-webhook-service","namespace":"hnc-system"},"spec":{"ports":[{"port":443,"targetPort":9443}],"selector":{"app":"hnc","control-plane":"controller-manager","type":"platform"}}}
    creationTimestamp: "2021-06-17T05:43:04Z"
    labels:
      app: hnc
      app.kubernetes.io/instance: hnc
      type: platform
    name: hnc-webhook-service
    namespace: hnc-system
    resourceVersion: "15816400"
    uid: 8ed4be69-10a9-465a-bed1-488bcbbb7aa5
  spec:
    clusterIP: 172.20.87.197
    clusterIPs:
    - 172.20.87.197
    ports:
    - port: 443
      protocol: TCP
      targetPort: 9443
    selector:
      app: hnc
      control-plane: controller-manager
      type: platform
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@micnncim

I ran into the same issue and fixed it by increasing the resources, as the pod didn't have enough capacity.

$ kubectl top po -n hnc-system
NAME                                      CPU(cores)   MEMORY(bytes)
hnc-controller-manager-75bfb4f6b9-mkpwp   101m         190Mi
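(If the limits match the manifest posted above, i.e. cpu: 100m / memory: 300Mi, that CPU reading is right at the cap, so throttling is plausible. One way to raise them; the values here are illustrative, not a tested recommendation:)

$ kubectl -n hnc-system set resources deploy/hnc-controller-manager \
    -c manager --requests=cpu=200m,memory=300Mi --limits=cpu=500m,memory=600Mi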

@adrianludwin
Contributor

Oh, thanks @micnncim, that's interesting; I hadn't considered that!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
