panic="object does not implement the Object interfaces" #3105

@dennispan

Description

Is there an existing issue for this?

  • I have searched the existing issues

Version

1.0.1

What happened?

Occasionally we see the virtual-kubelet pod enter the Error state and then CrashLoopBackOff. The pod log shows the following:

E0817 22:14:39.526381       1 runtime.go:258] "Observed a panic" panic="object does not implement the Object interfaces" panicGoValue="&errors.errorString{s:\"object does not implement the Object interfaces\"}" stacktrace=<
    goroutine 897 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic({0x29b7860, 0x3ea46c0}, {0x2147800, 0xc00059b420})
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:107 +0xbc
    k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x29b7860, 0x3ea46c0}, {0x2147800, 0xc00059b420}, {0x3ea46c0, 0x0, 0x4409d8?})
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:82 +0x5a
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc003984000?})
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:59 +0x105
    panic({0x2147800?, 0xc00059b420?})
        /usr/local/go/src/runtime/panic.go:792 +0x132
    k8s.io/apimachinery/pkg/util/runtime.Must({0x298a600?, 0xc00059b420?})
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:258 +0x2c
    github.com/liqotech/liqo/pkg/virtualKubelet/reflection/generic.(*reflector).handlers-fm.(*reflector).handlers.func1({0x2538579, 0x7}, {0x229d480?, 0xc03105eba0?})
        /tmp/builder/pkg/virtualKubelet/reflection/generic/reflector.go:307 +0x8a
    github.com/liqotech/liqo/pkg/virtualKubelet/reflection/generic.(*reflector).handlers-fm.(*reflector).handlers.func4({0x229d480?, 0xc03105eba0?})
        /tmp/builder/pkg/virtualKubelet/reflection/generic/reflector.go:327 +0x33
    k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
        /go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/controller.go:260
    k8s.io/client-go/tools/cache.(*processorListener).run.func1()
        /go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/shared_informer.go:983 +0x122
    k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000680808?)
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
    k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc006644f70, {0x298e720, 0xc00398a000}, 0x1, 0xc003988000)
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
    k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000e90f70, 0x3b9aca00, 0x0, 0x1, 0xc003988000)
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
    k8s.io/apimachinery/pkg/util/wait.Until(...)
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
    k8s.io/client-go/tools/cache.(*processorListener).run(0xc0034ab170)
        /go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/shared_informer.go:972 +0x5a
    k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:72 +0x4c
    created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start in goroutine 870
        /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:70 +0x73
 >
panic: object does not implement the Object interfaces [recovered]
    panic: object does not implement the Object interfaces

goroutine 897 [running]:
k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x29b7860, 0x3ea46c0}, {0x2147800, 0xc00059b420}, {0x3ea46c0, 0x0, 0x4409d8?})
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:89 +0xe7
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc003984000?})
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:59 +0x105
panic({0x2147800?, 0xc00059b420?})
    /usr/local/go/src/runtime/panic.go:792 +0x132
k8s.io/apimachinery/pkg/util/runtime.Must({0x298a600?, 0xc00059b420?})
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/runtime/runtime.go:258 +0x2c
github.com/liqotech/liqo/pkg/virtualKubelet/reflection/generic.(*reflector).handlers-fm.(*reflector).handlers.func1({0x2538579, 0x7}, {0x229d480?, 0xc03105eba0?})
    /tmp/builder/pkg/virtualKubelet/reflection/generic/reflector.go:307 +0x8a
github.com/liqotech/liqo/pkg/virtualKubelet/reflection/generic.(*reflector).handlers-fm.(*reflector).handlers.func4({0x229d480?, 0xc03105eba0?})
    /tmp/builder/pkg/virtualKubelet/reflection/generic/reflector.go:327 +0x33
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
    /go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/controller.go:260
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
    /go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/shared_informer.go:983 +0x122
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000680808?)
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc07c543f70, {0x298e720, 0xc00398a000}, 0x1, 0xc003988000)
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000e90f70, 0x3b9aca00, 0x0, 0x1, 0xc003988000)
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0034ab170)
    /go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/shared_informer.go:972 +0x5a
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:72 +0x4c
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start in goroutine 870
    /go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:70 +0x73
stream closed EOF for liqo-tenant-provider-prod-trg-gpu/vk-liqo-prod-trg-gpu-b54c597d-p8x6n (virtual-kubelet)
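
Not part of the original report, but some context on the trace: the panic originates in an OnDelete handler (ResourceEventHandlerFuncs.OnDelete at reflector.go:327), and the error string matches what apimachinery's meta.Accessor returns for an object that does not implement metav1.Object. One plausible trigger, consistent with the congested-network reproduction below, is that after a watch disruption client-go delivers a cache.DeletedFinalStateUnknown tombstone to OnDelete instead of the typed object. The sketch below is a minimal, self-contained illustration of the defensive unwrap pattern; the Pod and DeletedFinalStateUnknown types here are local stand-ins for the real corev1 and client-go types, and onDelete is a hypothetical handler, not Liqo's actual code:

```go
package main

import "fmt"

// Pod is a stand-in for the real typed object (assumption: corev1.Pod in
// the actual code path).
type Pod struct{ Name string }

// DeletedFinalStateUnknown mirrors cache.DeletedFinalStateUnknown: when a
// watch is dropped and the relist finds an object already gone, client-go
// hands OnDelete this tombstone instead of the object itself.
type DeletedFinalStateUnknown struct {
	Key string
	Obj interface{}
}

// onDelete shows the defensive pattern: unwrap a possible tombstone before
// the type assertion, and return an error instead of panicking when the
// payload is still not the expected type.
func onDelete(obj interface{}) (*Pod, error) {
	if tombstone, ok := obj.(DeletedFinalStateUnknown); ok {
		obj = tombstone.Obj
	}
	pod, ok := obj.(*Pod)
	if !ok {
		return nil, fmt.Errorf("unexpected object type %T", obj)
	}
	return pod, nil
}

func main() {
	// A normal delete event carries the object directly...
	if p, err := onDelete(&Pod{Name: "direct"}); err == nil {
		fmt.Println(p.Name)
	}
	// ...but after a watch drop it may arrive wrapped in a tombstone.
	if p, err := onDelete(DeletedFinalStateUnknown{Key: "ns/wrapped", Obj: &Pod{Name: "wrapped"}}); err == nil {
		fmt.Println(p.Name)
	}
}
```

A handler that skips the tombstone check and passes the event object straight into meta.Accessor (whose error then reaches runtime.Must) would panic exactly as in the trace above.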

Relevant log output

See the stack trace under "What happened?" above.
How can we reproduce the issue?

  1. Offload close to 500 pods concurrently.
  2. Congest the network between the consumer and the provider.

We only see this when both of the above conditions hold.

Provider or distribution

RKE2 for the provider; EKS for the consumer.

CNI version

No response

Kernel Version

No response

Kubernetes Version

1.32 for the provider; 1.33 for the consumer.

Code of Conduct

  • I agree to follow this project's Code of Conduct

Metadata

Assignees

No one assigned

    Labels

    bug: Report a bug encountered while operating Liqo
