VM: Add support for AppArmor #8299
I don't believe the buildroot VM we have supports AppArmor at this time.
Evidently there is buildroot support for this: http://lists.busybox.net/pipermail/buildroot/2018-May/222316.html Help wanted!
From what I can see, it is not enabled by default in the kernel:
So it is something that needs to be explicitly enabled first:
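To see whether a given kernel has AppArmor built in and active, a quick check like the following can be run inside the VM (e.g. via `minikube ssh`); it only relies on standard Linux sysfs paths:

```shell
#!/bin/sh
# Report whether the running kernel has AppArmor built in and enabled.
# /sys/module/apparmor/parameters/enabled reads "Y" when AppArmor is active.
if [ -f /sys/module/apparmor/parameters/enabled ]; then
    echo "AppArmor built in, enabled=$(cat /sys/module/apparmor/parameters/enabled)"
else
    echo "AppArmor not built into this kernel"
fi
# The securityfs mount point only exists when the AppArmor LSM is active:
if [ -d /sys/kernel/security/apparmor ]; then
    echo "securityfs: apparmor present"
else
    echo "securityfs: apparmor absent"
fi
```

On the buildroot VM above this prints "not built into this kernel", which is the problem this issue tracks.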
Ubuntu 20.04 has this kernel config (for 5.4):
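The config snippet referenced above was stripped from the page; purely for illustration (reconstructed from general knowledge of Ubuntu's 5.4 kernels, not from the original snippet), the AppArmor-related options look roughly like:

```
# Illustrative AppArmor options from an Ubuntu 5.4 kernel config
# (not the original snippet; verify against /boot/config-$(uname -r)):
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_HASH=y
CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
# Since 5.1, the set of active LSMs and their order comes from CONFIG_LSM:
CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,apparmor"
```

The buildroot kernel used by the VM would need equivalent options enabled.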
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Hello @tstromberg, is AppArmor supported in minikube now? I am facing the same issue (using minikube v1.26.1 on a Mac).
Any update on this?
Hi folks, could we get an update on this?
Steps to reproduce the issue:
Full output of failed command:
Expected output:
According to Kubernetes AppArmor documentation
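The Kubernetes AppArmor documentation of this era applies a profile through a pod annotation, so the expected behavior is roughly that a pod like the following is admitted and confined (the `k8s-apparmor-example-deny-write` profile name is the documentation's example and must already be loaded on the node's kernel):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    # Apply the named AppArmor profile to the container "hello".
    # Requires an AppArmor-enabled kernel on the node, which the
    # minikube buildroot VM lacks.
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
```

On a node without AppArmor, the kubelet instead rejects the pod with an "AppArmor is not enabled" style failure.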
Full output of `minikube start` command used, if not already included:

Optional: Full output of `minikube logs` command:

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
c550bce254f39 67da37a9a360e 2 minutes ago Running coredns 0 af5d489a5e769
217ca7f371ac9 67da37a9a360e 2 minutes ago Running coredns 0 022df310a0436
38b7a53bd2134 4689081edb103 2 minutes ago Running storage-provisioner 0 594afb1af8aff
a71ea7e9e73d7 0d40868643c69 2 minutes ago Running kube-proxy 0 909ebcceb8c3e
1d8f5410958af ace0a8c17ba90 2 minutes ago Running kube-controller-manager 0 4193838e781ca
0bce83925598a a3099161e1375 2 minutes ago Running kube-scheduler 0 f387955437087
722be1b0feefd 303ce5db0e90d 2 minutes ago Running etcd 0 751ac42fda790
801b467c1893c 6ed75ad404bdd 2 minutes ago Running kube-apiserver 0 827dbbf4ca612
==> coredns [217ca7f371ac] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
==> coredns [c550bce254f3] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_05_28T09_45_14_0700
minikube.k8s.io/version=v1.10.1
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 28 May 2020 14:45:11 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Thu, 28 May 2020 14:48:04 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Thu, 28 May 2020 14:45:15 +0000 Thu, 28 May 2020 14:45:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 May 2020 14:45:15 +0000 Thu, 28 May 2020 14:45:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 May 2020 14:45:15 +0000 Thu, 28 May 2020 14:45:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 May 2020 14:45:15 +0000 Thu, 28 May 2020 14:45:15 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.99.145
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3936856Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3936856Ki
pods: 110
System Info:
Machine ID: 85649432560d463daddd54e57dce29f2
System UUID: 94cab39c-573b-4eee-9b0b-02de2605048f
Boot ID: 5f8360a9-3f49-4096-b6c0-2b17ba4579e6
Kernel Version: 4.19.107
OS Image: Buildroot 2019.02.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.8
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system coredns-66bff467f8-bwdfl 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m45s
kube-system coredns-66bff467f8-pz72p 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m45s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system kube-proxy-7jc89 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m45s
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m51s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 750m (37%) 0 (0%)
memory 140Mi (3%) 340Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal Starting 2m53s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m53s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m53s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m53s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m52s kubelet, minikube Updated Node Allocatable limit across pods
Normal NodeReady 2m52s kubelet, minikube Node minikube status is now: NodeReady
Normal Starting 2m44s kube-proxy, minikube Starting kube-proxy.
==> dmesg <==
[ +0.000097] 00:00:00.001957 main OS Product: Linux
[ +0.000034] 00:00:00.001995 main OS Release: 4.19.107
[ +0.000033] 00:00:00.002028 main OS Version: #1 SMP Mon May 11 14:51:04 PDT 2020
[ +0.000079] 00:00:00.002061 main Executable: /usr/sbin/VBoxService
00:00:00.002062 main Process ID: 2095
00:00:00.002062 main Package type: LINUX_64BITS_GENERIC
[ +0.000064] 00:00:00.002142 main 5.2.32 r132073 started. Verbose level = 0
[ +0.398546] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +4.629231] hpet1: lost 287 rtc interrupts
[ +5.004032] hpet1: lost 318 rtc interrupts
[ +3.870764] systemd-fstab-generator[2352]: Ignoring "noauto" for root device
[ +0.077221] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
[ +6.052907] hpet_rtc_timer_reinit: 67 callbacks suppressed
[ +0.000001] hpet1: lost 318 rtc interrupts
[ +4.175761] systemd-fstab-generator[2562]: Ignoring "noauto" for root device
[ +0.825901] hpet1: lost 318 rtc interrupts
[ +0.709948] systemd-fstab-generator[2718]: Ignoring "noauto" for root device
[ +0.467709] systemd-fstab-generator[2789]: Ignoring "noauto" for root device
[ +1.028521] systemd-fstab-generator[2986]: Ignoring "noauto" for root device
[May28 14:45] kauditd_printk_skb: 108 callbacks suppressed
[ +6.306956] hpet_rtc_timer_reinit: 33 callbacks suppressed
[ +0.000001] hpet1: lost 318 rtc interrupts
[ +1.014583] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
[ +3.988364] hpet1: lost 318 rtc interrupts
[ +5.003520] hpet1: lost 319 rtc interrupts
[ +5.001795] hpet_rtc_timer_reinit: 45 callbacks suppressed
[ +0.000011] hpet1: lost 318 rtc interrupts
[ +5.000880] hpet_rtc_timer_reinit: 3 callbacks suppressed
[ +0.000010] hpet1: lost 318 rtc interrupts
[ +5.001135] hpet1: lost 318 rtc interrupts
[ +5.001161] hpet1: lost 318 rtc interrupts
[ +5.001878] hpet1: lost 318 rtc interrupts
[ +5.000240] hpet1: lost 318 rtc interrupts
[ +5.000987] hpet1: lost 319 rtc interrupts
[May28 14:46] hpet1: lost 318 rtc interrupts
[ +5.001088] hpet1: lost 318 rtc interrupts
[ +5.001489] hpet1: lost 318 rtc interrupts
[ +5.000912] hpet1: lost 319 rtc interrupts
[ +5.001249] hpet1: lost 318 rtc interrupts
[ +5.001036] hpet1: lost 318 rtc interrupts
[ +5.000709] hpet1: lost 318 rtc interrupts
[ +4.636311] NFSD: Unable to end grace period: -110
[ +0.365010] hpet1: lost 318 rtc interrupts
[ +5.001314] hpet1: lost 318 rtc interrupts
[ +5.000586] hpet1: lost 318 rtc interrupts
[ +5.001292] hpet1: lost 318 rtc interrupts
[ +5.001422] hpet1: lost 318 rtc interrupts
[May28 14:47] hpet1: lost 318 rtc interrupts
[ +5.002018] hpet1: lost 318 rtc interrupts
[ +5.001686] hpet1: lost 318 rtc interrupts
[ +5.001321] hpet1: lost 318 rtc interrupts
[ +5.001461] hpet1: lost 319 rtc interrupts
[ +5.001569] hpet1: lost 318 rtc interrupts
[ +5.000977] hpet1: lost 318 rtc interrupts
[ +5.001792] hpet1: lost 318 rtc interrupts
[ +5.002787] hpet1: lost 318 rtc interrupts
[ +5.000651] hpet1: lost 318 rtc interrupts
[ +5.001577] hpet1: lost 318 rtc interrupts
[ +5.001237] hpet1: lost 318 rtc interrupts
[May28 14:48] hpet1: lost 318 rtc interrupts
==> etcd [722be1b0feef] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-28 14:45:08.480952 I | etcdmain: etcd Version: 3.4.3
2020-05-28 14:45:08.481086 I | etcdmain: Git SHA: 3cf2f69b5
2020-05-28 14:45:08.481113 I | etcdmain: Go Version: go1.12.12
2020-05-28 14:45:08.481123 I | etcdmain: Go OS/Arch: linux/amd64
2020-05-28 14:45:08.481198 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-28 14:45:08.481357 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-28 14:45:08.481816 I | embed: name = minikube
2020-05-28 14:45:08.481846 I | embed: data dir = /var/lib/minikube/etcd
2020-05-28 14:45:08.481913 I | embed: member dir = /var/lib/minikube/etcd/member
2020-05-28 14:45:08.481941 I | embed: heartbeat = 100ms
2020-05-28 14:45:08.481951 I | embed: election = 1000ms
2020-05-28 14:45:08.481993 I | embed: snapshot count = 10000
2020-05-28 14:45:08.482069 I | embed: advertise client URLs = https://192.168.99.145:2379
2020-05-28 14:45:08.485845 I | etcdserver: starting member 6c11f6602955c704 in cluster aad912dd043c203
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 switched to configuration voters=()
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 became follower at term 0
raft2020/05/28 14:45:08 INFO: newRaft 6c11f6602955c704 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 became follower at term 1
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 switched to configuration voters=(7787276123571078916)
2020-05-28 14:45:08.775951 W | auth: simple token is not cryptographically signed
2020-05-28 14:45:08.777536 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-05-28 14:45:08.780185 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-28 14:45:08.780385 I | embed: listening for metrics on http://127.0.0.1:2381
2020-05-28 14:45:08.780524 I | embed: listening for peers on 192.168.99.145:2380
2020-05-28 14:45:08.780753 I | etcdserver: 6c11f6602955c704 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 switched to configuration voters=(7787276123571078916)
2020-05-28 14:45:08.781089 I | etcdserver/membership: added member 6c11f6602955c704 [https://192.168.99.145:2380] to cluster aad912dd043c203
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 is starting a new election at term 1
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 became candidate at term 2
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 received MsgVoteResp from 6c11f6602955c704 at term 2
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 became leader at term 2
raft2020/05/28 14:45:09 INFO: raft.node: 6c11f6602955c704 elected leader 6c11f6602955c704 at term 2
2020-05-28 14:45:09.187395 I | etcdserver: setting up the initial cluster version to 3.4
2020-05-28 14:45:09.187933 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.99.145:2379]} to cluster aad912dd043c203
2020-05-28 14:45:09.188231 I | embed: ready to serve client requests
2020-05-28 14:45:09.189529 I | embed: serving client requests on 192.168.99.145:2379
2020-05-28 14:45:09.189636 I | embed: ready to serve client requests
2020-05-28 14:45:09.190404 N | etcdserver/membership: set the initial cluster version to 3.4
2020-05-28 14:45:09.190548 I | etcdserver/api: enabled capabilities for version 3.4
2020-05-28 14:45:09.190649 I | embed: serving client requests on 127.0.0.1:2379
==> kernel <==
14:48:07 up 3 min, 0 users, load average: 0.33, 0.64, 0.31
Linux minikube 4.19.107 #1 SMP Mon May 11 14:51:04 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"
==> kube-apiserver [801b467c1893] <==
W0528 14:45:10.100263 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.109552 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.123342 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.125958 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.136943 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.151428 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0528 14:45:10.151451 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0528 14:45:10.165680 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0528 14:45:10.165702 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0528 14:45:10.170020 1 client.go:361] parsed scheme: "endpoint"
I0528 14:45:10.170064 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0528 14:45:10.190485 1 client.go:361] parsed scheme: "endpoint"
I0528 14:45:10.190638 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0528 14:45:11.555568 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0528 14:45:11.555612 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0528 14:45:11.555850 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0528 14:45:11.556402 1 secure_serving.go:178] Serving securely on [::]:8443
I0528 14:45:11.556453 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0528 14:45:11.557158 1 crd_finalizer.go:266] Starting CRDFinalizer
I0528 14:45:11.557467 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0528 14:45:11.557481 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0528 14:45:11.557492 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0528 14:45:11.557495 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0528 14:45:11.557872 1 autoregister_controller.go:141] Starting autoregister controller
I0528 14:45:11.557887 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0528 14:45:11.557944 1 controller.go:86] Starting OpenAPI controller
I0528 14:45:11.558017 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0528 14:45:11.558030 1 naming_controller.go:291] Starting NamingConditionController
I0528 14:45:11.558125 1 establishing_controller.go:76] Starting EstablishingController
I0528 14:45:11.558144 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0528 14:45:11.558151 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0528 14:45:11.558165 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0528 14:45:11.558247 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0528 14:45:11.559217 1 available_controller.go:387] Starting AvailableConditionController
I0528 14:45:11.559300 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0528 14:45:11.559342 1 controller.go:81] Starting OpenAPI AggregationController
I0528 14:45:11.573400 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0528 14:45:11.573416 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
E0528 14:45:11.606291 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.145, ResourceVersion: 0, AdditionalErrorMsg:
I0528 14:45:11.657653 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0528 14:45:11.657799 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0528 14:45:11.658484 1 cache.go:39] Caches are synced for autoregister controller
I0528 14:45:11.660541 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0528 14:45:11.673720 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0528 14:45:12.556016 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0528 14:45:12.556094 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0528 14:45:12.564034 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0528 14:45:12.569047 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0528 14:45:12.569108 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0528 14:45:12.806469 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0528 14:45:12.837906 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0528 14:45:13.028307 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.145]
I0528 14:45:13.029134 1 controller.go:606] quota admission added evaluator for: endpoints
I0528 14:45:13.033178 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0528 14:45:14.316364 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0528 14:45:14.336921 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0528 14:45:14.542488 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0528 14:45:14.815088 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0528 14:45:22.435409 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0528 14:45:22.830895 1 controller.go:606] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [1d8f5410958a] <==
I0528 14:45:20.781579 1 gc_controller.go:89] Starting GC controller
I0528 14:45:20.781586 1 shared_informer.go:223] Waiting for caches to sync for GC
I0528 14:45:20.931034 1 controllermanager.go:533] Started "csrcleaner"
I0528 14:45:20.931246 1 cleaner.go:82] Starting CSR cleaner controller
I0528 14:45:21.181303 1 controllermanager.go:533] Started "persistentvolume-expander"
I0528 14:45:21.181411 1 expand_controller.go:319] Starting expand controller
I0528 14:45:21.181418 1 shared_informer.go:223] Waiting for caches to sync for expand
I0528 14:45:21.431193 1 controllermanager.go:533] Started "endpointslice"
I0528 14:45:21.431282 1 endpointslice_controller.go:213] Starting endpoint slice controller
I0528 14:45:21.431326 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
I0528 14:45:22.337151 1 controllermanager.go:533] Started "garbagecollector"
I0528 14:45:22.337198 1 garbagecollector.go:133] Starting garbage collector controller
I0528 14:45:22.338723 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0528 14:45:22.338998 1 graph_builder.go:282] GraphBuilder running
I0528 14:45:22.357662 1 controllermanager.go:533] Started "cronjob"
I0528 14:45:22.358159 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0528 14:45:22.358198 1 cronjob_controller.go:97] Starting CronJob Manager
W0528 14:45:22.389025 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0528 14:45:22.392378 1 shared_informer.go:230] Caches are synced for TTL
I0528 14:45:22.412767 1 shared_informer.go:230] Caches are synced for ReplicationController
I0528 14:45:22.431325 1 shared_informer.go:230] Caches are synced for daemon sets
I0528 14:45:22.431682 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0528 14:45:22.432413 1 shared_informer.go:230] Caches are synced for job
I0528 14:45:22.432421 1 shared_informer.go:230] Caches are synced for PV protection
I0528 14:45:22.432428 1 shared_informer.go:230] Caches are synced for persistent volume
I0528 14:45:22.432572 1 shared_informer.go:230] Caches are synced for endpoint
I0528 14:45:22.442417 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"728d1a8e-da4a-4aa8-a2e4-203b88332e2a", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-7jc89
E0528 14:45:22.465952 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"728d1a8e-da4a-4aa8-a2e4-203b88332e2a", ResourceVersion:"184", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726273914, loc:(*time.Location)(0x6d07200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0017076c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017076e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001707700), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00175c200), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001707720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001707740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001707780)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0016e8f00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001758708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00059e230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000101d40)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001758758)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0528 14:45:22.466273 1 shared_informer.go:230] Caches are synced for taint
I0528 14:45:22.466316 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0528 14:45:22.466519 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0528 14:45:22.467294 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0528 14:45:22.466882 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0528 14:45:22.467045 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"cc188638-3cc4-4eab-bd18-fc055372ed8b", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0528 14:45:22.476025 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0528 14:45:22.481680 1 shared_informer.go:230] Caches are synced for expand
I0528 14:45:22.481880 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0528 14:45:22.482023 1 shared_informer.go:230] Caches are synced for PVC protection
I0528 14:45:22.482457 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0528 14:45:22.482499 1 shared_informer.go:230] Caches are synced for GC
I0528 14:45:22.483210 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0528 14:45:22.531623 1 shared_informer.go:230] Caches are synced for HPA
I0528 14:45:22.683229 1 shared_informer.go:230] Caches are synced for attach detach
I0528 14:45:22.732439 1 shared_informer.go:230] Caches are synced for stateful set
I0528 14:45:22.827733 1 shared_informer.go:230] Caches are synced for deployment
I0528 14:45:22.834456 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9afe7a9f-052b-4e24-9cbe-fcf79029e203", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0528 14:45:22.852469 1 shared_informer.go:230] Caches are synced for disruption
I0528 14:45:22.852482 1 disruption.go:339] Sending events to api server.
I0528 14:45:22.859754 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8d300b5b-6459-4d43-9603-c8ce8af62bdd", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-bwdfl
I0528 14:45:22.869653 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8d300b5b-6459-4d43-9603-c8ce8af62bdd", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-pz72p
I0528 14:45:22.872724 1 shared_informer.go:230] Caches are synced for namespace
I0528 14:45:22.882532 1 shared_informer.go:230] Caches are synced for service account
I0528 14:45:22.933157 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
E0528 14:45:23.002420 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0528 14:45:23.040625 1 shared_informer.go:230] Caches are synced for garbage collector
I0528 14:45:23.040650 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0528 14:45:23.044298 1 shared_informer.go:230] Caches are synced for resource quota
I0528 14:45:23.058483 1 shared_informer.go:230] Caches are synced for resource quota
I0528 14:45:23.832986 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0528 14:45:23.833011 1 shared_informer.go:230] Caches are synced for garbage collector
==> kube-proxy [a71ea7e9e73d] <==
W0528 14:45:23.371432 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0528 14:45:23.384344 1 node.go:136] Successfully retrieved node IP: 192.168.99.145
I0528 14:45:23.384363 1 server_others.go:186] Using iptables Proxier.
W0528 14:45:23.384369 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0528 14:45:23.384371 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0528 14:45:23.384673 1 server.go:583] Version: v1.18.2
I0528 14:45:23.384889 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0528 14:45:23.384911 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0528 14:45:23.385131 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0528 14:45:23.387832 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0528 14:45:23.387869 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0528 14:45:23.388421 1 config.go:133] Starting endpoints config controller
I0528 14:45:23.388432 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0528 14:45:23.388454 1 config.go:315] Starting service config controller
I0528 14:45:23.388458 1 shared_informer.go:223] Waiting for caches to sync for service config
I0528 14:45:23.488615 1 shared_informer.go:230] Caches are synced for endpoints config
I0528 14:45:23.488653 1 shared_informer.go:230] Caches are synced for service config
==> kube-scheduler [0bce83925598] <==
I0528 14:45:08.099897 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0528 14:45:08.100074 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0528 14:45:08.514964 1 serving.go:313] Generated self-signed cert in-memory
W0528 14:45:11.597642 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0528 14:45:11.597735 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0528 14:45:11.597835 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0528 14:45:11.598034 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0528 14:45:11.625234 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0528 14:45:11.625280 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0528 14:45:11.626208 1 authorization.go:47] Authorization is disabled
W0528 14:45:11.626330 1 authentication.go:40] Authentication is disabled
I0528 14:45:11.626429 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0528 14:45:11.637630 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0528 14:45:11.638050 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0528 14:45:11.637983 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0528 14:45:11.637999 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0528 14:45:11.641771 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0528 14:45:11.642143 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0528 14:45:11.642274 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.642418 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0528 14:45:11.642063 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0528 14:45:11.642105 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0528 14:45:11.642826 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.643166 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0528 14:45:11.643530 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0528 14:45:11.643765 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0528 14:45:11.644048 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0528 14:45:11.645375 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.646157 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0528 14:45:11.647025 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0528 14:45:11.648195 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0528 14:45:11.649328 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.650438 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0528 14:45:11.652029 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0528 14:45:14.638578 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0528 14:45:14.839200 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0528 14:45:14.848095 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Thu 2020-05-28 14:44:33 UTC, end at Thu 2020-05-28 14:48:07 UTC. --
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.788144 4056 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.788495 4056 server.go:1125] Started kubelet
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.790457 4056 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.793459 4056 server.go:145] Starting to listen on 0.0.0.0:10250
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.794279 4056 server.go:393] Adding debug handlers to kubelet server.
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.796143 4056 volume_manager.go:265] Starting Kubelet Volume Manager
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.806690 4056 desired_state_of_world_populator.go:139] Desired state populator starts to run
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.891316 4056 status_manager.go:158] Starting to sync pod status with apiserver
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.891348 4056 kubelet.go:1821] Starting kubelet main sync loop.
May 28 14:45:14 minikube kubelet[4056]: E0528 14:45:14.891382 4056 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.897407 4056 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.948375 4056 kubelet_node_status.go:70] Attempting to register node minikube
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.962113 4056 kubelet_node_status.go:112] Node minikube was previously registered
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.962236 4056 kubelet_node_status.go:73] Successfully registered node minikube
May 28 14:45:14 minikube kubelet[4056]: E0528 14:45:14.993182 4056 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084438 4056 cpu_manager.go:184] [cpumanager] starting with none policy
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084456 4056 cpu_manager.go:185] [cpumanager] reconciling every 10s
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084467 4056 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084596 4056 state_mem.go:88] [cpumanager] updated default cpuset: ""
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084611 4056 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084618 4056 policy_none.go:43] [cpumanager] none policy: Start
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.101550 4056 plugin_manager.go:114] Starting Kubelet Plugin Manager
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.193573 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.195128 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.200430 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.201798 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237806 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237844 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/257ccc1ffa508018717e8c29c822c1d2-kubeconfig") pod "kube-scheduler-minikube" (UID: "257ccc1ffa508018717e8c29c822c1d2")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237866 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d346fc98293de96e68324a214e3ef34a-etcd-certs") pod "etcd-minikube" (UID: "d346fc98293de96e68324a214e3ef34a")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237879 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a12ce7d47900c45535fca8cb6c10d153-ca-certs") pod "kube-apiserver-minikube" (UID: "a12ce7d47900c45535fca8cb6c10d153")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237891 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a12ce7d47900c45535fca8cb6c10d153-k8s-certs") pod "kube-apiserver-minikube" (UID: "a12ce7d47900c45535fca8cb6c10d153")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237942 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-ca-certs") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237959 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-k8s-certs") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237972 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d346fc98293de96e68324a214e3ef34a-etcd-data") pod "etcd-minikube" (UID: "d346fc98293de96e68324a214e3ef34a")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237987 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/a12ce7d47900c45535fca8cb6c10d153-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "a12ce7d47900c45535fca8cb6c10d153")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.238001 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.238014 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-kubeconfig") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b")
May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.238020 4056 reconciler.go:157] Reconciler: start to sync state
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.449035 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:22 minikube kubelet[4056]: E0528 14:45:22.460449 4056 reflector.go:178] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
May 28 14:45:22 minikube kubelet[4056]: E0528 14:45:22.460641 4056 reflector.go:178] object-"kube-system"/"kube-proxy-token-sgsdx": Failed to list *v1.Secret: secrets "kube-proxy-token-sgsdx" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.504866 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.576726 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/000a7c45-4185-4b96-867c-af02473d00f7-kube-proxy") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.576886 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/000a7c45-4185-4b96-867c-af02473d00f7-lib-modules") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.576980 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-sxxhm" (UniqueName: "kubernetes.io/secret/8973aa1e-049c-482a-9bd2-280e3329b562-storage-provisioner-token-sxxhm") pod "storage-provisioner" (UID: "8973aa1e-049c-482a-9bd2-280e3329b562")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.577067 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/000a7c45-4185-4b96-867c-af02473d00f7-xtables-lock") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.577157 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-sgsdx" (UniqueName: "kubernetes.io/secret/000a7c45-4185-4b96-867c-af02473d00f7-kube-proxy-token-sgsdx") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.577243 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/8973aa1e-049c-482a-9bd2-280e3329b562-tmp") pod "storage-provisioner" (UID: "8973aa1e-049c-482a-9bd2-280e3329b562")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.875202 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.923673 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982401 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-fmpvq" (UniqueName: "kubernetes.io/secret/7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28-coredns-token-fmpvq") pod "coredns-66bff467f8-pz72p" (UID: "7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982435 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be3556a5-4210-488f-b4a0-17f5a66683ca-config-volume") pod "coredns-66bff467f8-bwdfl" (UID: "be3556a5-4210-488f-b4a0-17f5a66683ca")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982452 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-fmpvq" (UniqueName: "kubernetes.io/secret/be3556a5-4210-488f-b4a0-17f5a66683ca-coredns-token-fmpvq") pod "coredns-66bff467f8-bwdfl" (UID: "be3556a5-4210-488f-b4a0-17f5a66683ca")
May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982534 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28-config-volume") pod "coredns-66bff467f8-pz72p" (UID: "7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28")
May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.150756 4056 pod_container_deletor.go:77] Container "594afb1af8aff4395419b1d6d54571289d9ee81d716bc085d0675a6bd433f99c" not found in pod's containers
May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.277735 4056 pod_container_deletor.go:77] Container "909ebcceb8c3e2c3bdcfc7a102d8022b679dca10f3bef736a72e1b9c2574d8c5" not found in pod's containers
May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.686830 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-pz72p through plugin: invalid network status for
May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.767772 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-bwdfl through plugin: invalid network status for
May 28 14:45:24 minikube kubelet[4056]: W0528 14:45:24.285683 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-pz72p through plugin: invalid network status for
May 28 14:45:24 minikube kubelet[4056]: W0528 14:45:24.314613 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-bwdfl through plugin: invalid network status for
==> storage-provisioner [38b7a53bd213] <==