Removes storage_driver option from /etc/docker/daemon.json #16235

Closed
wants to merge 3 commits into from

Conversation

@x7upLime (Contributor) commented Apr 4, 2023

This setting is applied by minikube both in /etc/docker/daemon.json and as an override to the service file in
/etc/systemd/system/docker.service.d/10-machine.conf, during the provision phase.

If both directives are set, docker.service will not start.
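For reference, this is the shape of the conflict (values taken from the logs later in this thread). The same directive arrives once from the config file and once as a daemon flag:

/etc/docker/daemon.json (written by minikube's cruntime code):
    { ..., "storage-driver": "overlay2" }

/etc/systemd/system/docker.service.d/10-machine.conf (written by the libmachine provisioner):
    ExecStart=/usr/bin/dockerd ... --storage-driver overlay2 ...

dockerd then refuses to start:
    unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)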

fixes #16231

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Apr 4, 2023
@k8s-ci-robot (Contributor) commented:

Hi @x7upLime. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: x7upLime
Once this PR has been reviewed and has the lgtm label, please assign afbjorklund for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Apr 4, 2023
@x7upLime (Contributor, Author) commented Apr 4, 2023

This was the workaround. It should be safe to remove it from here, because we're already setting it inside the service-file override on the command line.

I'm wondering about Linux distros that are not using systemd, though..

@minikube-bot (Collaborator) commented:

Can one of the admins verify this patch?

@x7upLime (Contributor, Author) commented Apr 9, 2023

The situation is as follows:

1st storage-driver

is set by minikube unconditionally when configuring the docker container runtime (pkg cruntime):

// minikube/pkg/minikube/cruntime/docker.go
func (r *Docker) setCGroup(driver string) error {
	fmt.Printf("configuring docker to use %q as cgroup driver...\n", driver)
	daemonConfig := fmt.Sprintf(`{
"exec-opts": ["native.cgroupdriver=%s"],
"log-driver": "json-file",
"log-opts": {
	"max-size": "100m"
},
"storage-driver": "overlay2"
}
`, driver)
	// ... (rest of the function elided)
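With the systemd cgroup driver, this template renders to essentially the same daemon.json as the old kubernetes installation docs quoted further down in this thread, storage-driver included.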
2nd storage-driver

is set as an override for the systemd service file by libmachine code that minikube relies upon...
It happens only with the ssh and none drivers, because of this check:

// minikube/pkg/minikube/machine/machine.go
// fastDetectProvisioner provides a shortcut for provisioner detection
func fastDetectProvisioner(h *host.Host) (libprovision.Provisioner, error) {
	d := h.Driver.DriverName()
	switch {
	case driver.IsKIC(d):
		return provision.NewUbuntuProvisioner(h.Driver), nil
	case driver.BareMetal(d), driver.IsSSH(d):
		return libprovision.DetectProvisioner(h.Driver)
	default:
		return provision.NewBuildrootProvisioner(h.Driver), nil
	}
}

With every other driver we call minikube code to provision the machine; with the ssh/none drivers we fall back to libmachine's provisioner.

@x7upLime (Contributor, Author) commented Apr 9, 2023

minikube implements 2 libmachine-compatible provisioners:
UbuntuProvisioner -- for the kic driver
BuildrootProvisioner -- for everything else
while libmachine implements a ton of provisioners (machine/libmachine/provision/*.go)

Our storage-driver gets set by the provisioner's GenerateDockerOptions method.
In minikube's codebase, UbuntuProvisioner and BuildrootProvisioner don't set storage-driver,
while inside libmachine's codebase, provisioners may set storage-driver (say *SystemdProvisioner)
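For context, here is a paraphrased sketch (not the verbatim libmachine source) of what a systemd-style provisioner's GenerateDockerOptions does: render a unit-override template whose ExecStart line carries --storage-driver. The template shape and the values below are illustrative, chosen to match the 10-machine.conf diff shown later in this thread:

package main

import (
	"os"
	"text/template"
)

// Paraphrased shape of the engine-options template used by libmachine's
// systemd provisioner (illustrative, not the verbatim source).
const engineConfigTmpl = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{ .DockerPort }} -H unix:///var/run/docker.sock --storage-driver {{ .StorageDriver }} --tlsverify ...
`

func main() {
	t := template.Must(template.New("engine").Parse(engineConfigTmpl))
	// Hypothetical values; 2376 and overlay2 match the override seen in this PR.
	_ = t.Execute(os.Stdout, map[string]any{
		"DockerPort":    2376,
		"StorageDriver": "overlay2",
	})
}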

@x7upLime (Contributor, Author) commented Apr 9, 2023

Solutions I can think of:

  1. Export the driver we're using at runtime as a getter function, and use that value to conditionally change the config for /etc/docker/daemon.json: if driver.IsNone() || driver.IsSSH() { don't insert the storage-driver option }. But: "adding a global variable and ensuring it gets valued at runtime?"

  2. Add a MachineDriver field to the docker cruntime struct, in order to change the config as described above.. This could mean modifying the signature of the function that generates the cruntime config, and updating its call sites in a lot of places:

// New returns an appropriately configured runtime
func New(c Config) (Manager, error) {
	sm := sysinit.New(c.Runner)

	switch c.Type {
	case "", "docker":
		sp := c.Socket
		cs := ""
		// There is no more dockershim socket, in Kubernetes version 1.24 and beyond
		if sp == "" && c.KubernetesVersion.GTE(semver.MustParse("1.24.0-alpha.0")) {
			sp = ExternalDockerCRISocket
			cs = "cri-docker"
		}
		return &Docker{
			Socket:            sp,
			Runner:            c.Runner,
			NetworkPlugin:     c.NetworkPlugin,
			ImageRepository:   c.ImageRepository,
			KubernetesVersion: c.KubernetesVersion,
			Init:              sm,
			UseCRI:            (sp != ""), // !dockershim
			CRIService:        cs,
			MachineDriver:     "???", // hypothetical new field
		}, nil
...

But: "changing a struct (Config) and related initializer, that gets called across all the codebase?"

  3. Somehow "override" the GenerateDockerOptions method of whatever provisioner gets initialized by libmachine (see the sketch below)... But: "a method override just to change an option in a config file?"
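A minimal sketch of what option 3 could look like. The wrapper type (storageDriverStripper) is hypothetical and not part of this PR; provision.Provisioner and provision.DockerOptions are libmachine's, and the import path is an assumption:

package provisionpatch

import (
	"strings"

	"github.com/docker/machine/libmachine/provision"
)

// storageDriverStripper (hypothetical) wraps whatever provisioner libmachine
// detected and strips the storage-driver flag from the options it generates,
// so the flag only ever comes from /etc/docker/daemon.json.
type storageDriverStripper struct {
	provision.Provisioner // e.g. the result of provision.DetectProvisioner
}

func (s storageDriverStripper) GenerateDockerOptions(dockerPort int) (*provision.DockerOptions, error) {
	opts, err := s.Provisioner.GenerateDockerOptions(dockerPort)
	if err != nil {
		return nil, err
	}
	// Crude string surgery, for illustration only.
	opts.EngineOptions = strings.ReplaceAll(opts.EngineOptions, "--storage-driver overlay2 ", "")
	return opts, nil
}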

@x7upLime (Contributor, Author) commented Apr 9, 2023

or...
we could play along with libmachine's intended behavior of taking care of the storage-driver option inside the provisioner logic, and not outside it.
Meaning we don't add unnecessary complexity; we instead adjust the two provisioners inside the minikube codebase.

@afbjorklund (Collaborator) commented Apr 11, 2023

With every other driver we call minikube code to provision the machine; with the ssh/none drivers we fall back to libmachine's provisioner.

It would make more sense to have the none and ssh drivers use the minikube provisioner, instead of mixing/matching.

I'm not sure where this 10-machine.conf comes from, and we should probably not set it up in the first place?

while libmachine implements a ton of provisioners (machine/libmachine/provision/*.go)

The minikube provisioner is currently broken (it doesn't provision docker), but it should be improved instead.

I think it only needs to support the ubuntu and the rhel systems, and not all the other system provisioners.

@afbjorklund (Collaborator) left a review comment:

I think the provisioner should be changed instead, and cruntime left alone for now...

The /etc/docker/daemon.json comes from the old kubernetes installation docs

kubernetes/website@e7e5f0c

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF


@x7upLime (Contributor, Author) commented Apr 11, 2023

I'm not sure where this 10-machine.conf comes from, and we should probably not set it up in the first place?

If I remember correctly, that comes from the systemd provisioner of the docker-machine lib. That is the default behavior of docker-machine's provisioners: it is the provisioner that sets the storage-driver as a .service ExecStart= override.

I think the provisioner should be changed instead, and leave cruntime for now...

It would make more sense to have the none and ssh drivers use the minikube provisioner, instead of mixing/matching.

The minikube provisioner is currently broken (it doesn't provision docker), but it should be improved instead.

I think it only needs to support the ubuntu and the rhel systems, and not all the other system provisioners.

If I understand correctly, you're suggesting that in order to address this issue and others like it, we should first refactor the provisioner code for the none/generic driver (on the minikube side).
We should remove the dependency on the docker-machine lib for generic/none, and support only the use cases where minikube none/ssh is used against an ubuntu/rhel system?

@x7upLime (Contributor, Author) commented:

I can see that inside the minikube codebase, we're using the miniProvisioner interface:

// generic interface for minikube provisioner
type miniProvisioner interface {
	String() string
	CompatibleWithHost() bool
	GenerateDockerOptions(int) (*provision.DockerOptions, error)
	Provision(swarmOptions swarm.Options, authOptions auth.Options, engineOptions engine.Options) error
	GetDriver() drivers.Driver
	GetAuthOptions() auth.Options
	SSHCommand(string) (string, error)
}

instead of the Provisioner interface of docker-machine:

type Provisioner interface {
	fmt.Stringer
	SSHCommander
	GenerateDockerOptions(dockerPort int) (*DockerOptions, error)
	GetDockerOptionsDir() string
	GetAuthOptions() auth.Options
	GetSwarmOptions() swarm.Options
	Package(name string, action pkgaction.PackageAction) error
	Hostname() (string, error)
	SetHostname(hostname string) error
	CompatibleWithHost() bool
	// Do the actual provisioning piece:
	//     1. Set the hostname on the instance.
	//     2. Install Docker if it is not present.
	//     3. Configure the daemon to accept connections over TLS.
	//     4. Copy the needed certificates to the server and local config dir.
	//     5. Configure / activate swarm if applicable.
	Provision(swarmOptions swarm.Options, authOptions auth.Options, engineOptions engine.Options) error
	Service(name string, action serviceaction.ServiceAction) error
	GetDriver() drivers.Driver
	SetOsReleaseInfo(info *OsRelease)
	GetOsReleaseInfo() (*OsRelease, error)
}

The former's method set is contained in the latter's, which is why we're able to use docker-machine provisioners.
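A compile-time assertion makes that concrete (a sketch, assuming both interfaces are visible from one package): Go lets a value of a broader interface type be assigned to a narrower interface type, so the line below compiles precisely because miniProvisioner's method set is contained in Provisioner's:

// Illustrative compile-time check: any libmachine Provisioner can be used
// where minikube expects a miniProvisioner.
var _ miniProvisioner = (provision.Provisioner)(nil)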

@x7upLime (Contributor, Author) commented:

I can see that adjusting the provisioner for minikube, to make it really provision machines with more than a container runtime, would be a great effort and a major rework.. which I would be more than happy to take a shot at 😜

But I'm wondering.. is this something that could be merged soon?
It seems like a long-term idea, and we could face many obstacles along the way.
Does it make sense to make something that complex a prerequisite for an issue like this?
There are a couple of small issues that, once resolved, could improve the none/ssh experience a lot.

@medyagh @spowelljr what do you think about it?

@afbjorklund (Collaborator) commented Apr 11, 2023

Fixing the minikube Provisioner would be a major effort, agreed.

And yes, I meant creating a new provisioner for none and generic

@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Apr 12, 2023
@x7upLime (Contributor, Author) commented:

Taking into consideration all the opinions collected across zoom/slack/github:
all traces of storage-driver have been wiped in the first commit, since, as @afbjorklund pointed out, using overlay2 is the default behavior anyway.
If we wanted to enforce overlay2, the best place is arguably the provisioner; the second commit places the storage-driver directive into the ExecStart= of the docker service for the minikube-codebase provisioners.
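Concretely, after the second commit the service override carries the flag on its ExecStart line, as in the diff further below:

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --default-ulimit=nofile=1048576:1048576 ...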

@x7upLime (Contributor, Author) commented:

As @spowelljr pointed out, there may be calls to the provisioner between minikube stop/start.. and this change may break existing clusters.
Checking backwards compatibility...

@x7upLime (Contributor, Author) commented Apr 12, 2023

The cluster seems to keep running between a minikube start from master and a minikube start from fix-cr_docker_config:

😄  minikube v1.30.1 on Ubuntu 22.10
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...
🤦  StartHost failed, but will try again: provision: ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err     : Process exited with status 1
output  : --- /lib/systemd/system/docker.service	2023-04-12 13:39:58.584369590 +0000
+++ /lib/systemd/system/docker.service.new	2023-04-12 13:40:23.016636132 +0000
@@ -25,7 +25,7 @@
 # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
 ExecStart=
-ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
 ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.26.3 on Docker 23.0.3 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
And the journalctl logs:
-- Logs begin at Wed 2023-04-12 13:39:57 UTC, end at Wed 2023-04-12 13:41:27 UTC. --
Apr 12 13:39:57 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.526771737Z" level=info msg="Starting up"
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527588104Z" level=info msg="[core] [Channel #1] Channel created" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527608260Z" level=info msg="[core] [Channel #1] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527636216Z" level=info msg="[core] [Channel #1] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527646599Z" level=info msg="[core] [Channel #1] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527730407Z" level=info msg="[core] [Channel #1] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527772219Z" level=info msg="[core] [Channel #1] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527820647Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel created" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527863574Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527898021Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.527908037Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.528240705Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.528348427Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529577426Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529588249Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529601516Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529606885Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529634065Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529647550Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529660451Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529670836Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529680903Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529728667Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529764700Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.529779689Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.541437300Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.557226569Z" level=info msg="Loading containers: start."
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.607912178Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.738345985Z" level=info msg="Loading containers: done."
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.744811363Z" level=info msg="Docker daemon" commit=59118bf graphdriver=overlay2 version=23.0.3
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.744909483Z" level=info msg="Daemon has completed initialization"
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.773335856Z" level=info msg="[core] [Server #7] Server created" module=grpc
Apr 12 13:39:57 minikube systemd[1]: Started Docker Application Container Engine.
Apr 12 13:39:57 minikube dockerd[142]: time="2023-04-12T13:39:57.775778142Z" level=info msg="API listen on /run/docker.sock"
Apr 12 13:39:58 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Apr 12 13:39:58 minikube systemd[1]: Stopping Docker Application Container Engine...
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.952783499Z" level=info msg="Processing signal 'terminated'"
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.953161403Z" level=info msg="[core] [Channel #1] Channel Connectivity change to SHUTDOWN" module=grpc
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.953174323Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to SHUTDOWN" module=grpc
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.953181679Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel deleted" module=grpc
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.953184604Z" level=info msg="[core] [Channel #1] Channel deleted" module=grpc
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.953221330Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 12 13:39:58 minikube dockerd[142]: time="2023-04-12T13:39:58.953449684Z" level=info msg="Daemon shutdown complete"
Apr 12 13:39:58 minikube systemd[1]: docker.service: Succeeded.
Apr 12 13:39:58 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 12 13:39:58 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979058382Z" level=info msg="Starting up"
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979783626Z" level=info msg="[core] [Channel #1] Channel created" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979794335Z" level=info msg="[core] [Channel #1] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979807273Z" level=info msg="[core] [Channel #1] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979812904Z" level=info msg="[core] [Channel #1] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979868936Z" level=info msg="[core] [Channel #1] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979911266Z" level=info msg="[core] [Channel #1] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979952821Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel created" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.979978496Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980016218Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980017137Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980213372Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980220788Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980676241Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980688517Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980703023Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980709155Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980723309Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980738268Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980753734Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980763969Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980773400Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.980775968Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.981248410Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:39:58 minikube dockerd[398]: time="2023-04-12T13:39:58.981295433Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.032912262Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.035881813Z" level=info msg="Loading containers: start."
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.535080236Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.594048795Z" level=info msg="Loading containers: done."
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.603999307Z" level=info msg="Docker daemon" commit=59118bf graphdriver=overlay2 version=23.0.3
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.604033223Z" level=info msg="Daemon has completed initialization"
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.618714540Z" level=info msg="[core] [Server #7] Server created" module=grpc
Apr 12 13:39:59 minikube systemd[1]: Started Docker Application Container Engine.
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.621023820Z" level=info msg="API listen on [::]:2376"
Apr 12 13:39:59 minikube dockerd[398]: time="2023-04-12T13:39:59.624337838Z" level=info msg="API listen on /var/run/docker.sock"
Apr 12 13:40:00 minikube systemd[1]: Stopping Docker Application Container Engine...
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.382515244Z" level=info msg="Processing signal 'terminated'"
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.383212077Z" level=info msg="[core] [Channel #1] Channel Connectivity change to SHUTDOWN" module=grpc
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.383232427Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to SHUTDOWN" module=grpc
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.383245095Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel deleted" module=grpc
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.383250816Z" level=info msg="[core] [Channel #1] Channel deleted" module=grpc
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.383307904Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 12 13:40:00 minikube dockerd[398]: time="2023-04-12T13:40:00.383587674Z" level=info msg="Daemon shutdown complete"
Apr 12 13:40:00 minikube systemd[1]: docker.service: Succeeded.
Apr 12 13:40:00 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 12 13:40:00 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450142878Z" level=info msg="Starting up"
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450812067Z" level=info msg="[core] [Channel #1] Channel created" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450824046Z" level=info msg="[core] [Channel #1] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450838631Z" level=info msg="[core] [Channel #1] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450846871Z" level=info msg="[core] [Channel #1] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450903051Z" level=info msg="[core] [Channel #1] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450935689Z" level=info msg="[core] [Channel #1] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450969306Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel created" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450984182Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.450995023Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.451006372Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.451216638Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.451239098Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452041764Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452055463Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452072733Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452083658Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452112769Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452136640Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452163205Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452187839Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452218216Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452221899Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452367296Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.452380368Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.458536119Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.466701636Z" level=info msg="Loading containers: start."
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.664517259Z" level=info msg="Processing signal 'terminated'"
Apr 12 13:40:00 minikube dockerd[646]: time="2023-04-12T13:40:00.991812944Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.086071166Z" level=info msg="Loading containers: done."
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.096181898Z" level=info msg="Docker daemon" commit=59118bf graphdriver=overlay2 version=23.0.3
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.096215460Z" level=info msg="Daemon has completed initialization"
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.109265221Z" level=info msg="[core] [Server #7] Server created" module=grpc
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.111399484Z" level=info msg="API listen on [::]:2376"
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.115013632Z" level=info msg="API listen on /var/run/docker.sock"
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.115654348Z" level=info msg="[core] [Channel #1] Channel Connectivity change to SHUTDOWN" module=grpc
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.115691662Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to SHUTDOWN" module=grpc
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.115704314Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel deleted" module=grpc
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.115710878Z" level=info msg="[core] [Channel #1] Channel deleted" module=grpc
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.115814849Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.116243410Z" level=info msg="Daemon shutdown complete"
Apr 12 13:40:01 minikube dockerd[646]: time="2023-04-12T13:40:01.116265138Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 12 13:40:01 minikube systemd[1]: docker.service: Succeeded.
Apr 12 13:40:01 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 12 13:40:01 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.163724687Z" level=info msg="Starting up"
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164498584Z" level=info msg="[core] [Channel #1] Channel created" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164514930Z" level=info msg="[core] [Channel #1] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164539086Z" level=info msg="[core] [Channel #1] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164549606Z" level=info msg="[core] [Channel #1] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164628581Z" level=info msg="[core] [Channel #1] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164664321Z" level=info msg="[core] [Channel #1] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164699684Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel created" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164730193Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164752589Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164787238Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164936711Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.164950700Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165393975Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165400027Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165409148Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165417931Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165434186Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165454604Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165471607Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165482558Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165492654Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165522507Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165606059Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165618125Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.165823623Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.210244078Z" level=info msg="Loading containers: start."
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.685791573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.774058044Z" level=info msg="Loading containers: done."
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.781553390Z" level=info msg="Docker daemon" commit=59118bf graphdriver=overlay2 version=23.0.3
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.781587781Z" level=info msg="Daemon has completed initialization"
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.794725871Z" level=info msg="[core] [Server #7] Server created" module=grpc
Apr 12 13:40:01 minikube systemd[1]: Started Docker Application Container Engine.
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.797736818Z" level=info msg="API listen on [::]:2376"
Apr 12 13:40:01 minikube dockerd[856]: time="2023-04-12T13:40:01.800998409Z" level=info msg="API listen on /var/run/docker.sock"
Apr 12 13:40:23 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Apr 12 13:40:23 minikube systemd[1]: Stopping Docker Application Container Engine...
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.559804881Z" level=info msg="Processing signal 'terminated'"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.576835598Z" level=info msg="ignoring event" container=bf5d59b735c0c429d3bfe52540bd64ec41dfc84aa1fb6cd341ab773d26a0c8bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.576916668Z" level=info msg="ignoring event" container=4fef7b444494e40decd4354a1b3a8a0c397be2737cf8a4534a5a00dbd71d50b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.577617660Z" level=info msg="ignoring event" container=a320a906af37215044aa66cba61224abdcfe4f578e7047ad9369b13041146666 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.577856094Z" level=info msg="ignoring event" container=e847e5cdbbe3ec77c0cc7c7b25790dc32f394b12318e2b67e9513e5052c17ae2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.578299277Z" level=info msg="ignoring event" container=8b53de007e558210b2fb8c5bbaa391eccb0dc05c70875417ca2efa104018a3e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.578390269Z" level=info msg="ignoring event" container=210f6715387aa2d2b2ecf5989c630ced71af5c65dbd7206cf824c296eeffaa0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.578465601Z" level=info msg="ignoring event" container=03339f2c4cf88d39bc889c537cb269e210e9de35e9003e5904da00c156505bdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.616326000Z" level=info msg="ignoring event" container=0a9d612b26502fe3c5097c959c9ac78203f0ae554c9bde0394afae4389e9e8b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.616356690Z" level=info msg="ignoring event" container=b455d21bd1e1247f35e557d5d0aa9d1517d9d2795608e4b05e4ec882b7587ccd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.616374126Z" level=info msg="ignoring event" container=5841c47238c84706c84bcf1e3cd1505e79331d38253a70b470520c587fbf9296 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.616385711Z" level=info msg="ignoring event" container=78c8f5db8371e59160371a5d67f680dae46f3855c3616ffb85980b48c53819ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:23 minikube dockerd[856]: time="2023-04-12T13:40:23.693942356Z" level=info msg="ignoring event" container=cc78af88925da87997c2deab125c5782f56d640095d55d7de4b1eb22843e6b6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:26 minikube dockerd[856]: time="2023-04-12T13:40:26.286570544Z" level=info msg="ignoring event" container=208dcd5d5d7bea40abcf9fcfbd639a6410c8fed92a330c8e577252efb0abde6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:27 minikube dockerd[856]: time="2023-04-12T13:40:27.265488975Z" level=info msg="ignoring event" container=5ae738782adc79487100b55a70046926b8e483048f426f7ccfa53ddb924043e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:29 minikube dockerd[856]: time="2023-04-12T13:40:29.126719525Z" level=info msg="ignoring event" container=f483c39ab5a768f92999959ecfdf0780cf93371ea39d4ba4b96cc6c5e49c86cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 13:40:29 minikube dockerd[856]: time="2023-04-12T13:40:29.214233200Z" level=info msg="[core] [Channel #1] Channel Connectivity change to SHUTDOWN" module=grpc

I can see that at least for a while, there are two storage-driver directives..
Apr 12 13:40:29 minikube dockerd[4216]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)

But then I can no longer see the storage-driver inside /etc/docker/daemon.json,
and systemctl restart docker works.

How bad is that?

Same behaviour with --driver=kvm.
It seems that minikube start updates older configurations.

@spowelljr (Member) commented:

ok-to-build-iso

@minikube-bot (Collaborator) commented:

Hi @x7upLime, we have updated your PR with the reference to newly built ISO. Pull the changes locally if you want to test with them or update your PR further.

@spowelljr (Member) commented:

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Apr 20, 2023
@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Apr 20, 2023

x7upLime and others added 3 commits April 24, 2023 23:22
The code for the docker runtime, (*Docker).setCGroup, used to configure
the storage driver as well.. which could be seen as wrong behaviour.

We're also removing the storage-driver directive from the daemon.json files
that are baked inside the ISO..
However we look at it, it is not optimal to bake a config into the ISO used
by the VM driver; building a new ISO is painful.
@x7upLime (Contributor, Author) commented:

/retest

@spowelljr (Member) commented:

/retest-this-please

@minikube-pr-bot commented:

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16235) |
+----------------+----------+---------------------+
| minikube start | 51.5s    | 52.7s               |
| enable ingress | 27.4s    | 25.7s               |
+----------------+----------+---------------------+

Times for minikube (PR 16235) start: 53.4s 53.6s 51.7s 52.6s 52.1s
Times for minikube start: 51.8s 49.0s 52.5s 51.9s 52.3s

Times for minikube ingress: 27.8s 25.8s 30.3s 24.7s 28.2s
Times for minikube (PR 16235) ingress: 28.2s 25.2s 25.3s 25.2s 24.8s

docker driver with docker runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 16235) |
+-------------------+----------+---------------------+
| minikube start    | 23.0s    | 24.1s               |
| ⚠️  enable ingress | 33.7s    | 39.9s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube ingress: 22.0s 21.0s 23.0s 21.0s 81.5s
Times for minikube (PR 16235) ingress: 21.0s 24.0s 21.5s 50.5s 82.5s

Times for minikube start: 21.6s 23.4s 22.2s 25.1s 22.8s
Times for minikube (PR 16235) start: 25.6s 22.8s 22.0s 24.8s 25.3s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16235) |
+----------------+----------+---------------------+
| minikube start | 21.1s    | 21.5s               |
| enable ingress | 38.2s    | 35.5s               |
+----------------+----------+---------------------+

Times for minikube start: 20.5s 20.7s 23.5s 20.5s 20.4s
Times for minikube (PR 16235) start: 21.5s 21.0s 20.7s 23.7s 20.4s

Times for minikube ingress: 32.0s 47.6s 32.5s 31.5s 47.6s
Times for minikube (PR 16235) ingress: 49.0s 31.5s 33.0s 32.5s 31.5s

@minikube-pr-bot commented:

These are the flake rates of all failed tests.

Environment              Failed Tests                                                             Flake Rate (%)
Docker_Cloud_Shell       TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (gopogh)     0.00
Docker_Cloud_Shell       TestStartStop/group/cloud-shell/serial/Pause (gopogh)                    0.00
Docker_Cloud_Shell       TestStartStop/group/cloud-shell/serial/SecondStart (gopogh)              0.00
Docker_Cloud_Shell       TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (gopogh)   0.00
KVM_Linux                TestKVMDriverInstallOrUpdate (gopogh)                                    0.00
Docker_Linux_containerd  TestMultiNode/serial/DeleteNode (gopogh)                                 2.29
Docker_Linux_containerd  TestMultiNode/serial/RestartKeepsNodes (gopogh)                          4.57
Hyperkit_macOS           TestPause/serial/SecondStartNoReconfiguration (gopogh)                   10.78
KVM_Linux                TestPause/serial/SecondStartNoReconfiguration (gopogh)                   14.04

To see the flake rates of all tests by environment, click here.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 3, 2023
@k8s-ci-robot (Contributor) commented:

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@f0rkz commented Jun 29, 2023

Seeing the same issue on Debian... Any progress on this fix?

@x7upLime x7upLime closed this Jan 16, 2024
Labels: cncf-cla: yes · needs-rebase · ok-to-test · size/S

Successfully merging this pull request may close these issues:
Conflict between docker config and docker service with ssh driver

7 participants