
Minikube takes ~2 minutes to start even if the cluster was provisioned already #4847

Closed
blueelvis opened this issue Jul 23, 2019 · 4 comments
Labels: area/performance · help wanted · kind/bug · priority/backlog

Comments

@blueelvis (Contributor) commented:

Even when the cluster has already been provisioned and is merely stopped, starting it back up takes roughly 2.5 minutes (at least on Windows). Most of that time appears to be spent repeating the same housekeeping steps that run when the cluster is first created.

I'm not sure how much of this can be improved, but it should not take this long. What do you think?

The logs follow:

PS C:\utilities> .\minikube-windows-amd64.exe start --alsologtostderr --v=8
I0723 14:17:31.193497   25204 notify.go:124] Checking for updates...
* minikube v1.2.0 on windows (amd64)
I0723 14:17:31.513304   25204 downloader.go:60] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.2.0.iso
I0723 14:17:31.515305   25204 start.go:868] Saving config:
{
    "MachineConfig": {
        "KeepContext": false,
        "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.2.0.iso",
        "Memory": 2000,
        "CPUs": 2,
        "DiskSize": 20000,
        "VMDriver": "virtualbox",
        "ContainerRuntime": "docker",
        "HyperkitVpnKitSock": "",
        "HyperkitVSockPorts": [],
        "DockerEnv": null,
        "InsecureRegistry": null,
        "RegistryMirror": null,
        "HostOnlyCIDR": "192.168.99.1/24",
        "HypervVirtualSwitch": "",
        "KVMNetwork": "default",
        "KVMQemuURI": "qemu:///system",
        "KVMGPU": false,
        "KVMHidden": false,
        "DockerOpt": null,
        "DisableDriverMounts": false,
        "NFSShare": [],
        "NFSSharesRoot": "/nfsshares",
        "UUID": "",
        "NoVTXCheck": false,
        "DNSProxy": false,
        "HostDNSResolver": true
    },
    "KubernetesConfig": {
        "KubernetesVersion": "v1.15.0",
        "NodeIP": "",
        "NodePort": 8443,
        "NodeName": "minikube",
        "APIServerName": "minikubeCA",
        "APIServerNames": null,
        "APIServerIPs": null,
        "DNSDomain": "cluster.local",
        "ContainerRuntime": "docker",
        "CRISocket": "",
        "NetworkPlugin": "",
        "FeatureGates": "",
        "ServiceCIDR": "10.96.0.0/12",
        "ImageRepository": "",
        "ExtraOptions": null,
        "ShouldLoadCachedImages": true,
        "EnableDefaultCNI": false
    }
}
I0723 14:17:31.520302   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-proxy:v1.15.0 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.15.0
I0723 14:17:31.520302   25204 cache_images.go:286] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at C:\Users\Pranav.Jituri\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
I0723 14:17:31.520302   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-scheduler:v1.15.0 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.15.0
I0723 14:17:31.520302   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-controller-manager:v1.15.0 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.15.0
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-apiserver:v1.15.0 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.15.0
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/pause:3.1 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\pause_3.1
I0723 14:17:31.540302   25204 cluster.go:96] Skipping create...Using existing machine configuration
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/coredns:1.3.1 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\coredns_1.3.1
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/etcd:3.3.10 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\etcd_3.3.10
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v9.0 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
*

I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
! Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the hyperv driver.
I0723 14:17:31.521299   25204 cache_images.go:286] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 at C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
! To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
I0723 14:17:31.561294   25204 cache_images.go:83] Successfully cached all images.
! Alternatively, you may delete the existing VM using `minikube delete -p minikube`
*

[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Off

[stderr =====>] :
I0723 14:17:32.685079   25204 cluster.go:115] Machine state:  Stopped
* Restarting existing hyperv VM for "minikube" ...
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM minikube
[stdout =====>] :
[stderr =====>] :
Waiting for host to start...
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[... the identical Get-VM state/IP address polling above repeats for roughly 45 seconds (14:17:32 to 14:18:19) while the network adapter has no address; elided for brevity ...]
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
I0723 14:18:19.906173   25204 cluster.go:133] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:}
I0723 14:18:19.906173   25204 cluster.go:152] configureHost: *host.Host &{ConfigVersion:3 Driver:0xc000440ba0 DriverName:hyperv HostOptions:0xc000060dc0 Name:minikube RawDriver:[... JSON machine config stored as a byte slice (IPAddress "172.17.88.150", MachineName "minikube", SSHUser "docker", SSHPort 22, VSwitch "Default Switch", DiskSize 20000, MemSize 2000, CPU 2, DisableDynamicMemory true); decimal byte dump elided for brevity ...]}
* Waiting for SSH access ...
I0723 14:18:19.915174   25204 cluster.go:170] Configuring auth for driver hyperv ...
Waiting for SSH to be available...
Getting to WaitForSSH function...
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: <nil>:
Detecting the provisioner...
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
cat /etc/os-release
SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2018.05.3
ID=buildroot
VERSION_ID=2018.05.3
PRETTY_NAME="Buildroot 2018.05.3"

found compatible host: buildroot
setting hostname "minikube"
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
SSH cmd err, output: <nil>: minikube

[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
SSH cmd err, output: <nil>:
set auth options {CertDir:C:\Users\Pranav.Jituri\.minikube CaCertPath:C:\Users\Pranav.Jituri\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\Pranav.Jituri\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\Pranav.Jituri\.minikube\machines\server.pem ServerKeyPath:C:\Users\Pranav.Jituri\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\Pranav.Jituri\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\Pranav.Jituri\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\Pranav.Jituri\.minikube}
setting up certificates
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
generating server cert: C:\Users\Pranav.Jituri\.minikube\machines\server.pem ca-key=C:\Users\Pranav.Jituri\.minikube\certs\ca.pem private-key=C:\Users\Pranav.Jituri\.minikube\certs\ca-key.pem org=Pranav.Jituri.minikube san=[172.17.88.150 localhost]
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
I0723 14:18:31.479238   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/ca.pem
I0723 14:18:31.527237   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0723 14:18:31.538238   25204 ssh_runner.go:182] Transferring 1054 bytes to ca.pem
I0723 14:18:31.539503   25204 ssh_runner.go:195] ca.pem: copied 1054 bytes
I0723 14:18:31.544236   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/server.pem
I0723 14:18:31.559233   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0723 14:18:31.573248   25204 ssh_runner.go:182] Transferring 1131 bytes to server.pem
I0723 14:18:31.576247   25204 ssh_runner.go:195] server.pem: copied 1131 bytes
I0723 14:18:31.586274   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/server-key.pem
I0723 14:18:31.597238   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0723 14:18:31.613236   25204 ssh_runner.go:182] Transferring 1675 bytes to server-key.pem
I0723 14:18:31.621248   25204 ssh_runner.go:195] server-key.pem: copied 1675 bytes
Setting Docker configuration on the remote daemon...
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket

[Service]
Type=notify

# DOCKER_RAMDISK disables pivot_root in Docker, using MS_MOVE instead.
Environment=DOCKER_RAMDISK=yes


# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service
SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket

[Service]
Type=notify

# DOCKER_RAMDISK disables pivot_root in Docker, using MS_MOVE instead.
Environment=DOCKER_RAMDISK=yes


# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

setting minikube options for container-runtime
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
sudo systemctl daemon-reload
SSH cmd err, output: <nil>:
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f restart crio
SSH cmd err, output: <nil>:
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x879280] 0x879250 <nil>  [] 0s} 172.17.88.150 22 <nil> <nil>}
About to run SSH command:
date +%s.%N
SSH cmd err, output: <nil>: 1563871721.658982800

I0723 14:18:41.660198   25204 cluster.go:204] guest clock: 1563871721.658982800
I0723 14:18:41.662211   25204 cluster.go:217] Guest: 2019-07-23 14:18:41.6589828 +0530 IST Remote: 2019-07-23 14:18:39.6831379 +0530 IST m=+73.885105901 (delta=1.9758449s)
I0723 14:18:41.663202   25204 cluster.go:188] guest clock delta is within tolerance: 1.9758449s
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
I0723 14:18:43.703244   25204 start.go:868] Saving config:
{
    "MachineConfig": {
        "KeepContext": false,
        "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.2.0.iso",
        "Memory": 2000,
        "CPUs": 2,
        "DiskSize": 20000,
        "VMDriver": "virtualbox",
        "ContainerRuntime": "docker",
        "HyperkitVpnKitSock": "",
        "HyperkitVSockPorts": [],
        "DockerEnv": null,
        "InsecureRegistry": null,
        "RegistryMirror": null,
        "HostOnlyCIDR": "192.168.99.1/24",
        "HypervVirtualSwitch": "",
        "KVMNetwork": "default",
        "KVMQemuURI": "qemu:///system",
        "KVMGPU": false,
        "KVMHidden": false,
        "DockerOpt": null,
        "DisableDriverMounts": false,
        "NFSShare": [],
        "NFSSharesRoot": "/nfsshares",
        "UUID": "",
        "NoVTXCheck": false,
        "DNSProxy": false,
        "HostDNSResolver": true
    },
    "KubernetesConfig": {
        "KubernetesVersion": "v1.15.0",
        "NodeIP": "172.17.88.150",
        "NodePort": 8443,
        "NodeName": "minikube",
        "APIServerName": "minikubeCA",
        "APIServerNames": null,
        "APIServerIPs": null,
        "DNSDomain": "cluster.local",
        "ContainerRuntime": "docker",
        "CRISocket": "",
        "NetworkPlugin": "",
        "FeatureGates": "",
        "ServiceCIDR": "10.96.0.0/12",
        "ImageRepository": "",
        "ExtraOptions": null,
        "ShouldLoadCachedImages": true,
        "EnableDefaultCNI": false
    }
}
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
I0723 14:18:45.695340   25204 ssh_runner.go:101] SSH: systemctl is-active --quiet service containerd
I0723 14:18:45.746343   25204 ssh_runner.go:101] SSH: systemctl is-active --quiet service crio
I0723 14:18:45.754365   25204 ssh_runner.go:101] SSH: sudo systemctl stop crio
I0723 14:18:45.806343   25204 ssh_runner.go:101] SSH: systemctl is-active --quiet service crio
I0723 14:18:45.819346   25204 ssh_runner.go:101] SSH: sudo systemctl start docker
I0723 14:18:47.266421   25204 ssh_runner.go:137] Run with output: docker version --format '{{.Server.Version}}'
I0723 14:18:47.303783   25204 utils.go:227] > 18.09.6
* Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.15.0
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.15.0
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.15.0
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.15.0
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\etcd_3.3.10
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\coredns_1.3.1
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\pause_3.1
I0723 14:18:49.118864   25204 cache_images.go:199] Loading image from cache: C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
I0723 14:18:49.135899   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/storage-provisioner_v1.8.1
I0723 14:18:49.135899   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/k8s-dns-kube-dns-amd64_1.14.13
I0723 14:18:49.136898   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-apiserver_v1.15.0
I0723 14:18:49.138866   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-proxy_v1.15.0
I0723 14:18:49.138866   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-controller-manager_v1.15.0
I0723 14:18:49.154906   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-scheduler_v1.15.0
I0723 14:18:49.159865   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/etcd_3.3.10
I0723 14:18:49.159865   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/kubernetes-dashboard-amd64_v1.10.1
I0723 14:18:49.159865   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0723 14:18:49.179872   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/coredns_1.3.1
I0723 14:18:49.181869   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-addon-manager_v9.0
I0723 14:18:49.181869   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/k8s-dns-sidecar-amd64_1.14.13
I0723 14:18:49.182865   25204 ssh_runner.go:101] SSH: sudo rm -f /tmp/pause_3.1
I0723 14:18:49.188867   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.225868   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.231867   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.237881   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.246451   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.249453   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.258500   25204 ssh_runner.go:182] Transferring 14267904 bytes to k8s-dns-kube-dns-amd64_1.14.13
I0723 14:18:49.260452   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.291447   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.293460   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.294449   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.294449   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.294449   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.300502   25204 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
I0723 14:18:49.338555   25204 ssh_runner.go:182] Transferring 30113280 bytes to kube-proxy_v1.15.0
I0723 14:18:49.342560   25204 ssh_runner.go:182] Transferring 30522368 bytes to kube-addon-manager_v9.0
I0723 14:18:49.342560   25204 ssh_runner.go:182] Transferring 29871616 bytes to kube-scheduler_v1.15.0
I0723 14:18:49.342560   25204 ssh_runner.go:182] Transferring 47842304 bytes to kube-controller-manager_v1.15.0
I0723 14:18:49.343555   25204 ssh_runner.go:182] Transferring 49275904 bytes to kube-apiserver_v1.15.0
I0723 14:18:49.345560   25204 ssh_runner.go:182] Transferring 20683776 bytes to storage-provisioner_v1.8.1
I0723 14:18:49.401557   25204 ssh_runner.go:182] Transferring 76164608 bytes to etcd_3.3.10
I0723 14:18:49.423099   25204 ssh_runner.go:182] Transferring 12306944 bytes to coredns_1.3.1
I0723 14:18:49.423099   25204 ssh_runner.go:182] Transferring 11769344 bytes to k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0723 14:18:49.427091   25204 ssh_runner.go:182] Transferring 318976 bytes to pause_3.1
I0723 14:18:49.427091   25204 ssh_runner.go:182] Transferring 44910592 bytes to kubernetes-dashboard-amd64_v1.10.1
I0723 14:18:49.430622   25204 ssh_runner.go:182] Transferring 12207616 bytes to k8s-dns-sidecar-amd64_1.14.13
I0723 14:18:49.505272   25204 ssh_runner.go:195] pause_3.1: copied 318976 bytes
I0723 14:18:49.522276   25204 docker.go:97] Loading image: /tmp/pause_3.1
I0723 14:18:49.523271   25204 ssh_runner.go:101] SSH: docker load -i /tmp/pause_3.1
I0723 14:18:49.754273   25204 utils.go:227] > Loaded image: k8s.gcr.io/pause:3.1
I0723 14:18:49.758273   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/pause_3.1
I0723 14:18:49.797268   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\pause_3.1 from cache
I0723 14:18:50.443401   25204 ssh_runner.go:195] k8s-dns-kube-dns-amd64_1.14.13: copied 14267904 bytes
I0723 14:18:50.463399   25204 docker.go:97] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.13
I0723 14:18:50.463399   25204 ssh_runner.go:101] SSH: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.13
I0723 14:18:50.621403   25204 ssh_runner.go:195] k8s-dns-dnsmasq-nanny-amd64_1.14.13: copied 11769344 bytes
I0723 14:18:50.675398   25204 ssh_runner.go:195] k8s-dns-sidecar-amd64_1.14.13: copied 12207616 bytes
I0723 14:18:50.791965   25204 ssh_runner.go:195] coredns_1.3.1: copied 12306944 bytes
I0723 14:18:51.096026   25204 utils.go:227] > Loaded image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13
I0723 14:18:51.105586   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.13
I0723 14:18:51.161580   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 from cache
I0723 14:18:51.161580   25204 docker.go:97] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0723 14:18:51.166584   25204 ssh_runner.go:101] SSH: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0723 14:18:51.436169   25204 ssh_runner.go:195] storage-provisioner_v1.8.1: copied 20683776 bytes
I0723 14:18:51.642802   25204 utils.go:227] > Loaded image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13
I0723 14:18:51.657796   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0723 14:18:51.691816   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache
I0723 14:18:51.691816   25204 docker.go:97] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.13
I0723 14:18:51.692802   25204 ssh_runner.go:101] SSH: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.13
I0723 14:18:52.050079   25204 ssh_runner.go:195] kube-scheduler_v1.15.0: copied 29871616 bytes
I0723 14:18:52.057068   25204 ssh_runner.go:195] kube-proxy_v1.15.0: copied 30113280 bytes
I0723 14:18:52.089070   25204 ssh_runner.go:195] kube-addon-manager_v9.0: copied 30522368 bytes
I0723 14:18:52.124077   25204 utils.go:227] > Loaded image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13
I0723 14:18:52.143078   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.13
I0723 14:18:52.180084   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 from cache
I0723 14:18:52.181068   25204 docker.go:97] Loading image: /tmp/coredns_1.3.1
I0723 14:18:52.185076   25204 ssh_runner.go:101] SSH: docker load -i /tmp/coredns_1.3.1
I0723 14:18:52.690346   25204 utils.go:227] > Loaded image: k8s.gcr.io/coredns:1.3.1
I0723 14:18:52.707455   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/coredns_1.3.1
I0723 14:18:52.757453   25204 ssh_runner.go:195] kubernetes-dashboard-amd64_v1.10.1: copied 44910592 bytes
I0723 14:18:52.773454   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\coredns_1.3.1 from cache
I0723 14:18:52.774452   25204 docker.go:97] Loading image: /tmp/storage-provisioner_v1.8.1
I0723 14:18:52.774452   25204 ssh_runner.go:101] SSH: docker load -i /tmp/storage-provisioner_v1.8.1
I0723 14:18:52.912222   25204 ssh_runner.go:195] kube-apiserver_v1.15.0: copied 49275904 bytes
I0723 14:18:52.928305   25204 ssh_runner.go:195] kube-controller-manager_v1.15.0: copied 47842304 bytes
I0723 14:18:53.109304   25204 utils.go:227] > Loaded image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0723 14:18:53.127311   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/storage-provisioner_v1.8.1
I0723 14:18:53.173302   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 from cache
I0723 14:18:53.173302   25204 docker.go:97] Loading image: /tmp/kube-scheduler_v1.15.0
I0723 14:18:53.176346   25204 ssh_runner.go:101] SSH: docker load -i /tmp/kube-scheduler_v1.15.0
I0723 14:18:53.226302   25204 ssh_runner.go:195] etcd_3.3.10: copied 76164608 bytes
I0723 14:18:53.409832   25204 utils.go:227] > Loaded image: k8s.gcr.io/kube-scheduler:v1.15.0
I0723 14:18:53.419430   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-scheduler_v1.15.0
I0723 14:18:53.432427   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.15.0 from cache
I0723 14:18:53.433427   25204 docker.go:97] Loading image: /tmp/kube-proxy_v1.15.0
I0723 14:18:53.438426   25204 ssh_runner.go:101] SSH: docker load -i /tmp/kube-proxy_v1.15.0
I0723 14:18:53.646602   25204 utils.go:227] > Loaded image: k8s.gcr.io/kube-proxy:v1.15.0
I0723 14:18:53.651602   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-proxy_v1.15.0
I0723 14:18:53.660601   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.15.0 from cache
I0723 14:18:53.661599   25204 docker.go:97] Loading image: /tmp/kube-addon-manager_v9.0
I0723 14:18:53.661599   25204 ssh_runner.go:101] SSH: docker load -i /tmp/kube-addon-manager_v9.0
I0723 14:18:53.840710   25204 utils.go:227] > Loaded image: k8s.gcr.io/kube-addon-manager:v9.0
I0723 14:18:53.847720   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-addon-manager_v9.0
I0723 14:18:53.858714   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 from cache
I0723 14:18:53.859713   25204 docker.go:97] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1
I0723 14:18:53.859713   25204 ssh_runner.go:101] SSH: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1
I0723 14:18:54.084474   25204 utils.go:227] > Loaded image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
I0723 14:18:54.092476   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
I0723 14:18:54.105472   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 from cache
I0723 14:18:54.105472   25204 docker.go:97] Loading image: /tmp/kube-apiserver_v1.15.0
I0723 14:18:54.105472   25204 ssh_runner.go:101] SSH: docker load -i /tmp/kube-apiserver_v1.15.0
I0723 14:18:54.334117   25204 utils.go:227] > Loaded image: k8s.gcr.io/kube-apiserver:v1.15.0
I0723 14:18:54.342115   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-apiserver_v1.15.0
I0723 14:18:54.357118   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.15.0 from cache
I0723 14:18:54.357118   25204 docker.go:97] Loading image: /tmp/kube-controller-manager_v1.15.0
I0723 14:18:54.358116   25204 ssh_runner.go:101] SSH: docker load -i /tmp/kube-controller-manager_v1.15.0
I0723 14:18:54.631268   25204 utils.go:227] > Loaded image: k8s.gcr.io/kube-controller-manager:v1.15.0
I0723 14:18:54.639271   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-controller-manager_v1.15.0
I0723 14:18:54.651273   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.15.0 from cache
I0723 14:18:54.651273   25204 docker.go:97] Loading image: /tmp/etcd_3.3.10
I0723 14:18:54.652271   25204 ssh_runner.go:101] SSH: docker load -i /tmp/etcd_3.3.10
I0723 14:18:54.987776   25204 utils.go:227] > Loaded image: k8s.gcr.io/etcd:3.3.10
I0723 14:18:54.999808   25204 ssh_runner.go:101] SSH: sudo rm -rf /tmp/etcd_3.3.10
I0723 14:18:55.012769   25204 cache_images.go:228] Successfully loaded image C:\Users\Pranav.Jituri\.minikube\cache\images\k8s.gcr.io\etcd_3.3.10 from cache
I0723 14:18:55.013810   25204 cache_images.go:110] Successfully loaded all cached images.
I0723 14:18:55.015777   25204 kubeadm.go:502] kubelet v1.15.0 config:

[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests

[Install]
I0723 14:18:55.017771   25204 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubelet
I0723 14:18:55.018819   25204 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubeadm
I0723 14:18:55.035824   25204 ssh_runner.go:101] SSH: sudo rm -f /usr/bin/kubelet
I0723 14:18:55.035824   25204 ssh_runner.go:101] SSH: sudo rm -f /usr/bin/kubeadm
I0723 14:18:55.042780   25204 ssh_runner.go:101] SSH: sudo mkdir -p /usr/bin
I0723 14:18:55.042780   25204 ssh_runner.go:101] SSH: sudo mkdir -p /usr/bin
I0723 14:18:55.058774   25204 ssh_runner.go:182] Transferring 119612544 bytes to kubelet
I0723 14:18:55.070775   25204 ssh_runner.go:182] Transferring 40169856 bytes to kubeadm
I0723 14:18:55.702513   25204 ssh_runner.go:195] kubeadm: copied 40169856 bytes
I0723 14:18:56.260152   25204 ssh_runner.go:195] kubelet: copied 119612544 bytes
I0723 14:18:56.268117   25204 ssh_runner.go:101] SSH: sudo rm -f /lib/systemd/system/kubelet.service
I0723 14:18:56.275122   25204 ssh_runner.go:101] SSH: sudo mkdir -p /lib/systemd/system
I0723 14:18:56.286123   25204 ssh_runner.go:182] Transferring 324 bytes to kubelet.service
I0723 14:18:56.287122   25204 ssh_runner.go:195] kubelet.service: copied 324 bytes
I0723 14:18:56.297141   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0723 14:18:56.307121   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/systemd/system/kubelet.service.d
I0723 14:18:56.320143   25204 ssh_runner.go:182] Transferring 473 bytes to 10-kubeadm.conf
I0723 14:18:56.326162   25204 ssh_runner.go:195] 10-kubeadm.conf: copied 473 bytes
I0723 14:18:56.338125   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/kubeadm.yaml
I0723 14:18:56.345126   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib
I0723 14:18:56.361134   25204 ssh_runner.go:182] Transferring 1138 bytes to kubeadm.yaml
I0723 14:18:56.365124   25204 ssh_runner.go:195] kubeadm.yaml: copied 1138 bytes
I0723 14:18:56.370121   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/dashboard-dp.yaml
I0723 14:18:56.379124   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0723 14:18:56.392136   25204 ssh_runner.go:182] Transferring 1570 bytes to dashboard-dp.yaml
I0723 14:18:56.393118   25204 ssh_runner.go:195] dashboard-dp.yaml: copied 1570 bytes
I0723 14:18:56.405125   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/dashboard-svc.yaml
I0723 14:18:56.415135   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0723 14:18:56.432138   25204 ssh_runner.go:182] Transferring 979 bytes to dashboard-svc.yaml
I0723 14:18:56.435137   25204 ssh_runner.go:195] dashboard-svc.yaml: copied 979 bytes
I0723 14:18:56.453129   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0723 14:18:56.460654   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0723 14:18:56.473650   25204 ssh_runner.go:182] Transferring 1709 bytes to storage-provisioner.yaml
I0723 14:18:56.474646   25204 ssh_runner.go:195] storage-provisioner.yaml: copied 1709 bytes
I0723 14:18:56.486651   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/manifests/addon-manager.yaml.tmpl
I0723 14:18:56.495647   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/manifests/
I0723 14:18:56.513185   25204 ssh_runner.go:182] Transferring 1406 bytes to addon-manager.yaml.tmpl
I0723 14:18:56.517226   25204 ssh_runner.go:195] addon-manager.yaml.tmpl: copied 1406 bytes
I0723 14:18:56.524185   25204 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0723 14:18:56.537183   25204 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0723 14:18:56.549206   25204 ssh_runner.go:182] Transferring 271 bytes to storageclass.yaml
I0723 14:18:56.556216   25204 ssh_runner.go:195] storageclass.yaml: copied 271 bytes
I0723 14:18:56.570185   25204 ssh_runner.go:101] SSH:
sudo systemctl daemon-reload &&
sudo systemctl start kubelet
I0723 14:18:56.683188   25204 certs.go:48] Setting up certificates for IP: 172.17.88.150
I0723 14:18:56.802294   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/ca.crt
I0723 14:18:56.809296   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:56.820297   25204 ssh_runner.go:182] Transferring 1066 bytes to ca.crt
I0723 14:18:56.821300   25204 ssh_runner.go:195] ca.crt: copied 1066 bytes
I0723 14:18:56.829898   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/ca.key
I0723 14:18:56.842902   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:56.855897   25204 ssh_runner.go:182] Transferring 1675 bytes to ca.key
I0723 14:18:56.856894   25204 ssh_runner.go:195] ca.key: copied 1675 bytes
I0723 14:18:56.868918   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/apiserver.crt
I0723 14:18:56.880530   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:56.902529   25204 ssh_runner.go:182] Transferring 1298 bytes to apiserver.crt
I0723 14:18:56.906532   25204 ssh_runner.go:195] apiserver.crt: copied 1298 bytes
I0723 14:18:56.917529   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/apiserver.key
I0723 14:18:56.925530   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:56.942056   25204 ssh_runner.go:182] Transferring 1675 bytes to apiserver.key
I0723 14:18:56.950059   25204 ssh_runner.go:195] apiserver.key: copied 1675 bytes
I0723 14:18:56.957053   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
I0723 14:18:56.976076   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:56.989580   25204 ssh_runner.go:182] Transferring 1074 bytes to proxy-client-ca.crt
I0723 14:18:56.990581   25204 ssh_runner.go:195] proxy-client-ca.crt: copied 1074 bytes
I0723 14:18:57.002584   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
I0723 14:18:57.012596   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:57.025606   25204 ssh_runner.go:182] Transferring 1675 bytes to proxy-client-ca.key
I0723 14:18:57.028582   25204 ssh_runner.go:195] proxy-client-ca.key: copied 1675 bytes
I0723 14:18:57.040582   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
I0723 14:18:57.053584   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:57.065581   25204 ssh_runner.go:182] Transferring 1103 bytes to proxy-client.crt
I0723 14:18:57.068605   25204 ssh_runner.go:195] proxy-client.crt: copied 1103 bytes
I0723 14:18:57.082590   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client.key
I0723 14:18:57.092581   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0723 14:18:57.105583   25204 ssh_runner.go:182] Transferring 1675 bytes to proxy-client.key
I0723 14:18:57.106579   25204 ssh_runner.go:195] proxy-client.key: copied 1675 bytes
I0723 14:18:57.118586   25204 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/kubeconfig
I0723 14:18:57.132601   25204 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube
I0723 14:18:57.142579   25204 ssh_runner.go:182] Transferring 428 bytes to kubeconfig
I0723 14:18:57.144617   25204 ssh_runner.go:195] kubeconfig: copied 428 bytes
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
I0723 14:18:59.072191   25204 kubeconfig.go:127] Using kubeconfig:  C:\Users\Pranav.Jituri/.kube/config
* Relaunching Kubernetes v1.15.0 using kubeadm ...
I0723 14:18:59.094208   25204 ssh_runner.go:101] SSH: sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml
I0723 14:18:59.147185   25204 utils.go:227] > [certs] Using certificateDir folder "/var/lib/minikube/certs/"
I0723 14:18:59.150189   25204 utils.go:227] > [certs] Using existing etcd/ca certificate authority
I0723 14:18:59.151184   25204 utils.go:227] > [certs] Using existing etcd/server certificate and key on disk
I0723 14:18:59.152185   25204 utils.go:227] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0723 14:18:59.154187   25204 utils.go:227] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0723 14:18:59.155186   25204 utils.go:227] > [certs] Using existing etcd/peer certificate and key on disk
I0723 14:18:59.156183   25204 utils.go:227] > [certs] Using existing ca certificate authority
I0723 14:18:59.156183   25204 utils.go:227] > [certs] Using existing apiserver certificate and key on disk
I0723 14:18:59.160205   25204 utils.go:227] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0723 14:18:59.165203   25204 utils.go:227] > [certs] Using existing front-proxy-ca certificate authority
I0723 14:18:59.166183   25204 utils.go:227] > [certs] Using existing front-proxy-client certificate and key on disk
I0723 14:18:59.167185   25204 utils.go:227] > [certs] Using the existing "sa" key
I0723 14:18:59.168719   25204 ssh_runner.go:101] SSH: sudo kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
I0723 14:18:59.215714   25204 utils.go:227] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0723 14:18:59.341607   25204 utils.go:227] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0723 14:18:59.604095   25204 utils.go:227] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0723 14:19:00.018195   25204 utils.go:227] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0723 14:19:00.474314   25204 utils.go:227] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0723 14:19:00.478308   25204 ssh_runner.go:101] SSH: sudo kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
I0723 14:19:00.517316   25204 utils.go:227] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0723 14:19:00.518314   25204 utils.go:227] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0723 14:19:00.530336   25204 utils.go:227] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0723 14:19:00.532403   25204 utils.go:227] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0723 14:19:00.537315   25204 ssh_runner.go:101] SSH: sudo kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
I0723 14:19:00.597310   25204 utils.go:227] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0723 14:19:00.611313   25204 kubeadm.go:382] Waiting for apiserver ...
I0723 14:19:01.618497   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:01.619499   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:02.923363   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:02.924235   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:04.125071   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:04.125939   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:05.325634   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:05.326603   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:06.525507   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:06.526386   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:07.725059   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:07.725932   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:08.922667   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:08.923535   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:10.129053   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:10.129053   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:11.323131   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:11.323131   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:12.523338   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:12.524253   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:13.723406   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:13.723406   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:14.923987   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:14.924862   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:16.124010   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:16.124973   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:17.322210   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:17.323120   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:18.525878   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:18.526748   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:19.723721   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:19.723721   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:20.925367   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:20.926369   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:22.123865   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: Get https://172.17.88.150:8443/healthz: dial tcp 172.17.88.150:8443: connectex: No connection could be made because the target machine actively refused it. <nil>
I0723 14:19:22.123865   25204 kubeadm.go:385] apiserver status: Stopped, err: <nil>
I0723 14:19:29.240790   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:403 Forbidden StatusCode:403 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[192] Content-Type:[application/json] Date:[Tue, 23 Jul 2019 08:49:29 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00007fd80 ContentLength:192 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036f900 TLS:0xc00001a840}
I0723 14:19:29.241786   25204 kubeadm.go:385] apiserver status: Error, err: <nil>
I0723 14:19:29.543383   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[856] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 23 Jul 2019 08:49:29 GMT] X-Content-Type-Options:[nosniff]] Body:0xc000020c00 ContentLength:856 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036fa00 TLS:0xc000022dc0}
I0723 14:19:29.544374   25204 kubeadm.go:385] apiserver status: Error, err: <nil>
I0723 14:19:29.829380   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[856] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 23 Jul 2019 08:49:29 GMT] X-Content-Type-Options:[nosniff]] Body:0xc000020cc0 ContentLength:856 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003e7000 TLS:0xc000022e70}
I0723 14:19:29.831375   25204 kubeadm.go:385] apiserver status: Error, err: <nil>
I0723 14:19:30.144158   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[814] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 23 Jul 2019 08:49:30 GMT] X-Content-Type-Options:[nosniff]] Body:0xc0006c0600 ContentLength:814 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036fc00 TLS:0xc00001abb0}
I0723 14:19:30.145155   25204 kubeadm.go:385] apiserver status: Error, err: <nil>
I0723 14:19:30.445874   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[814] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 23 Jul 2019 08:49:30 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00045dd80 ContentLength:814 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003e7100 TLS:0xc00001ac60}
I0723 14:19:30.446869   25204 kubeadm.go:385] apiserver status: Error, err: <nil>
I0723 14:19:30.734567   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 23 Jul 2019 08:49:30 GMT] X-Content-Type-Options:[nosniff]] Body:0xc0006c06c0 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003e7200 TLS:0xc00001ad10}
I0723 14:19:30.734567   25204 kubeadm.go:385] apiserver status: Running, err: <nil>
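The lines above show minikube polling `/healthz` until it returns 200 (status goes Stopped → Error → Running). The loop is roughly equivalent to the following sketch — a simplification for illustration, not minikube's actual code:

```python
import time
from typing import Callable

def wait_for_healthz(check: Callable[[], str], timeout: float = 120.0,
                     interval: float = 0.3) -> bool:
    """Poll a health check until it reports 'Running' or the timeout expires.

    `check` stands in for the HTTPS GET against /healthz seen in the log:
    it returns "Stopped" (connection refused), "Error" (403/500), or
    "Running" (200 OK).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check() == "Running":
            return True
        time.sleep(interval)
    return False

# The log above follows the same progression: Stopped -> Error -> Running.
states = iter(["Stopped", "Stopped", "Error", "Error", "Running"])
print(wait_for_healthz(lambda: next(states), interval=0.0))  # -> True
```

Note that each failed probe adds a sleep before the next attempt, so the wait contributes a noticeable slice of the startup time reported in this issue.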
I0723 14:19:30.737579   25204 ssh_runner.go:101] SSH: sudo kubeadm init phase addon all --config /var/lib/kubeadm.yaml
I0723 14:19:30.943704   25204 utils.go:227] > [addons] Applied essential addon: CoreDNS
I0723 14:19:31.087238   25204 utils.go:227] > [addons] Applied essential addon: kube-proxy
I0723 14:19:31.089258   25204 ssh_runner.go:137] Run with output: cat /proc/$(pgrep kube-apiserver)/oom_adj
I0723 14:19:31.112239   25204 utils.go:227] > 16
I0723 14:19:31.112239   25204 kubeadm.go:258] apiserver oom_adj: 16
I0723 14:19:31.116243   25204 kubeadm.go:263] adjusting apiserver oom_adj to -10
I0723 14:19:31.117240   25204 ssh_runner.go:101] SSH: echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj
I0723 14:19:31.182238   25204 utils.go:227] > -10
* Verifying: apiserver
I0723 14:19:31.193236   25204 loader.go:359] Config loaded from file C:\Users\Pranav.Jituri/.kube/config
I0723 14:19:31.234238   25204 kubeadm.go:382] Waiting for apiserver ...
I0723 14:19:31.293248   25204 kubeadm.go:143] https://172.17.88.150:8443/healthz response: <nil> &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 23 Jul 2019 08:49:31 GMT] X-Content-Type-Options:[nosniff]] Body:0xc0000208c0 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003e6600 TLS:0xc00001a000}
I0723 14:19:31.295258   25204 kubeadm.go:385] apiserver status: Running, err: <nil>
proxy
I0723 14:19:31.303235   25204 kubernetes.go:125] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
I0723 14:19:31.305234   25204 round_trippers.go:383] GET https://172.17.88.150:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy
I0723 14:19:31.305234   25204 round_trippers.go:390] Request Headers:
I0723 14:19:31.305234   25204 round_trippers.go:393]     Accept: application/json, */*
I0723 14:19:31.305234   25204 round_trippers.go:393]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0723 14:19:31.337240   25204 round_trippers.go:408] Response Status: 200 OK in 31 milliseconds
I0723 14:19:31.337240   25204 round_trippers.go:411] Response Headers:
I0723 14:19:31.339245   25204 round_trippers.go:414]     Content-Type: application/json
I0723 14:19:31.340245   25204 round_trippers.go:414]     Date: Tue, 23 Jul 2019 08:49:31 GMT
I0723 14:19:31.341242   25204 request.go:897] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"74254"},"items":[{"metadata":{"name":"kube-proxy-62m5k","generateName":"kube-proxy-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/kube-proxy-62m5k","uid":"a6a89899-58be-40d5-aeaa-af07a0873c2c","resourceVersion":"71340","creationTimestamp":"2019-07-12T08:31:51Z","labels":{"controller-revision-hash":"7bdbc788b8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d23fd4f-5832-41e2-98fd-b122b6854c90","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"kube-proxy","configMap":{"name":"kube-proxy","defaultMode":420}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","type":"FileOrCreate"}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"kube-proxy-token-knrpk","secret":{"secretName":"kube-proxy-token-knrpk","defaultMod [truncated 3209 chars]
I0723 14:19:31.361240   25204 kubernetes.go:136] Found 1 Pods for label selector k8s-app=kube-proxy
etcd
I0723 14:19:31.363243   25204 kubernetes.go:125] Waiting for pod with label "kube-system" in ns "component=etcd" ...
I0723 14:19:31.364268   25204 round_trippers.go:383] GET https://172.17.88.150:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd
I0723 14:19:31.364268   25204 round_trippers.go:390] Request Headers:
I0723 14:19:31.365256   25204 round_trippers.go:393]     Accept: application/json, */*
I0723 14:19:31.372271   25204 round_trippers.go:393]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0723 14:19:31.416235   25204 round_trippers.go:408] Response Status: 200 OK in 38 milliseconds
I0723 14:19:31.418251   25204 round_trippers.go:411] Response Headers:
I0723 14:19:31.419237   25204 round_trippers.go:414]     Content-Type: application/json
I0723 14:19:31.419237   25204 round_trippers.go:414]     Content-Length: 3657
I0723 14:19:31.420242   25204 round_trippers.go:414]     Date: Tue, 23 Jul 2019 08:49:31 GMT
I0723 14:19:31.421237   25204 request.go:897] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"74254"},"items":[{"metadata":{"name":"etcd-minikube","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/etcd-minikube","uid":"f73f9a71-36f9-4222-885e-8ad1746cb455","resourceVersion":"71319","creationTimestamp":"2019-07-22T20:44:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d16cee3b4c18c78efd508db5edbe0302","kubernetes.io/config.mirror":"d16cee3b4c18c78efd508db5edbe0302","kubernetes.io/config.seen":"2019-07-22T20:44:02.7035721Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"etcd-certs","hostPath":{"path":"/var/lib/minikube/certs//etcd","type":"DirectoryOrCreate"}},{"name":"etcd-data","hostPath":{"path":"/data/minikube","type":"DirectoryOrCreate"}}],"containers":[{"name":"etcd","image":"k8s.gcr.io/etcd:3.3.10","command":["etcd","--advertise-client-urls=https://172.17.88.150:2379","--cert-file=/var/l [truncated 2633 chars]
I0723 14:19:31.423239   25204 kubernetes.go:136] Found 1 Pods for label selector component=etcd
scheduler
I0723 14:19:31.423239   25204 kubernetes.go:125] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
I0723 14:19:31.424241   25204 round_trippers.go:383] GET https://172.17.88.150:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler
I0723 14:19:31.424241   25204 round_trippers.go:393]     Accept: application/json, */*
I0723 14:19:31.426242   25204 round_trippers.go:393]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0723 14:19:31.476781   25204 round_trippers.go:408] Response Status: 200 OK in 50 milliseconds
I0723 14:19:31.476781   25204 round_trippers.go:411] Response Headers:
I0723 14:19:31.479784   25204 round_trippers.go:414]     Content-Type: application/json
I0723 14:19:31.479784   25204 round_trippers.go:414]     Content-Length: 3004
I0723 14:19:31.480779   25204 round_trippers.go:414]     Date: Tue, 23 Jul 2019 08:49:31 GMT
I0723 14:19:31.481778   25204 request.go:897] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"74254"},"items":[{"metadata":{"name":"kube-scheduler-minikube","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube","uid":"08969c0e-d537-4b88-b24e-74584ad79694","resourceVersion":"74226","creationTimestamp":"2019-07-12T08:32:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"31d9ee8b7fb12e797dc981a8686f6b2b","kubernetes.io/config.mirror":"31d9ee8b7fb12e797dc981a8686f6b2b","kubernetes.io/config.seen":"2019-07-12T08:31:06.8782529Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"kubeconfig","hostPath":{"path":"/etc/kubernetes/scheduler.conf","type":"FileOrCreate"}}],"containers":[{"name":"kube-scheduler","image":"k8s.gcr.io/kube-scheduler:v1.15.0","command":["kube-scheduler","--bind-address=127.0.0.1","--kubeconfig=/etc/kubernetes/scheduler.conf","--leader-elect=true"],"res [truncated 1980 chars]
I0723 14:19:31.485797   25204 kubernetes.go:136] Found 1 Pods for label selector component=kube-scheduler
controller
I0723 14:19:31.491359   25204 kubernetes.go:125] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
I0723 14:19:31.493364   25204 round_trippers.go:383] GET https://172.17.88.150:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager
I0723 14:19:31.496399   25204 round_trippers.go:390] Request Headers:
I0723 14:19:31.497384   25204 round_trippers.go:393]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0723 14:19:31.497384   25204 round_trippers.go:393]     Accept: application/json, */*
I0723 14:19:31.551363   25204 round_trippers.go:408] Response Status: 200 OK in 52 milliseconds
I0723 14:19:31.552364   25204 round_trippers.go:411] Response Headers:
I0723 14:19:31.554370   25204 round_trippers.go:414]     Content-Type: application/json
I0723 14:19:31.554370   25204 round_trippers.go:414]     Date: Tue, 23 Jul 2019 08:49:31 GMT
I0723 14:19:31.555366   25204 request.go:897] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"74255"},"items":[{"metadata":{"name":"kube-controller-manager-minikube","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube","uid":"c60c26ff-dc1e-4783-a733-a3b5b6defc3e","resourceVersion":"74230","creationTimestamp":"2019-07-13T18:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"676a8a1e3e146d0c0f7c4f6e1e96b578","kubernetes.io/config.mirror":"676a8a1e3e146d0c0f7c4f6e1e96b578","kubernetes.io/config.seen":"2019-07-13T18:15:22.5379551Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"ca-certs","hostPath":{"path":"/etc/ssl/certs","type":"DirectoryOrCreate"}},{"name":"k8s-certs","hostPath":{"path":"/var/lib/minikube/certs/","type":"DirectoryOrCreate"}},{"name":"kubeconfig","hostPath":{"path":"/etc/kubernetes/controller-manager.conf","type":"FileOrCreate" [truncated 3205 chars]
I0723 14:19:31.557358   25204 kubernetes.go:136] Found 1 Pods for label selector component=kube-controller-manager
dns
I0723 14:19:31.558357   25204 kubernetes.go:125] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
I0723 14:19:31.558357   25204 round_trippers.go:383] GET https://172.17.88.150:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns
I0723 14:19:31.559357   25204 round_trippers.go:390] Request Headers:
I0723 14:19:31.559357   25204 round_trippers.go:393]     Accept: application/json, */*
I0723 14:19:31.559357   25204 round_trippers.go:393]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0723 14:19:31.583373   25204 round_trippers.go:408] Response Status: 200 OK in 24 milliseconds
I0723 14:19:31.583373   25204 round_trippers.go:411] Response Headers:
I0723 14:19:31.589359   25204 round_trippers.go:414]     Content-Type: application/json
I0723 14:19:31.589359   25204 round_trippers.go:414]     Date: Tue, 23 Jul 2019 08:49:31 GMT
I0723 14:19:31.590357   25204 request.go:897] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"74255"},"items":[{"metadata":{"name":"coredns-5c98db65d4-5t8qw","generateName":"coredns-5c98db65d4-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/coredns-5c98db65d4-5t8qw","uid":"4961d461-7e4a-4988-9417-c14671dfa86e","resourceVersion":"71459","creationTimestamp":"2019-07-12T08:31:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5c98db65d4"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5c98db65d4","uid":"a847b7f2-5011-465a-b6d2-6e0be296832c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns","items":[{"key":"Corefile","path":"Corefile"}],"defaultMode":420}},{"name":"coredns-token-65pqz","secret":{"secretName":"coredns-token-65pqz","defaultMode":420}}],"containers":[{"name":"coredns","image":"k8s.gcr.io/coredns:1.3.1","args":["-conf","/etc/coredns/Corefile"]," [truncated 6803 chars]
I0723 14:19:31.594356   25204 kubernetes.go:136] Found 2 Pods for label selector k8s-app=kube-dns

* Done! kubectl is now configured to use minikube
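For anyone profiling their own runs, the `--alsologtostderr` timestamps above can be diffed to see where the time goes. A rough sketch — the `Immdd HH:MM:SS.ffffff` prefix format is an assumption inferred from the lines above, and it assumes both lines come from the same run and year:

```python
import re
from datetime import datetime, timedelta

# Matches the glog-style prefix seen above, e.g. "I0723 14:19:31.193236".
PREFIX = re.compile(r"^[IWEF](\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)")

def glog_time(line: str) -> datetime:
    """Parse the timestamp out of one glog line."""
    m = PREFIX.match(line)
    if not m:
        raise ValueError(f"not a glog line: {line!r}")
    month, day, clock = m.groups()
    return datetime.strptime(f"{month} {day} {clock}", "%m %d %H:%M:%S.%f")

def elapsed(first: str, last: str) -> timedelta:
    """Wall-clock time between two glog lines from the same run."""
    return glog_time(last) - glog_time(first)

# Example: the healthz wait in the log above runs from 14:19:20.925 to 14:19:30.734.
start = "I0723 14:19:20.925367   25204 kubeadm.go:143] ..."
done  = "I0723 14:19:30.734567   25204 kubeadm.go:385] apiserver status: Running, err: <nil>"
print(elapsed(start, done))  # -> 0:00:09.809200
```

So roughly 10 of the ~150 seconds in this log were spent waiting on `/healthz` alone.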

afbjorklund commented Jul 23, 2019

Basically we are missing "start" and "stop" commands; we really only have create and shutdown...
A long-term option is to put the VM to sleep; a shorter-term one is to add start/stop to the bootstrapper.

If you disable the image cache (--cache-images=false) for the second start, does that help at all?
If it does, we might consider skipping it when the images are already present, as we do for the none driver.

EDIT: Loading cached images took about 6 seconds, according to the log (14:18:49 --> 14:18:55).
The bulk of the time is spent booting (and provisioning) the VM, and then running the bootstrapper.
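The cache hypothesis above can be checked empirically by timing repeated starts with the flag toggled. A minimal sketch — `timed_run` is a hypothetical helper, the `--cache-images` flag name is taken from the comment above, and the minikube invocations are guarded so the script is a no-op on machines without minikube on PATH:

```python
import shlex
import shutil
import subprocess
import time

def timed_run(cmd: str) -> float:
    """Run a command and return its wall-clock duration in seconds."""
    t0 = time.monotonic()
    subprocess.run(shlex.split(cmd), check=True)
    return time.monotonic() - t0

# Hypothetical experiment: each start is preceded by a stop, so the VM
# already exists but is powered off -- the exact scenario in this issue.
if shutil.which("minikube"):
    for flags in ("", "--cache-images=false"):
        subprocess.run(["minikube", "stop"], check=True)
        secs = timed_run(f"minikube start {flags}".strip())
        print(flags or "(default)", f"{secs:.1f}s")
```

If the two timings are close, the cache step is not the bottleneck, which matches the log analysis above.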

@afbjorklund afbjorklund added area/performance Performance related issues help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Jul 23, 2019
@afbjorklund

See #4184 and #4622

@blueelvis

/assign

@tstromberg tstromberg added the kind/bug Categorizes issue or PR as related to a bug. label Sep 20, 2019
@tstromberg

This issue appears to be a duplicate of #4184, do you mind if we move the conversation there?

This way we can centralize the content relating to the issue. If you feel that this issue is not in fact a duplicate, please re-open it using /reopen. If you have additional information to share, please add it to the new issue.

Thank you for reporting this!
