apiserver: stopped - connection to the server x:8443 was refused #3649
Hey @kamilgregorczyk - sorry about minikube not working out for you so far. I suspect that the apiserver is being evicted, due to running out of resources for the footprint you are attempting to deploy, but we don't do a good job of showing it. Do you mind outputting some logs for me, and perhaps steps to replicate? It'd really help us to stabilize this.
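For anyone hitting this, the maintainer's hypothesis (pods being evicted under resource pressure) can be checked with a few commands. This is a sketch, not from the original thread; it assumes a running minikube cluster with the default node name `minikube`:

```shell
# Capture minikube's own logs to attach to a bug report:
minikube logs > minikube-logs.txt

# Look for pods that the kubelet's eviction manager removed:
kubectl get events --all-namespaces --field-selector reason=Evicted

# Inspect node conditions (MemoryPressure / DiskPressure / PIDPressure):
kubectl describe node minikube
```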
and:
Thanks so much for your bug report!
Same thing happens to me when running
Won't work as the
Returns
When looking further into by
Why do we see evictions with What's also maybe helpful is
and
Is that maybe the clue? So we run out of storage and the eviction manager starts to evict pods, and the apiserver happens to be one of many? I think that would make sense...somehow... I just don't get why storage is an issue:
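If the eviction manager really is reacting to disk pressure, the free space inside the VM can be checked directly. A sketch (assumes a VM-based driver so that `minikube ssh` works, and Docker's default storage path):

```shell
# Free space on the partition holding Docker's image layers:
minikube ssh -- df -h /var/lib/docker

# Reclaim space by removing unused containers and images inside the VM:
minikube ssh -- docker system prune -f
```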
I should have at least 4G available for my Docker images.
@tstromberg You might be right, I had istio running. I can't give you logs as I deleted my VM and moved to GCP with my experiments. I did:
and after some time I started getting these dropped connections.
I was able to reproduce the issue. So it seems by default I have in To replicate, try this:
The output of
And sometimes like this:
This is when the apiserver got evicted. When then running
minikube is constantly crashing. Please give some solutions.
What was the solution?
Had the same happen to me recently, minikube was running with
I faced the same thing, after restarting one my own pod, minikube apiserver turned to
I also faced this issue after I installed istio. I had to delete my minikube and create it again. Fingers crossed it won't happen again.
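The delete-and-recreate workaround mentioned in this thread boils down to two commands (sketch; both are standard minikube subcommands):

```shell
minikube delete   # discard the broken VM and its cluster state
minikube start    # create a fresh cluster from scratch
```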
After installing , apiserver is not coming up (apiserver: Stopped).
(The rest of this comment pasted the full `kubectl --help` and `minikube --help` command listings, omitted here.)
stderr:
X Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: a
(dozens of goroutine stack-trace lines, truncated at the terminal width, referencing k8s.io/apimachinery/pkg/util/wait, k8s.io/client-go/tools/cache, k8s.io/client-go/util/workqueue, and k8s.io/apiserver/pkg/server, omitted here)
! unable to fetch logs for: describe nodes
@prabsdubey It's better to reference a pastebin link (or similar) instead of pasting such large logs in the comment.
I faced the same issue. I just restarted minikube and the issue was solved.
Minikube is constantly resetting/crashing/doing something with the Kubernetes API server. After running minikube, increasing cores to 8 and RAM to 20 GB, and installing helm and istio, I'm getting random crashes which are super frustrating. Here are the minikube logs: https://pastebin.com/fiPNnjz9

Environment:
Minikube version: v0.33.1
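One way to pre-empt evictions like the ones described in this issue is to give the VM more resources at start time. A sketch using `minikube start`'s resource flags (the exact sizes here are illustrative, not a recommendation from the thread):

```shell
minikube start --cpus=8 --memory=20480 --disk-size=40g
```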