controller-manager is NOT AVAILABLE #221
I'm seeing the same problem. To debug, I created a new cluster, but did not enable VPC-native this time. This time the controller-manager worked. So there is some sort of permission or connectivity issue between the controller-manager and the Kubernetes master.
My cluster that failed sc install enabled VPC-native too.
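To reproduce that comparison, a second cluster without VPC-native (alias IP) networking can be created alongside the failing one. A hedged sketch; the cluster name and zone are placeholders, and `--no-enable-ip-alias` is the gcloud flag that disables VPC-native mode:

```shell
# Placeholder name and zone; adjust for your project.
# Create a test cluster WITHOUT VPC-native (alias IP) networking.
gcloud container clusters create sc-test-no-vpc \
  --zone us-central1-a \
  --no-enable-ip-alias

# For comparison, a VPC-native cluster is created with --enable-ip-alias.
```

If `sc install` succeeds on the non-VPC-native cluster but not the other, that points at the same networking-dependent failure described above.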
Hello,
This is a known issue with the resource quota controller. According to
Walter Fender (wfender@):
The latest bug here is that the ResourceQuotaController is returning an
error when it gets a 5XX result from a discovery request. The error
returned from the controller is (correctly) not handled by the Controller
Manager. The fix here is to make the ResourceQuotaController resilient to
a 5XX result from discovery.
Both the 1.10 cherry pick (
kubernetes/kubernetes#67155
<https://github.com/kubernetes/kubernetes/pull/67155>)
and the 1.11 cherry pick (
kubernetes/kubernetes#67154
<https://github.com/kubernetes/kubernetes/pull/67154>)
have merged.
So in order to fix this issue, update your cluster to 1.10.8 or 1.11.3.
These patches were released months ago.
Sean
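Following the advice above, the fix is to move the master onto a release that carries those cherry picks. A hedged sketch with placeholder cluster name and zone:

```shell
# See which master versions are currently offered (placeholder zone).
gcloud container get-server-config --zone us-central1-a

# Upgrade the master to a patched release (1.10.8+ or 1.11.3+).
gcloud container clusters upgrade my-cluster \
  --zone us-central1-a \
  --master \
  --cluster-version 1.11.3
```

Note that `--master` upgrades only the control plane; node pools can be upgraded separately afterwards.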
…On Sat, Jan 5, 2019 at 8:36 PM Spring_MT ***@***.***> wrote:
My cluster that failed sc install enabled VPC-native too.
I'm using 1.11.5-gke.4, however the issue happened 😢 .
Got it. Looking into this now. Would it be possible to get a hash of the
cluster this is failing on, so we can debug it?
Thanks,
Sean
…On Mon, Jan 7, 2019 at 2:50 AM Spring_MT ***@***.***> wrote:
I'm using 1.11.5-gke.4, however the issue happened 😢 .
I was running 1.11.5-gke.5 (I've since rebuilt the cluster to turn off VPC-native).
@kibbles-n-bytes @martinmaly Do we know which version of the controller manager the service catalog is now using? Do we know which image version of the controller manager the sc tool is installing? It looks like #66932 (Include unavailable API services in discovery response) is related. What do you guys think?
The version sc is installing is older; the crashloop does appear to be fixed by using a newer version. Steps I took to test:
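The actual test steps did not survive this thread's rendering. One way to try a newer controller-manager image is to patch the deployment in place; the namespace, deployment, and container names below are assumptions, and `NEW_IMAGE` is a placeholder, not a version confirmed in the thread:

```shell
# Find the actual deployment first (namespace below is a guess).
kubectl get deployments --all-namespaces | grep -i controller-manager

# Swap in a newer image (NEW_IMAGE is a placeholder you must fill in).
kubectl -n service-catalog set image deployment/controller-manager \
  controller-manager=NEW_IMAGE

# Watch whether the pod recovers or keeps crash-looping.
kubectl -n service-catalog get pods -w
```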
I have what the OP issue describes occurring with a fresh install.
Rebuilt with a different cluster version.
Okay, I created a different cluster (NOT VPC-native), and it succeeds. It seems I have the issue described above; I will try using a newer version of the service catalog as @jo2y suggests.
Same problem with a VPC-native cluster.
I managed to make it work by adding a ClusterRole and changing the image to the one mentioned by @jo2y (thank you); however, after that
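The exact ClusterRole from this workaround was lost in the thread's rendering. A hedged sketch of the shape such a workaround usually takes; the role name, rules, service account, and namespace below are placeholder guesses, not the poster's actual manifest:

```shell
# Hypothetical ClusterRole/ClusterRoleBinding; names, verbs, and the
# subject are guesses to illustrate the shape, not the confirmed fix.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-catalog-controller-extra
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: service-catalog-controller-extra
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: service-catalog-controller-extra
subjects:
- kind: ServiceAccount
  name: service-catalog-controller-manager   # placeholder SA name
  namespace: service-catalog                 # placeholder namespace
EOF
```

Compare the controller-manager pod's RBAC denial messages in its log against the rules you grant; the verbs and resources above would need to match whatever the log reports as forbidden.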
When `sc install` is executed, the controller-manager is not AVAILABLE. The controller-manager pod is in CrashLoopBackOff.
The error message is below.
I'm using GKE and execute the tutorial for installing service catalog.
https://cloud.google.com/kubernetes-engine/docs/how-to/add-on/service-catalog/install-service-catalog
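When the pod is in CrashLoopBackOff, the controller-manager's own log usually shows the discovery error discussed later in the thread. A hedged sketch for pulling that log; the namespace and label selector are assumptions:

```shell
# Locate the failing pod (namespace is a guess; list everywhere if unsure).
kubectl get pods --all-namespaces | grep controller-manager

# Inspect restart events and the crashed container's last log.
kubectl -n service-catalog describe pod -l app=controller-manager
kubectl -n service-catalog logs -l app=controller-manager --previous
```

`--previous` is what surfaces the output of the container instance that crashed, rather than the freshly restarted one.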
sc version
kubernetes version