Failed to push some refs #14
From @dumyan on June 14, 2017 16:38 The push is rejected because of the In short, you should inspect
From @Overdrivr on June 15, 2017 6:1 Thanks for the reply. The registry pod is filled with info messages; is this the default behavior?
The deis-builder log is maybe more interesting
Any idea what is causing the handshake to fail?
From @bacongobbler on June 15, 2017 6:2 Those handshake failures are just the healthcheck probe. It's a red herring. What you want to be looking at is the registry logs, as that's where the 500 error came from.
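For anyone following along, a minimal way to pull those registry logs, assuming Workflow is installed in its default `deis` namespace and the registry pod name starts with `deis-registry` (both are the chart defaults; adjust if yours differ):

```sh
# List the Workflow pods to find the exact registry pod name
# (assumes the default "deis" namespace).
kubectl --namespace deis get pods

# Tail the registry logs while retrying the push; substitute the real pod name
# from the listing above.
kubectl --namespace deis logs -f deis-registry-<pod-suffix>
```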
From @Overdrivr on June 15, 2017 6:30 Here you go, registry logs with HTTP 500 error
From @bacongobbler on June 15, 2017 6:37 Sounds like it's having a hard time uploading images to the backing storage driver as it's barking during PUT operations. I'd check your object storage logs and see if it's crashing, or perhaps your cluster is misconfigured and the registry is unable to upload blobs to object storage.
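A quick sketch for checking the in-cluster object store, assuming the stock minio that Workflow deploys and its default `app=deis-minio` pod label (skip this if you use off-cluster storage):

```sh
# See whether the minio pod is restarting or failing its probes
# (label and namespace are the assumed Workflow defaults).
kubectl --namespace deis describe pod -l app=deis-minio

# Inspect the minio logs around the time of the failed push.
kubectl --namespace deis logs -l app=deis-minio
```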
From @Overdrivr on June 15, 2017 6:51
It's weird that minio is using region us-east-1, because my cluster is located in europe-west1-b. Could this be the problem?
From @bacongobbler on June 15, 2017 14:11 Minio fakes out and simulates that it's in us-east-1 to mimic the S3 API. It doesn't accurately reflect where it's deployed (in the cluster).
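To illustrate: S3-compatible clients reach minio through an explicit endpoint, and the region string is just what minio reports to look like S3. A hedged example with the AWS CLI, where the service hostname, port, and credentials are assumptions rather than values taken from this cluster:

```sh
# Credentials must be the minio access/secret keys (assumed to be exported here).
export AWS_ACCESS_KEY_ID=<minio-access-key>
export AWS_SECRET_ACCESS_KEY=<minio-secret-key>

# List buckets on the in-cluster minio. --endpoint-url points at the minio
# service; --region stays us-east-1 because that is the region minio pretends
# to be in, regardless of where the cluster actually runs.
aws s3 ls \
  --endpoint-url http://deis-minio.deis.svc.cluster.local:9000 \
  --region us-east-1
```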
From @EamonZhang on July 4, 2017 7:45 +1
From @wikkid on August 7, 2017 15:12 Any updates on this? Got exactly the same problem.
From @robeferre on August 8, 2017 16:55 Hi guys, any update on this? I'm having the same issue.
From @Overdrivr on August 9, 2017 4:25 I wasn't sure how to debug this further, so no updates so far. I wasn't seeing anything in my object storage logs.
From @galbacarys on August 18, 2017 16:45 Having watched this carefully (I know, great way to debug things) I noticed something interesting: it looks like some of the minio requests are going through, but then the connection gets dropped somewhere (the object is too big?) and minio barfs and stops taking any more data.
From @dumyan on August 18, 2017 21:59 Hey guys, you can try with a newer minio image - just edit the minio deployment. Rereading this made me vaguely remember that I had some issues with the minio version that deis deploys by default.
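A minimal sketch of that edit, assuming the deployment is called `deis-minio` in the `deis` namespace (the Workflow defaults) and that the container inside it is named `minio`; the tag below is a placeholder, not a tested recommendation:

```sh
# Swap the minio container onto a newer upstream image.
# Replace <newer-tag> with the tag you want to try.
kubectl --namespace deis set image deployment/deis-minio minio=minio/minio:<newer-tag>

# Or edit the deployment by hand and change the image field.
kubectl --namespace deis edit deployment deis-minio
```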
From @battlemidget on August 19, 2017 20:16 @dumyan What new minio image are you referring to?
From @dumyan on August 20, 2017 12:10 @battlemidget the OP is using minio as a storage backend for the registry. The default image for
From @battlemidget on August 20, 2017 20:21 @dumyan thanks, I'm hitting this same issue, I'll give that a try.
From @battlemidget on August 21, 2017 17:27 @dumyan Ok, so not exactly the same issue as this one, it seems. I am using off-cluster storage (s3) and our distribution of Kubernetes uses the flannel CNI, so I just needed to enable that. FYI, we are heavily promoting Deis Workflow as part of our core addons for our Canonical Distribution of Kubernetes; my first blog post on it is here http://blog.astokes.org/conjure-up-dev-summary-aws-cloud-native-integration-and-vsphere-3/#addons, and the application is https://conjure-up.io. cc @bacongobbler Thanks for the help!
From @bacongobbler on August 21, 2017 18:52 @battlemidget you may want to read our latest blog post. https://deis.com/blog/2017/deis-workflow-final-release/
From @battlemidget on August 21, 2017 18:58 @bacongobbler well that is disappointing :(
From @Overdrivr on June 14, 2017 6:39
I'm trying to deploy from GitLab CI to Deis using a buildpack (I cannot use Docker-based builds because of #823).
Authentication to deis from the gitlab-ci runner works fine and the code is built successfully, but at the end of the deployment I get the following error message.
My cluster is GKE with 3 n1-standard-1 nodes, with 50 GB of storage each.
All pods are running properly.
How can I debug this?
Copied from original issue: deis/workflow#826