This repository has been archived by the owner on Nov 30, 2021. It is now read-only.
I realise this project is not being actively maintained anymore, but figured it was worth asking here as I'm really stumped.

Through some fat-fingering, an invalid `command` parameter got into the deployment for a worker process in a Deis app we've got running. Now whenever we run a `deis pull` for a new image, this broken parameter gets passed to the deployment, so the worker doesn't start up successfully.
If I inspect the deployment with kubectl, I can see the following parameter set on the worker's container (JSON path `/spec/template/spec/containers/0`):

```json
"command": [
  "/bin/bash",
  "-c"
],
```
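For anyone checking whether the stray field is present after each pull, a jsonpath query against the deployment prints just that array; `<app>` and `<app-namespace>` are placeholders for the app name and the namespace Deis deployed it into:

```shell
# Print only the container's command array; empty output means the field is gone.
# <app> and <app-namespace> are placeholders -- adjust to match your cluster.
kubectl get deployment <app>-worker -n <app-namespace> \
  -o jsonpath='{.spec.template.spec.containers[0].command}'
```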
Which results in the pod not starting up properly:

```
Error: failed to start container "worker": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory"
Error syncing pod
Back-off restarting failed container
```
This means that for every release/pull I've been going in and manually removing that parameter from the worker deployment. I've run `kubectl delete deployment` and recreated it with valid JSON (`kubectl create -f deployment.json`). That fixes things until I run `deis pull` again, at which point the broken parameter is back.
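As a sketch of a less destructive workaround than delete/recreate, a JSON patch can strip just the offending field in place (deployment name and namespace are placeholders, not from my actual cluster):

```shell
# Remove only the broken "command" field from the first container;
# the deployment then rolls new pods using the image's own entrypoint.
kubectl patch deployment <app>-worker -n <app-namespace> --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
```

Of course this still gets clobbered by the next `deis pull`, since the source of the bad value is upstream of Kubernetes.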
My thinking is that the broken `command` parameter is persisted somewhere in the Deis database (or similar) and gets reapplied to the deployment when I run `deis pull`.
I've tried the troubleshooting guide and dug around in `deis-database`, but I can't find where the deployment for the worker process is created, or where the deployment parameters that get passed to Kubernetes on a `deis pull` come from.
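For reference, this is roughly how I've been poking at the database pod. The pod label, database/user names, and table names below are assumptions based on a default Workflow install and the controller being a Django app, so they may need adjusting:

```shell
# Find the database pod (label and namespace assumed from a default install)
kubectl get pods -n deis -l app=deis-database

# Open psql inside it; user and database names are guesses from the defaults
kubectl exec -it <deis-database-pod> -n deis -- psql -U postgres deis

# Inside psql, list tables and look at recent releases. Table names like
# api_release are assumptions, not verified against the controller's schema:
#   \dt
#   SELECT uuid, created FROM api_release ORDER BY created DESC LIMIT 5;
```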
Any help appreciated. Thank you!
Edit:
Running deis v2.10.0 on Google Cloud