Workflow issue using store persistent data in AWS S3 #11
This looks like it can be closed; it's a misconfiguration or other user-specific issue. (Can't repro.)
Hmm... this doesn't even look like a problem with S3, but rather with the connection to the PostgreSQL database. I am going to close this now. Thanks @kingdonb 👍
This is what it looked like when I pointed a new Workflow installation (with new secrets) at an existing cluster's database backup, FWIW.
@kingdonb, did you use the same username and password to access the database? If so, it should have connected to the existing database, but maybe the PostgreSQL database secret was misconfigured...
No, the issue was that the secrets were new, but the postgres database already had a password set. This actually happened on the cluster that runs https://versions.teamhephy.info. It was recoverable and no data was lost, as long as you have full access to the cluster and the storage account is also intact.
Great to know all this above for issue tracking and possible problems. 👍 |
From @IulianParaian on July 9, 2017 18:44
I tried to set up Workflow to store persistent data in AWS S3.
I followed the steps from here
This is what the custom values.yaml file looks like:
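The reporter's actual values.yaml contents were not captured in this copy of the issue. For reference, an S3-backed storage configuration for the Workflow chart typically follows the shape below; every value shown is a placeholder, not the reporter's configuration:

```yaml
# Hypothetical sketch of a values.yaml for off-cluster storage in S3.
# All credentials and bucket names below are placeholders.
global:
  storage: s3

s3:
  accesskey: "AKIA..."               # IAM access key with permission to the buckets
  secretkey: "..."                   # matching IAM secret key
  region: "us-east-1"
  registry_bucket: "my-registry-bucket"
  database_bucket: "my-database-bucket"
  builder_bucket: "my-builder-bucket"
```

Note that the database component also derives its PostgreSQL credentials from chart-managed secrets, which is relevant to the misconfiguration discussed above: pointing a fresh install (fresh secrets) at an existing database backup can produce a password mismatch.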
After installing Workflow
helm install deis/workflow --namespace deis -f values.yaml
the deis-controller pod is not starting, and in the logs I'm getting:
Copied from original issue: deis/workflow#839