Workflow issue using store persistent data in AWS S3 #11

Closed
Cryptophobia opened this issue Mar 20, 2018 · 6 comments
@Cryptophobia
Member

From @IulianParaian on July 9, 2017 18:44

I tried to set up Workflow to store persistent data in AWS S3.
I followed the steps from here.
This is what the custom values.yaml file looks like:

global:
  # Set the storage backend
  storage: s3

s3:
  # Your AWS access key. Leave it empty if you want to use IAM credentials.
  accesskey: "xxxx"
  # Your AWS secret key. Leave it empty if you want to use IAM credentials.
  secretkey: "xxxx"
  # Any S3 region
  region: "xx"
  # Your buckets.
  registry_bucket: "registry-xxxx"
  database_bucket: "database-xxxx"
  builder_bucket: "builder-xxxx"

After installing Workflow with
helm install deis/workflow --namespace deis -f values.yaml
the deis-controller pod does not start, and in the logs I'm getting:

system information:
Django Version: 1.11.3
Python 3.5.2
 Django checks:
System check identified no issues (2 silenced).
 Health Checks:
Checking if database is alive
There was a problem connecting to the database
FATAL:  password authentication failed for user "xxxxx"
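
A minimal first diagnostic for this error would be to compare the password the controller was handed against what the database accepts. This is a sketch only: the deis namespace, the database-creds secret name, and the app=deis-database label are assumptions about a stock Workflow install, not confirmed in this thread.

# Decode the credentials the controller was given (names assumed):
kubectl -n deis get secret database-creds -o jsonpath='{.data.user}' | base64 -d; echo
kubectl -n deis get secret database-creds -o jsonpath='{.data.password}' | base64 -d; echo

# Check the database pod's logs for the failed authentication attempts:
kubectl -n deis logs -l app=deis-database --tail=50

If the decoded password differs from the one the database was initialized with, the controller health check fails exactly like this.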

Copied from original issue: deis/workflow#839

@kingdonb
Member

This looks like it can be closed: a misconfiguration or other user-specific issue. (Can't repro.)

@Cryptophobia
Member Author

Hmm... this doesn't even look like a problem with S3, but with the connection to the PostgreSQL database. I am going to close this now. Thanks @kingdonb 👍

@kingdonb
Member

This is what it looked like when I pointed a new Workflow installation (with new secrets) at an existing cluster's database backup, FWIW.

@Cryptophobia
Member Author

@kingdonb, did you use the same username and password to access the database? If so, it should have connected to the existing database, but maybe the PostgreSQL database secret was misconfigured...

@kingdonb
Member

kingdonb commented Oct 31, 2018

No, the issue was that the secrets were new, but the postgres database already had a password set.

If I had not done helm delete --purge, then I think my secrets would have been intact and the hooks would not have run to generate new secrets. But I think I was actually on a new cluster with some old storage credentials (the storage itself was still intact); I believe I actually had to kubectl exec into the database in order to reset the password and bring the cluster up successfully.
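
A minimal sketch of that reset, assuming the deis namespace, a database-creds secret with user/password keys, and an app=deis-database pod label; all of these names are assumptions about a default Workflow install, not confirmed here:

# Read the credentials from the newly generated secret (names assumed):
DB_USER=$(kubectl -n deis get secret database-creds -o jsonpath='{.data.user}' | base64 -d)
DB_PASS=$(kubectl -n deis get secret database-creds -o jsonpath='{.data.password}' | base64 -d)

# Find the database pod and reset the role's password inside it
# (the psql superuser name may differ in your database image):
DB_POD=$(kubectl -n deis get pods -l app=deis-database -o jsonpath='{.items[0].metadata.name}')
kubectl -n deis exec "$DB_POD" -- \
  psql -U postgres -c "ALTER USER \"$DB_USER\" WITH PASSWORD '$DB_PASS';"

After the reset, restarting the deis-controller pod should let its database health check pass.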

This actually happened on the cluster that runs https://versions.teamhephy.info

It was recoverable and no data was lost, so long as you have full access to the cluster and the storage account is also intact.

@Cryptophobia
Copy link
Member Author

Great to know all of the above for issue tracking and possible future problems. 👍
