Delete unused prometheus volume on cluster down #1863

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account


Merged: 12 commits merged into master from cleanup-pvc on Mar 1, 2021

Conversation

@miguelvr (Collaborator) commented on Feb 4, 2021:

checklist:

  • run make test and make lint
  • test manually (i.e. build/push all images, restart operator, and re-deploy APIs)

@miguelvr requested a review from vishalbollu on March 1, 2021 16:56
}
_, _, err = runGCPManagerAccessCommand(uninstallCmd, *accessConfig, nil, nil)
if err != nil {
    exit.Error(err)
Review comment (Contributor):

For cluster down, it is generally preferable to have best-effort deletes.

Consider the following scenario: the first cluster-gcp down request fails at the delete-cluster step. On the next retry, cluster-gcp down fails at the volume-removal step, because the previous attempt already tore down enough of the cluster that the kubectl config can no longer be retrieved. The user can't complete cluster-gcp down because it keeps failing at an unrelated step.
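
A minimal sketch of what best-effort handling could look like here (not the PR's code; the identifiers are taken from the diff above, and the warning-and-continue behavior is an assumption about the suggested approach):

_, _, err = runGCPManagerAccessCommand(uninstallCmd, *accessConfig, nil, nil)
if err != nil {
    // warn and continue instead of exit.Error(err), so the remaining
    // teardown steps still run and a retried cluster down isn't blocked
    fmt.Printf("warning: failed to delete persistent volumes: %v\n", err)
}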

@@ -97,6 +98,7 @@ func clusterInit() {
addClusterRegionFlag(_clusterDownCmd)
addAWSCredentialsFlags(_clusterDownCmd)
_clusterDownCmd.Flags().BoolVarP(&_flagClusterDisallowPrompt, "yes", "y", false, "skip prompts")
_clusterDownCmd.Flags().BoolVar(&_flagClusterDownKeepVolumes, "keep-volumes", false, "keep cortex provisioned persistent volumes")
Review comment (Contributor):

This flag might be renamed to --keep-cloud-resources so that it can be used to preserve volumes, buckets, policies, and log groups. For now I think it is fine.
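
A hypothetical sketch of that rename (not part of this PR; the flag name and variable are assumptions), mirroring the existing --keep-volumes registration:

// single flag covering all cortex provisioned cloud resources
_clusterDownCmd.Flags().BoolVar(&_flagClusterDownKeepCloudResources, "keep-cloud-resources", false, "keep cortex provisioned cloud resources (volumes, buckets, policies, log groups)")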


function uninstall_grafana() {
  kubectl delete statefulset --namespace default grafana >/dev/null
  kubectl delete pvc --namespace default grafana-storage >/dev/null
Review comment (Contributor):

In a scenario where cluster down fails, it may not be possible to get the kube config needed to delete the volumes via kubectl. We should use cloud-specific APIs to find and delete the volumes without relying on kubectl.
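
A rough sketch of what that could look like on GCP, assuming the cluster's disks carry an identifying label (the label key, the project/zone parameters, and the function name are assumptions, not existing cortex code):

package gcp

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

// deleteClusterDisks finds and deletes persistent disks via the GCP API
// directly, so cleanup works even when the kube config is unavailable.
func deleteClusterDisks(ctx context.Context, project, zone, clusterName string) error {
	svc, err := compute.NewService(ctx)
	if err != nil {
		return err
	}

	disks, err := svc.Disks.List(project, zone).Do()
	if err != nil {
		return err
	}

	for _, disk := range disks.Items {
		if disk.Labels["cluster-name"] != clusterName { // hypothetical label key
			continue
		}
		// best-effort: log and continue rather than aborting the whole teardown
		if _, err := svc.Disks.Delete(project, zone, disk.Name).Do(); err != nil {
			log.Printf("failed to delete disk %s: %v", disk.Name, err)
		}
	}
	return nil
}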

@vishalbollu (Contributor) left a review comment:

The two outstanding comments can be addressed in follow-up tickets.

@RobertLucian merged commit 66c9545 into master on Mar 1, 2021
@RobertLucian deleted the cleanup-pvc branch on March 1, 2021 22:43