
audit followup: ensure secretmanager secrets and bindings are managed by scripts #1731

Open
spiffxp opened this issue Feb 26, 2021 · 14 comments
Labels: area/access, area/prow, area/release-eng, kind/cleanup, kind/documentation, lifecycle/frozen, priority/backlog, sig/k8s-infra

Comments


spiffxp commented Feb 26, 2021

Caught via #1718 (comment)

I've done the following pattern a few times now (most recently #1696 (comment)):

  • someone needs to pass a secret to me and wonders how to transfer it
  • I set up a secretmanager secret in the relevant project
    • for secrets intended to become k8s secrets in a cluster: the project hosting the cluster
      • kubernetes-public for apps running on aaa
      • k8s-infra-prow-build-trusted for prow jobs or components running on prow-build-trusted
      • etc.
  • Set up appropriate permissions on the secret (need to figure out a consistent pattern here)
    • Org admins should implicitly get access via their roles/owner binding on the organization
    • Give some sort of "oncall" team ownership of the secret and its versions for break-glass cases
    • Give whoever is handing the secret to us write access to the secret version
  • Set up labels on the secret
    • app: foo if it's for an app in the foo dir running on aaa
    • group: sig-foo if the app is owned by sig-foo
    • ??? maybe the namespace the secret is destined for?
  • Request that for things destined to be k8s secrets, people store an actual Secret manifest yaml in the secret

Put it all together and this allows for relatively simple / safe deployment: https://github.com/kubernetes/k8s.io/tree/main/slack-infra#how-to-deploy
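For illustration, a minimal Terraform sketch of what managing one such secret and its bindings declaratively could look like (project, group, and label values below are placeholders, not the actual k8s.io config):

```hcl
# Hypothetical sketch of a declaratively managed secretmanager secret.
# Project, group, and label values are placeholders.
resource "google_secret_manager_secret" "foo" {
  project   = "kubernetes-public" # the project hosting the cluster the secret is destined for
  secret_id = "foo-credentials"

  labels = {
    app   = "foo"     # app in the foo dir running on aaa
    group = "sig-foo" # owning SIG
  }

  replication {
    auto {} # `automatic = true` on older google provider versions
  }
}

# Break-glass access for an "oncall" style team (placeholder group address).
resource "google_secret_manager_secret_iam_member" "oncall_admin" {
  project   = google_secret_manager_secret.foo.project
  secret_id = google_secret_manager_secret.foo.secret_id
  role      = "roles/secretmanager.admin"
  member    = "group:oncall-team@example.org"
}

# Whoever is handing us the secret only needs to be able to add versions.
resource "google_secret_manager_secret_iam_member" "provider_can_add_versions" {
  project   = google_secret_manager_secret.foo.project
  secret_id = google_secret_manager_secret.foo.secret_id
  role      = "roles/secretmanager.secretVersionAdder"
  member    = "user:someone@example.com"
}
```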

Problems with the above:

  • it needs to be documented instead of me just happening to kinda/sorta do something consistent
  • the secret creation should be scripted

spiffxp commented Feb 27, 2021

/kind cleanup
/kind documentation
/priority important-soon
/area cluster-mgmt
/area access

Labels and areas for relevant sigs:

/sig testing
/area prow
I'd like to set this up as the standard for managing secrets for prow.k8s.io (with the possible exception of kubeconfigs that speak to google-internal clusters)

/sig release
/area release-eng
triage-party, publishing-bot, and possibly other secrets

/sig contributor-experience
slack-infra, groups management

/sig scalability
perf-dash

/sig node
node-perf-dash

@k8s-ci-robot k8s-ci-robot added kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. kind/documentation Categorizes issue or PR as related to documentation. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. area/cluster-mgmt area/access Define who has access to what via IAM bindings, role bindings, policy, etc. sig/testing Categorizes an issue or PR as relevant to SIG Testing. area/prow Setting up or working with prow in general, prow.k8s.io, prow build clusters sig/release Categorizes an issue or PR as relevant to SIG Release. area/release-eng Issues or PRs related to the Release Engineering subproject sig/contributor-experience Categorizes an issue or PR as relevant to SIG Contributor Experience. sig/scalability Categorizes an issue or PR as relevant to SIG Scalability. sig/node Categorizes an issue or PR as relevant to SIG Node. labels Feb 27, 2021

spiffxp commented Mar 3, 2021

Another piece of followup: I set up a custom secretLister role (ref: #1726) which should allow someone to use the GCP console to manage secrets. It should work such that adding it to a group means they'll be able to list/see secrets, but only those they directly have access to. Need to trial this with someone (maybe @jeefy)
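For illustration, a hedged Terraform sketch of what a secretLister-style custom role could look like (the actual role from #1726 may be named, scoped, and defined differently):

```hcl
# Hypothetical sketch of a "secretLister" custom role: lets a group enumerate
# secrets in the console without granting access to any secret payloads.
# Role id, project, and group address are placeholders.
resource "google_project_iam_custom_role" "secret_lister" {
  project     = "kubernetes-public"
  role_id     = "secretLister"
  title       = "Secret Lister"
  description = "List secretmanager secrets without access to their versions"
  permissions = ["secretmanager.secrets.list"]
}

# Members of the group can browse the list of secrets, but can only open
# secrets they have been granted access to directly.
resource "google_project_iam_member" "secret_lister_group" {
  project = "kubernetes-public"
  role    = google_project_iam_custom_role.secret_lister.name
  member  = "group:some-team@example.org"
}
```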


jeefy commented Mar 3, 2021

Happy to help! Just ping. :)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 7, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 7, 2021

spiffxp commented Jul 8, 2021

/remove-lifecycle rotten
/assign
This might be done, or at the very least the description needs a refresh. A quick update:


spiffxp commented Oct 1, 2021

I'd like to add to the definition of done that secrets are provisioned the same way for all of our clusters. As of the completion of #2220, all prow build cluster secrets are provisioned via terraform, but the secrets for aaa are still handled by bash.
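As a rough sketch (not the actual #2220 configuration), provisioning secrets the same way for every cluster project could boil down to iterating over a per-project map in terraform:

```hcl
# Hypothetical sketch: provision the same shape of secrets in each project by
# iterating over a map. The real module layout and secret names may differ.
variable "project" {
  description = "Project hosting the cluster these secrets are destined for"
  type        = string
}

variable "secrets" {
  description = "Secret ids to provision, mapped to their labels"
  type        = map(map(string))
  default = {
    "foo-credentials" = { app = "foo", group = "sig-foo" }
  }
}

resource "google_secret_manager_secret" "managed" {
  for_each  = var.secrets
  project   = var.project
  secret_id = each.key
  labels    = each.value

  replication {
    auto {} # `automatic = true` on older google provider versions
  }
}
```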


spiffxp commented Nov 3, 2021

Once #3028 merges, all of our secrets will be managed by terraform. What then remains is docs on how to provision a new secret.

The age-old question of "where is the best place for these?", and my best guesses at answers:

  • prow: in the respective READMEs in infra/gcp/terraform/k8s-infra-prow-build and/or somewhere in kubernetes/test-infra
  • apps: in "running on community cluster"


spiffxp commented Nov 17, 2021

I'll drop the respective sig labels for each app since they've been migrated, and this is more on sig-k8s-infra to document now

/remove-sig node
/remove-sig release
/remove-sig scalability
/remove-sig testing
/remove-sig contributor-experience

@k8s-ci-robot k8s-ci-robot removed sig/node Categorizes an issue or PR as relevant to SIG Node. sig/release Categorizes an issue or PR as relevant to SIG Release. sig/scalability Categorizes an issue or PR as relevant to SIG Scalability. sig/testing Categorizes an issue or PR as relevant to SIG Testing. sig/contributor-experience Categorizes an issue or PR as relevant to SIG Contributor Experience. labels Nov 17, 2021

ameukam commented Dec 6, 2021

/milestone v1.24

@k8s-ci-robot k8s-ci-robot modified the milestones: v1.23, v1.24 Dec 6, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 6, 2022

ameukam commented Mar 7, 2022

/remove-lifecycle stale
/milestone clear
/priority backlog
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added priority/backlog Higher priority than priority/awaiting-more-evidence. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 7, 2022
@k8s-ci-robot k8s-ci-robot removed this from the v1.24 milestone Mar 7, 2022
@ameukam ameukam removed the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Mar 7, 2022