
Create Robot account k8s-cve-robot #3295

Closed
PushkarJ opened this issue Mar 1, 2022 · 7 comments

@PushkarJ
Member

PushkarJ commented Mar 1, 2022

Organization or Repo

kubernetes/sig-security

User affected

No response

Describe the issue

KEP-3203: kubernetes/enhancements#3203

We need a robot account with push access to k/sig-security/sig-security-tooling/feeds/official-cve-feed.json on the main branch.

@mrbobbytables
Member

It took me a bit to catch up on the Slack discussions, but my gut feeling is that another bot account is rather heavyweight. A Prow job that tracks PRs with the label, updates a GCS bucket with the information on merge, and then triggers a webhook to build the site would, I think, cover all the use cases with minimal overhead and avoid having to open additional PRs.
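
For illustration, a rough Go sketch of the kind of job being proposed here, using the go-github and GCS client libraries; the label, bucket name, object name, and webhook URL are all placeholders rather than anything agreed on:

```go
// Rough sketch of the proposed job: find merged PRs carrying the CVE
// label, publish them as a JSON blob to a GCS bucket, then ping a webhook
// so the site rebuilds. Label, bucket, and webhook URL are placeholders.
package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"

	"cloud.google.com/go/storage"
	"github.com/google/go-github/v45/github"
)

func main() {
	ctx := context.Background()

	// Query GitHub for merged PRs with the (placeholder) CVE label.
	gh := github.NewClient(nil) // a real job would authenticate with a token
	query := `repo:kubernetes/kubernetes is:pr is:merged label:official-cve-feed`
	result, _, err := gh.Search.Issues(ctx, query, nil)
	if err != nil {
		log.Fatalf("github search: %v", err)
	}

	// Serialize the results into the JSON blob the website will consume.
	blob, err := json.MarshalIndent(result.Issues, "", "  ")
	if err != nil {
		log.Fatalf("marshal: %v", err)
	}

	// Update the GCS bucket (placeholder bucket and object names).
	gcs, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("gcs client: %v", err)
	}
	defer gcs.Close()
	w := gcs.Bucket("k8s-cve-feed").Object("official-cve-feed.json").NewWriter(ctx)
	w.ContentType = "application/json"
	if _, err := w.Write(blob); err != nil {
		log.Fatalf("gcs write: %v", err)
	}
	if err := w.Close(); err != nil {
		log.Fatalf("gcs close: %v", err)
	}

	// Trigger the site rebuild via a (placeholder) webhook URL.
	resp, err := http.Post("https://example.com/site-build-hook", "application/json", nil)
	if err != nil {
		log.Fatalf("webhook: %v", err)
	}
	resp.Body.Close()
}
```

Run as a Prow postsubmit or periodic job, this needs GCS credentials but no extra GitHub bot account.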

@PushkarJ
Member Author

PushkarJ commented Mar 2, 2022

Hey @mrbobbytables, thank you for reviewing the Slack threads :) Sorry, that must have been exhausting to read.

I probably should have added this earlier, but there is an initial PR for the KEP: kubernetes/enhancements#3204, which is an easier read than all the Slack threads. It also explains the possible pros and cons of using a GCS bucket, under Alternatives Considered.

One reason against the GCS bucket not mentioned in the PR is that, when automation fails, manual updates to the JSON blob would be harder (they require knowledge of GCS read/write primitives and a credential with write access) than when the same JSON blob lives in a GitHub repo. (I should probably add this to the PR.)

Also, at the risk of repetition, I will try to summarize what we expect the robot account to do, in case there is some confusion:

  • The robot account will not open any PRs
  • The robot account will not track any PRs
  • The robot account will push to the main branch of k/sig-security
  • The robot account does not need to maintain a fork of k/sig-security

@mrbobbytables
Member

mrbobbytables commented Mar 2, 2022

@PushkarJ the r/w permissions are pretty easy to handle with a group defined in the k8s.io repo. I'm still rather hesitant to run another bot that can potentially fail instead of implementing it as part of a job in our CI. We've done that for quite a few things, e.g. the automation of GitHub org updates.

EDIT: I read the KEP. It's still Prow running it, just with a different account. TBH, I still think publishing to a GCS bucket is better than something with direct write permissions; the extra account just seems like an unnecessary step.

@PushkarJ
Member Author

PushkarJ commented Mar 2, 2022

@mrbobbytables I think I understand what you are proposing a bit better now :) Thanks for being patient and reviewing the KEP. So it seems like the proposed flow would be something like this:

A periodic Prow job:

  1. Queries the GitHub API for fixed official CVEs
  2. Generates a JSON blob based on the query results
  3. Writes the JSON blob to the GCS bucket
  4. Triggers the k/website build
  5. The k/website build pulls the JSON blob from the GCS bucket during the rebuild
  6. k/website renders the JSON blob and the corresponding table after the build

That definitely sounds simpler than the current flow, but I just wanted to confirm. The only unknown for me is whether step 5 is feasible, but I cannot think of a reason it should not be.
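
For illustration, a minimal Go sketch of step 5, assuming the object is publicly readable; public GCS objects are served over HTTPS at storage.googleapis.com/&lt;bucket&gt;/&lt;object&gt;, and the bucket and object names here are placeholders:

```go
// Minimal sketch of step 5: the website build fetching the JSON blob from
// the GCS bucket over plain HTTPS. Bucket and object names are placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("https://storage.googleapis.com/k8s-cve-feed/official-cve-feed.json")
	if err != nil {
		log.Fatalf("fetch feed: %v", err)
	}
	defer resp.Body.Close()

	// Decode into a generic shape; the real feed schema is defined in the KEP.
	var feed []map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&feed); err != nil {
		log.Fatalf("decode feed: %v", err)
	}
	fmt.Printf("feed contains %d entries\n", len(feed))
}
```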

Also, would requesting and managing access to a GCS bucket look something like this PR: https://github.com/kubernetes/k8s.io/pull/2570/files ?

@PushkarJ
Member Author

PushkarJ commented Mar 4, 2022

We are discussing the feasibility of the above approach in a Slack thread here: https://kubernetes.slack.com/archives/C09QZ4DQB/p1646264348784509?thread_ts=1645129435.563709&cid=C09QZ4DQB

@sftim
Contributor

sftim commented Mar 4, 2022

Step 5 is possible provided that the GCS bucket is hooked up to serve objects via HTTPS.
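
For example, a short sketch (with placeholder names) of what that hookup could look like on the bucket side; marking the object publicly readable makes GCS serve it at https://storage.googleapis.com/&lt;bucket&gt;/&lt;object&gt;:

```go
// Sketch: mark the feed object publicly readable so GCS serves it over
// HTTPS. Bucket and object names are placeholders. Buckets with uniform
// bucket-level access enabled would grant this via IAM instead of ACLs.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("gcs client: %v", err)
	}
	defer client.Close()

	obj := client.Bucket("k8s-cve-feed").Object("official-cve-feed.json")
	if err := obj.ACL().Set(ctx, storage.AllUsers, storage.RoleReader); err != nil {
		log.Fatalf("set public-read ACL: %v", err)
	}
}
```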

@PushkarJ
Member Author

PushkarJ commented Mar 8, 2022

I am closing this in favor of the GCS bucket + dynamic page generation workflow proposed above, which will not need a robot account or a fork to be maintained.
