Create Robot account k8s-cve-robot #3295
Comments
It took me a bit to catch up on the Slack discussions, but my gut is that another bot account is rather heavy. A Prow job that tracks PRs with the label, updates a GCS bucket with the information on merge, and then triggers a webhook to build the site would, I think, cover all the use cases with minimal overhead and avoid having to open additional PRs.
Hey @mrbobbytables thank you for reviewing the Slack threads :) Sorry, that must have been exhausting to read. I probably should have added this earlier, but there is an initial PR for the KEP: kubernetes/enhancements#3204, which is an easier read than all the Slack threads. It also explains the possible pros and cons of using a GCS bucket, under "Alternatives considered". Also, at the risk of repetition, I will try to summarize what we expect the robot account to do, in case there is some confusion:
@PushkarJ the r/w permissions are pretty easy to handle with a group defined in the k8s.io repo. I'm still rather hesitant about running another bot that can potentially fail instead of implementing it as part of a job in our CI. We've done this for quite a few things, e.g. the automation of GitHub org updates. EDIT: I read the KEP; it's still Prow running it, just with a different account. TBH, I still think publishing to a GCS bucket is better than something with direct write permissions; the extra account just seems like an unnecessary step.
@mrbobbytables I think I understand what you are proposing a bit better now :) Thanks for being patient and reviewing the KEP. So it seems like the proposed flow would be something like this: a periodic Prow job:
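For the sake of discussion, the core of such a periodic job could be sketched roughly as below. This is only an illustration: the label name `official-cve-feed`, the PR fields used, and the feed shape are assumptions, not the schema the KEP actually defines.

```python
import json

# Hypothetical label name; the real label is defined by the KEP.
CVE_LABEL = "official-cve-feed"


def build_cve_feed(merged_prs):
    """Filter merged PRs carrying the CVE label and emit a JSON feed.

    `merged_prs` is assumed to be a list of dicts shaped like the
    GitHub API's pull-request objects (only the fields used here).
    """
    entries = []
    for pr in merged_prs:
        labels = {label["name"] for label in pr.get("labels", [])}
        if CVE_LABEL in labels and pr.get("merged_at"):
            entries.append({
                "title": pr["title"],
                "url": pr["html_url"],
                "merged_at": pr["merged_at"],
            })
    # Newest first, so consumers see recent CVEs at the top.
    entries.sort(key=lambda e: e["merged_at"], reverse=True)
    return json.dumps({"items": entries}, indent=2)
```

The job would then write this JSON to `official-cve-feed.json` and copy it into the bucket (e.g. with `gsutil cp`), after which the site-build webhook fires.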
That does sound simpler than the current flow for sure, but I just wanted to confirm. The only unknown for me is whether step 5 is feasible, but I cannot think of a reason it should not be. Also, would requesting and managing access to a GCS bucket look something like this PR: https://github.com/kubernetes/k8s.io/pull/2570/files ?
We are discussing the feasibility of the above approach on the Slack thread here: https://kubernetes.slack.com/archives/C09QZ4DQB/p1646264348784509?thread_ts=1645129435.563709&cid=C09QZ4DQB
Step 5 is possible provided that the GCS bucket is hooked up to serve objects via HTTPS.
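Publicly readable GCS objects are served at `https://storage.googleapis.com/<bucket>/<object>`, so consumers could fetch the feed with a plain HTTPS GET. A tiny helper to construct that URL (the bucket name below is hypothetical):

```python
from urllib.parse import quote


def gcs_public_url(bucket, obj):
    """Return the public HTTPS URL GCS serves for an object.

    Publicly readable objects are available at
    https://storage.googleapis.com/<bucket>/<object>.
    """
    return f"https://storage.googleapis.com/{bucket}/{quote(obj)}"


# Example with a made-up bucket name:
feed_url = gcs_public_url("k8s-cve-feed", "official-cve-feed.json")
```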
I am closing this in favor of the GCS bucket + dynamic page generation workflow proposed above, which will not need a robot account or a fork to be maintained.
Organization or Repo
kubernetes/sig-security
User affected
No response
Describe the issue
KEP-3203: kubernetes/enhancements#3203
We need a robot that has push access to
k/sig-security/sig-security-tooling/feeds/official-cve-feed.json
on the main branch.