KEP-2539: Addressing comments from #2540 #2553
Conversation
ref: #2539
/sig release We are getting close to the point where we'd like to announce some process changes this enables (less toil, more community empowerment). In order to get this to
A few questions:
Around graduation, it seems that if we turn this on, it's functionally a "straight to prod/GA" scenario.
Is there a scenario/trial period where we could do a "canary" prow cluster that gets the new images immediately, so that repos/jobs/subprojects could opt in to test running on that cluster (or clusters)? (I'd be fine with having the RelEng + enhancements repos trial it.)
What would be the implementation timeline?
- Announce this change before enabling Alpha phase on the #prow and #testing-ops channels on Slack
This needs to be announced widely to k-dev.
Sounds good; updated to also include emailing kubernetes-dev@googlegroups.com.
cc: @kubernetes/release-engineering
Having two prows handle the same org is very tricky, and part of what's making it difficult to plan migration from prow.k8s.io in google.com to a prow instance in kubernetes.io. So I would say probably not for prow as part of this KEP, but we are looking into whether something like this is a feasible approach for migrating prow.k8s.io to community.

But for jobs, that's part of the changes we're proposing here: break out the monolithic bump-all-jobs-and-prow PR into distinct PRs for prow (still owned by test-infra-oncall) and for jobs (and hand ownership of those merges to... also auto-merging?). If we wanted to get fancy with bumping different paths or different images at different cadences, we could. But the goal is that this complexity (if desired) shouldn't be test-infra-oncall's concern.

As far as "run on cluster": this KEP is independent of what gets bumped in k8s-infra-prow-build and how. I do plan on setting up autobump for it, but the components currently impacted would be greenhouse and boskos (e.g. this was a manual run I did: kubernetes/k8s.io#1740). Both update on a much slower cadence.
@justaugustus, are you acting as SIG Release approver for this? Or are you suggesting someone from @kubernetes/release-engineering should be?
I'm going to defer to @chaodaiG for that, but I think we're capable of landing this before test freeze. As a reminder, we have never frozen test-infra for release freezes, and nobody has provided compelling evidence for why we should (ref: kubernetes/sig-release#907). But we are absolutely not interested in causing churn while contributors need to remain focused.
With the addition of a loud announcement about this and clarification of when in a cycle to do this, I'm happy to approve for SIG Release. Just tagging the team for eyes on.
Agreed that this seems not to have been required in the past, but I'll let others chime in as well.
Thank you! Updated the KEP with a loud announcement, as well as to make you the approver from SIG Release.
The actual implementation of the alpha phase is trivial and could be done in an hour, so timing is very flexible. Totally agree with @spiffxp that we don't want to cause any churn in test. @justaugustus, what timing do you think would be better?
@chaodaiG -- If the work is ready now, right after we cut the branch for v1.21 probably works!
Awesome! And yes, I believe it's ready; I've just created all the PRs for implementing the alpha phase:
Will announce this change on k8s-dev right after the v1.21 branch is cut.
@justaugustus, @spiffxp, now that the v1.21 branch has been cut, I'm preparing to announce it. Quick question: what should happen first?
Sure, update kep.yaml with status: provisional and I'm ready to /approve.
Is the intent to announce this as alpha or beta?
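For reference, the kep.yaml edit requested above would look roughly like the sketch below. Only the status: provisional field comes from this thread; the remaining fields and their values are illustrative placeholders following the usual KEP metadata layout, not the actual contents of KEP-2539's kep.yaml.

```yaml
# Hypothetical sketch of the requested kep.yaml change.
# Only the `status` value is taken from the review thread;
# all other fields/values below are placeholders.
title: Automated image bumps for prow jobs   # placeholder title
kep-number: 2539
authors:
  - "@chaodaiG"                              # placeholder author list
owning-sig: sig-testing                      # placeholder owning SIG
status: provisional                          # the value requested in review
```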
The initial implementation is a low-frequency deploy every 6 hours, which qualifies as the alpha phase, so it will be announced as alpha.
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: chaodaiG, saschagrunert, spiffxp. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
This updates KEP-2539 based on comments from #2540.