
Clarify namespace sameness control via optional policy #6748


Open
wants to merge 1 commit into base: master from sameness-impl-policy

Conversation

@thockin (Member) commented Jul 19, 2022

After many discussions with customers and implementors, I think we need
to clarify that implementations should have an axis of freedom around
implementation-defined control of sameness.

E.g. "Sameness applies to only in ,
and to in ."

This should be ALLOWED but not REQUIRED.

/sig multi-mcluster

@k8s-ci-robot (Contributor)

@thockin: The label(s) sig/multi-mcluster cannot be applied, because the repository doesn't have them.

In response to this:

After many discussions with customers and implementors, I think we need
to clarify that implementations should have an axis of freedom around
implementation-defined control of sameness.

E.g. "Sameness applies to only in ,
and to in ."

This should be ALLOWED but not REQUIRED.

/sig multi-mcluster

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. sig/multicluster Categorizes an issue or PR as relevant to SIG Multicluster. labels Jul 19, 2022
@thockin (Member, Author) commented Jul 19, 2022

/sig multi-cluster
/assign lauralorenz

@k8s-ci-robot (Contributor)

@thockin: The label(s) sig/multi-cluster cannot be applied, because the repository doesn't have them.

In response to this:

/sig multi-cluster
/assign lauralorenz

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 17, 2022
@thockin (Member, Author) commented Oct 23, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 23, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2023
@thockin (Member, Author) commented Jan 21, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2023
@lauralorenz (Contributor) left a comment

👋 👋 this came up in slack recently

or D, and bar-team's namespaces cannot be used from clusters A or B.

As an implementation choice, the authority may offer controls which govern
which cluster-namespaces are considered for sameness. For example, it could
Contributor

Suggested change
which cluster-namespaces are considered for sameness. For example, it could
which cluster-namespaces are considered for which aspects of sameness. For example, it could

Overall, I think the other two comments in this section would benefit from clarification that the policy could establish this at a granularity of at least per-namespace, per "aspect" (like namespace creation, MCS, RBAC sync). And if there is any limit to how far that granularity can go (for example, if it goes so far that you can express policy on subsets of RBAC, does that defeat the purpose of namespace sameness?), we should specify that here.
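
For illustration only - none of this is a real or proposed API - a per-namespace, per-"aspect" policy along those lines might be sketched as:

```yaml
# Purely hypothetical sketch of an authority-level sameness policy; the kind,
# group, and field names are invented for illustration and are not part of any
# SIG-Multicluster API.
apiVersion: example.x-k8s.io/v1alpha1   # hypothetical group
kind: SamenessPolicy                    # hypothetical kind
metadata:
  name: foo-team-policy
spec:
  namespaceSelector:
    matchNames: ["foo-*"]               # which namespaces this policy governs
  clusters: ["cluster-a", "cluster-b"]  # where sameness applies
  aspects:                              # per-"aspect" granularity as discussed above
  - NamespaceCreation
  - MultiClusterServices
  - RBACSync
```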

@thockin (Member, Author)

PTAL at next push

@mikemorris (Member) left a comment

Namespace opt-out (explicit cluster-local namespaces, such as for the cluster-local metrics service example) feels reasonable, particularly if it would prevent ServiceExports in that namespace from functioning (could set a status condition to indicate the target service will not be exported due to policy).

However, the example scenario provided for authority sameness policy is significantly different, and seems to attempt defining multiple separate sameness groups within a set of clusters, and leaves it unclear whether any service networking between the groups is expected to be possible.
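
As a sketch of that status-condition idea (the condition type and reason below are invented for illustration, not part of the MCS API today):

```yaml
# Hypothetical status on a ServiceExport in a namespace that policy marks
# cluster-local; the condition type and reason shown here are illustrative only.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: metrics
  namespace: monitoring
status:
  conditions:
  - type: Exported                  # hypothetical condition type
    status: "False"
    reason: NamespaceClusterLocal   # hypothetical reason
    message: 'namespace "monitoring" is cluster-local by policy; the service will not be exported'
```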

@@ -92,10 +96,29 @@ thus namespace sameness applies), the implementation of multi-cluster services
probably should not be on-by-default. The details of how multi-cluster
services will work are an area for innovation, but ideas include:
* Opt-in: services must be "exported" to be merged across clusters
* Opt-out: services or namespace can be opted out of service merging
* Opt-out: services or namespaces can be opted out of sameness
* Different discovery: merged services and "raw" services use different names
or other discovery mechanisms
Member

I think this section could benefit from a larger update to reflect concrete implementations such as ServiceExport, ServiceImport and .clusterset.local DNS.

Do any implementations actually support a service opt-out pattern, where a default or configured behavior is for all services in a given namespace to be global/merged/same, but some could be reserved/un-exported, or perhaps more confusingly, exported but "not-same"?

"Export all services in namespace" feels like a reasonable option, and if more granularity is needed, following a service opt-in pattern with manual ServiceExports definitions instead or moving the cluster-local services to a local namespace feels preferable and less confusing.

Contributor

but some could be reserved/un-exported, or perhaps more confusingly, exported but "not-same"?

I understand namespace sameness to allow for only the first, not the second (I would prefer the line here say that what is being opted out of is, as originally, "service merging", or even better "exporting", but definitely not "sameness" as proposed here). Most (all?) MCS implementations today are opt-in, so implementation-wise opt-out is only its corollary: not having ServiceExports made in 1-N clusters with the same named namespace/service in the first place, or removing them. Istio et al. probably have more to say on opt-out patterns of behavior, since those implementations are more on-by-default.

@thockin (Member, Author)

I'll update. At the time this was written, MCS semantics had not really been pinned down. That said, I like it as an example, but I don't want to inline that spec here.

Will try to balance.

Comment on lines 109 to 113
Consider the same organization from previous examples, but with more clusters:
A, B, C, and D. They want to assign foo-team's namespaces to clusters A and B,
and bar-team's namespaces to clusters C and D. They also want to ensure that
the opposite is never true - foo-team's namespaces cannot be used in clusters C
or D, and bar-team's namespaces cannot be used from clusters A or B.
@mikemorris (Member) commented Mar 24, 2023

This specific example feels like it doesn't fall under the scope defined in this document.

the set of all clusters, governed by a single authority, that are expected to work together

I'd contend that what is being described is actually two independent sets of clusters, clusterset-foo consisting of A and B, and clusterset-bar with clusters C and D, and what is needed for this scenario is not a complex "sameness" configuration, but rather introducing a pattern for "cross-clusterset" service networking, where sameness is not assumed outside the clusterset boundary.

"Cross-clusterset" service networking would enable patterns like foo-team exporting an api service to clusterset-bar, where bar-team could route traffic from their web frontend to the imported api backend from clusterset-foo (a ServiceImport may not be able to be created automatically by a multicluster controller though, as it would be ambiguous in which namespace it belongs without additional configuration).

Consul service mesh currently offers similar functionality through our cluster peering feature between individual clusters, and we would be interested in helping standardize a pattern for this at the clusterset abstraction layer. The desire for a cross-clusterset service networking model was also described by @srampal during a presentation in recent SIG-Multicluster and Gateway API GAMMA meetings.
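
For context, the ServiceImport that would need a home namespace on the importing side looks roughly like this under the MCS API (names and port are placeholders; in practice it is usually created by the implementation, not by hand):

```yaml
# A ServiceImport as defined by the MCS API (KEP-1645); shown only to
# illustrate that it has to live in *some* namespace on the importing side.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: api                 # placeholder service name
  namespace: foo-something  # which namespace should hold it is the open question
spec:
  type: ClusterSetIP
  ports:
  - port: 443
    protocol: TCP
```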

@thockin (Member, Author)

From speaking to lots of people about this, I think it's unfortunately more complicated than that. It's not always perfectly-disjoint or perfectly-overlapping.

I'm wary of demanding perfect-sameness (I tried, believe me :) because it ends up with a large number of clustersets, each with a small number of clusters, which reduces to the same management-at-scale problems as before. We then have to define sets of sets, and likely sets of sets of sets. It's UX hell all the way down.

The compromise that seems to make the most sense to people is some sort of intra-set selection - explicit or by metadata or by convention - which governs an in-group and an out-group for each question-under-consideration. All I want to do with this PR is make sure we allow for implementations to try things without running afoul of the letter of the law. I am NOT trying to describe an API or even a required semantic.

I do think that cross-clusterset stuff is needed, especially in networking land, but I don't think it is this problem.

Comment on lines 119 to 120
D". If a "foo-something" namespace is found in cluster C it would _not_ be
considered for sameness. The details of how these policies might be expressed
@mikemorris (Member) commented Mar 24, 2023

This doesn't feel workable for an MCS API implementation.

If no Services in a foo-something namespace are intended to be exported for consumption by Services in bar-something namespaces, then these sets of clusters are fully independent and are not "expected to work together".

If at least one Service in a foo-something namespace is intended to be consumed from Services managed by bar-team, then:

  • Preventing creation of the foo-something namespace prevents placing any ServiceImport in it to route to a Service in foo-something on clusters A or B managed by foo-team - services wouldn't be able to be shared between teams.
  • Allowing a foo-something namespace to exist in cluster C but excluding all foo-* prefixed namespaces from "sameness" by policy would have one of the following effects:
    • Prevent the creation of a ServiceImport because the namespace is not the same - this isn't really practical because it prevents any service networking between the teams.
    • Allow the creation of only ServiceImport resources in the foo-something namespace - this effectively makes all foo-* namespaces on clusters C and D only usable for routing to foo-team services on clusters A or B - this is kinda the best case scenario, but has the downside of requiring global namespace management and complex RBAC/sameness policy.
    • Allow the creation of ServiceImport and cluster-local Service or other resources, but not consider a ServiceImport and Service with the same name in the foo-something namespace to be logically identical or fungible - this could lead to significant user confusion if app.foo-something.svc.clusterset.local and app.foo-something.svc.cluster.local were two entirely different applications.

@thockin (Member, Author) commented Apr 22, 2023

I'm not 100% sure I caught all that you are saying here, so correct me if I miss the point, please. Note, my next push changes the scenario a bit WRT A, B, C, D and foo, bar, but I'll answer here with the scenario as written :)

Clusters can be "expected to work together" even if some namespaces in them are "private" to the cluster. As an example, consider kube-system. Every cluster has one, but cross-cluster "sameness" is dubious. If you consider a clusterset as a management domain, and you buy the argument that multi-cluster and cross-cluster capabilities will continue to grow, then there's clearly value in having a smaller number of domains, even if any given capability may or may not apply to a given cluster. IOW, some clusters will use MCS. Some will use MC-Ingress. Some will be CD targets. Some will be build clusters. Some will be dedicated. Some will use gitops. I don't think we can or should demand homegeneity here.

Preventing creation of the foo-something namespace prevents placing any ServiceImport in it

Correct. I'm trying to thread a needle and allow the implementation of MCS to decide between (for example):

  1. all namespaces in all clusters are mergeable (what this describes today)
  2. excluded namespaces in a cluster can't have imports or exports (not sure it's useful, but should be allowed?)
  3. excluded namespaces in a cluster can have a ServiceImport (e.g. in C/foo-something with endpoints from A and B) but cannot have a ServiceExport (even if we might find one in C). In effect, clusters C and D can consume foo-something services but not produce them.

Allow the creation of only ServiceImport resources in the foo-something namespace - this effectively makes all foo-* namespaces on clusters C and D only usable for routing to foo-team services on clusters A or B - this is kinda the best case scenario, but has the downside of requiring global namespace management and complex RBAC/sameness policy.

I don't see the global namespace management as a "problem" really. Or rather, it's a problem we already have, so it's not worse in this case.

Allow the creation of ServiceImport and cluster-local Service or other resources, but not consider a ServiceImport and Service with the same name in the foo-something namespace to be logically identical or fungible

I think the assertion is closer to your 2nd point ("the best case scenario") - there SHOULD NOT be foo-something cluster-local Services or other resources in cluster C. But on the off chance that we find them (e.g. cluster was compromised or somehow fat-fingered) we will NOT engage them because the policy says they should not be there.

As I said somewhere else - what I am trying to do here is allow implementations the freedom to explore. I think different situations will need different options, and I found the language in here to be overly rigid. I am not saying implementations MUST do any of this - just that they MAY.
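
To make option 3 above concrete, a purely hypothetical policy (invented kind, group, and fields - not a proposed API) might look like:

```yaml
# Hypothetical per-namespace policy for clusters C and D: foo-* namespaces may
# consume (import) foo-team services but may not produce (export) them.
# Kind, group, and fields are invented for illustration.
apiVersion: example.x-k8s.io/v1alpha1
kind: NamespaceSamenessOverride         # hypothetical kind
metadata:
  name: foo-consume-only
spec:
  clusters: ["cluster-c", "cluster-d"]
  namespaceSelector:
    matchNames: ["foo-*"]
  multiClusterServices:
    allowImports: true    # a ServiceImport (endpoints from A and B) is allowed
    allowExports: false   # any ServiceExport found here is ignored
```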

After many discussions with customers and implementors, I think we need
to clarify that implementations DO have an axis of freedom around
implementation-defined control of sameness.

E.g. "Sameness applies to <this namespace> only in <these clusters>,
and to <that namespace> in <those clusters>."
@thockin thockin force-pushed the sameness-impl-policy branch from 1a2bc35 to 06c5f20 on April 22, 2023 21:49
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: thockin
Once this PR has been reviewed and has the lgtm label, please assign jeremyot for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

merged across clusters, even if the LDAP-to-RBAC sync from example 2 still
applies consistent RBAC policies. Those are independent capabilities.

### Example 4: Authority sameness policy
@thockin (Member, Author)

A comment thread got lost on the push:

#6748 (comment)

namespaces cannot be used from clusters C or D, and billing-team's namespaces
cannot be used from clusters A or B.

What exactly it means to "be used" depends on each specific multi-cluster
@thockin (Member, Author)

A comment thread got lost on the push:

#6748 (comment)

@mikemorris (Member) commented Jul 5, 2023

@thockin Sorry for taking so long to follow up on this - I'd like to share a bit more context motivating my concerns.

Very much agreed that enforcing "perfect" sameness or disjointedness can be quite difficult or not pragmatic in many contexts - the concern I have about this proposal is coming from the perspective of MCS API consumers, particularly service meshes or third-party cluster management projects. Allowing MCS API providers to loosen these guarantees in an implementation-specific manner without introducing a standard API to express where and how namespace sameness does or does not apply would make it significantly more difficult to build additional functionality on top of the MCS primitives, or to offer a multi-cloud MCS API implementation.

This is definitely a real adoption challenge, but I think proposing a concrete experimental API for addressing this would be a better alternative than removing this guarantee entirely. Would this perhaps be a topic worth adding to the agenda for a future SIG-Multicluster meeting?

@thockin (Member, Author) commented Jul 17, 2023

Hi @mikemorris,

the concern I have about this proposal is coming from the perspective of MCS API consumers, particularly service meshes or third-party cluster management projects

ACK this. The problem I see here is very similar to the "inventory" problem - defining a control-plane API in terms of Kubernetes resources implies a cluster somewhere, which a) is a lot of overhead; b) has all the downsides of a cluster (including SPOF).

We can't assume that there's a single API endpoint for all clusters (could be a regional replica) or that the "localcluster" is the source of truth or that it is not. All of those are viable models with real tradeoffs.

So given a (mostly hypothetical, sadly) project that wants to do something interesting across a clusterset, what API would satisfy it? What API would satisfy them all?

I've mostly seen it going the other way - some "bridge actor" which is aware of the clusterset source-of-truth extracts the requisite information and pre-cooks it into something the (somewhat less hypothetical) project can consume. This is complicated and won't scale to hundreds of such projects, but it does let them all do their own thing, without us defining the constraints of how they work or what they are allowed to know. I am not sure this is ideal in the long term, but I am afraid we don't know enough (yet?) to do better.

How many such projects exist and have their own notion of config? For example:

https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_cluster_add/

https://ranchermanager.docs.rancher.com/v2.5/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters

https://istio.io/latest/docs/setup/install/multicluster/multi-primary/

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 19, 2024
@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 19, 2024
@thockin (Member, Author) commented Feb 20, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 20, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 20, 2024
@thockin (Member, Author) commented May 20, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 20, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 18, 2024
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 17, 2024
@lauralorenz (Contributor)

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 23, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2024
@thockin (Member, Author) commented Dec 25, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 25, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 25, 2025
@thockin (Member, Author) commented Mar 26, 2025

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 26, 2025
Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. sig/multicluster Categorizes an issue or PR as relevant to SIG Multicluster. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.