
Add conformance test to verify resolution of conflicting service types #111

Merged: 2 commits into kubernetes-sigs:master on Jul 15, 2025

Conversation

tpantelis (Contributor):

Also refactored/improved awaitServiceImport in a separate commit.

Fixes #92

We can improve it by using the form of 'Eventually' that accepts a function taking a single Gomega argument, which is used to make assertions. 'Eventually' succeeds only if all the assertions in the polled function pass. This is simpler than first polling to find the ServiceImport based on initial checks and then making separate assertions (to provide better error output) that typically overlap with the initial checks.

Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
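
For illustration, a minimal sketch of that 'Eventually' pattern, assuming the mcs-api clientset; the function name, the lookup details, and the expected type below are illustrative, not the PR's actual code:

```go
package conformance_sketch

import (
	"context"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
	mcsclient "sigs.k8s.io/mcs-api/pkg/client/clientset/versioned"
)

// awaitServiceImportSketch polls until the ServiceImport exists and all
// assertions pass. Eventually succeeds only when every assertion made
// through the passed-in Gomega instance 'g' passes in a single iteration,
// so the lookup and the error-reporting assertions live in one place.
func awaitServiceImportSketch(client mcsclient.Interface, namespace, name string) {
	Eventually(func(g Gomega) {
		si, err := client.MulticlusterV1alpha1().ServiceImports(namespace).Get(
			context.TODO(), name, metav1.GetOptions{})
		g.Expect(err).NotTo(HaveOccurred())                     // a failure here just retries the poll
		g.Expect(si.Spec.Type).To(Equal(v1alpha1.ClusterSetIP)) // asserted on every iteration
	}).Should(Succeed())
}
```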
@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Jun 23, 2025
@k8s-ci-robot added the size/L label (denotes a PR that changes 100-499 lines, ignoring generated files) on Jun 23, 2025
@tpantelis (Contributor Author):

/cc @skitt @MrFreezeex

@k8s-ci-robot requested review from MrFreezeex and skitt on June 23, 2025 14:47
@MrFreezeex (Member) left a comment:

Looks great, thanks for the refactoring and the additional test!
/lgtm

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Jun 23, 2025
@tpantelis (Contributor Author):

/retest

@tpantelis force-pushed the svc_type_conflict branch from e505557 to 52efdcf on June 23, 2025 16:31
@k8s-ci-robot removed the lgtm label on Jun 23, 2025
@tpantelis (Contributor Author):

/retest

@tpantelis (Contributor Author) commented Jun 23, 2025:

> Looks great, thanks for the refactoring and the additional test! /lgtm

@MrFreezeex Thanks. Sorry, I had to make one more tweak.

"report a Conflict condition on the ServiceExport", Label(RequiredLabel), func() {
AddReportEntry(SpecRefReportEntry, "https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api#headlessness")

t.awaitServiceExportCondition(&clients[1], v1alpha1.ServiceExportConflict, metav1.ConditionTrue)
@MrFreezeex (Member):

BTW shouldn't this be done on both clusters?

@MrFreezeex (Member):

The KEP says this for reference:

Conflict resolution policy: If any properties have conflicting values that can not simply be merged, a ServiceExportConflict condition will be set on all ServiceExports for the conflicted service with a description of the conflict.

@MrFreezeex (Member):

(and it's also what we are doing elsewhere it seems)
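
For reference, a hypothetical sketch of how an implementation might record the ServiceExportConflict condition the KEP quote above describes; the function name, Reason, and Message are assumptions for illustration, not mandated by the KEP or taken from any implementation:

```go
package conformance_sketch

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
)

// setConflictCondition records the conflict on a ServiceExport's status.
// The Reason and Message values below are illustrative only.
func setConflictCondition(se *v1alpha1.ServiceExport) {
	meta.SetStatusCondition(&se.Status.Conditions, metav1.Condition{
		Type:    string(v1alpha1.ServiceExportConflict),
		Status:  metav1.ConditionTrue,
		Reason:  "ConflictingServiceType",                               // hypothetical
		Message: "exported service properties conflict with other clusters", // hypothetical
	})
}
```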

@tpantelis (Contributor Author):

I know, but that's problematic unless every cluster has access to the ServiceExports on every other cluster, or there's a central controller that has access to all of them. Submariner has neither. This is an assumption the KEP makes that it really shouldn't, and it's been discussed in the past. Plus, I don't think the condition really needs to be on every ServiceExport in this case. The important thing is that it's on the ServiceExport in the cluster that is actually in conflict.

@MrFreezeex (Member) commented Jun 23, 2025:

Ah yes, ok, I see. Well, to me the conformance suite should have consistent tests, so just for this matter I think it should test both clusters, because I don't think the other conflict tests are any different.

That being said, I would be supportive of loosening the tone of the KEP to accommodate your need (for instance, saying that implementations are required to set a conflict condition on the "losing" ServiceExport, but that it's recommended to set it on every ServiceExport where feasible) and of changing all tests doing similar checks once that's merged.

@tpantelis (Contributor Author):

> And as I was saying previously, loosening the tone of the KEP to make your current implementation compliant would be fine to me at least.

👍 Agree.

> Ouch, this is a lot of hoops to do that 😅, still nicely executed though!

Well, the EndpointSlices were already there, so I was able to conveniently use them for port conflict checking 😄
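
A hedged sketch of the kind of port collection that remark describes; portsFromSlices and its shape are hypothetical, not the PR's actual code:

```go
package conformance_sketch

import (
	discoveryv1 "k8s.io/api/discovery/v1"
)

// portsFromSlices gathers the named ports exposed by a cluster's exported
// EndpointSlices so they can be compared against other clusters' ports
// when checking for conflicts. Illustrative only.
func portsFromSlices(slices []discoveryv1.EndpointSlice) map[string]int32 {
	ports := map[string]int32{}
	for _, slice := range slices {
		for _, p := range slice.Ports {
			if p.Name != nil && p.Port != nil {
				ports[*p.Name] = *p.Port // later diffed against the other clusters' maps
			}
		}
	}
	return ports
}
```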


@tpantelis (Contributor Author):

@MrFreezeex Based on recent discussions, the consensus is to keep the current language in the KEP with respect to conflict conditions. I realized there's a rather simple way Submariner can handle applying the condition on all exporting clusters, so I changed the test to verify that both clusters apply the condition.
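
A sketch of what that both-clusters check might look like, reusing the awaitServiceExportCondition helper from the excerpt above; the loop itself is illustrative:

```go
// Illustrative: verify every exporting cluster reports the Conflict
// condition on its ServiceExport, not just the cluster in conflict.
for i := range clients {
	t.awaitServiceExportCondition(&clients[i], v1alpha1.ServiceExportConflict, metav1.ConditionTrue)
}
```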

zhiying-lin (Contributor):

I thought we had decided to relax the statement? For some implementations, applying the condition on every exporting cluster is not feasible, and not necessary. A conflicted service from other clusters should not stop or impact my existing exporting traffic.

@tpantelis (Contributor Author):

@zhiying-lin There was a discussion re: relaxing it on the SIG call, but there was pushback from some folks. I'm not pursuing it any more from the Submariner side, as I found a straightforward way we can handle it, but if you have an implementation facing the same difficulty, you can revive the discussion.

@skitt (Member) commented Jun 25, 2025:

/approve

Thanks!

@k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jun 25, 2025
tpantelis added a commit to tpantelis/k8s-enhancements referencing this pull request on Jun 25, 2025:

The KEP states that "a ServiceExportConflict condition will be set on all ServiceExports for the conflicted service"; however, this assumes an implementation has a central controller with access to all the constituent ServiceExports, or that each cluster has access to the ServiceExports on every other cluster, which may not be the case.

This PR modifies the language to recommend the condition be set on all
ServiceExports but not require it.

See the motivation and further discussion here:
kubernetes-sigs/mcs-api#111 (comment)

Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
Fixes kubernetes-sigs#92

Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
@tpantelis force-pushed the svc_type_conflict branch from 52efdcf to 01c38bd on July 15, 2025 17:26
@MrFreezeex (Member) commented Jul 15, 2025:

@tpantelis does your last change / the KEP PR in draft mean that you found a way to do this in Submariner and want to proceed with testing both conditions?

Happy to approve/add a lgtm here if yes :D

EDIT: answered here: #111 (comment)

@MrFreezeex (Member) left a comment:

Thanks 🙏
/lgtm

@k8s-ci-robot added the lgtm label on Jul 15, 2025
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: MrFreezeex, skitt, tpantelis

The full list of commands accepted by this bot can be found here.

The pull request process is described here


Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot merged commit 20de0eb into kubernetes-sigs:master on Jul 15, 2025
5 of 6 checks passed