Add conformance test to verify resolution of conflicting service types #111
Conversation
We can improve it by utilizing the form of 'Eventually' that accepts a function that takes a single Gomega argument which is used to make assertions. 'Eventually' succeeds only if all the assertions in the polled function pass. This is simpler than first polling to find the ServiceImport based on initial checks and then making separate assertions (which typically overlap with the initial checks) to provide better error output. Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
/cc @skitt @MrFreezeex
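For context, a minimal sketch (not the PR's actual code) of the 'Eventually' form described above; the getServiceImport lookup, ctx, namespace, name, and the expected type are illustrative placeholders:

// Eventually polls the function and only succeeds once every g.Expect
// assertion inside it passes on a single attempt, so the failure output
// shows exactly which assertion was still failing when it timed out.
Eventually(func(g Gomega) {
	// getServiceImport is a stand-in for whatever lookup the test actually uses.
	si, err := getServiceImport(ctx, namespace, name)
	g.Expect(err).NotTo(HaveOccurred())
	g.Expect(si.Spec.Type).To(Equal(v1alpha1.ClusterSetIP))
	g.Expect(si.Spec.Ports).NotTo(BeEmpty())
}).Should(Succeed())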
Looks great, thanks for the refactoring and the additional test!
/lgtm
/retest
Force-pushed from e505557 to 52efdcf
/retest
@MrFreezeex Thanks. Sorry, had to make one more tweak.
"report a Conflict condition on the ServiceExport", Label(RequiredLabel), func() { | ||
AddReportEntry(SpecRefReportEntry, "https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api#headlessness") | ||
|
||
t.awaitServiceExportCondition(&clients[1], v1alpha1.ServiceExportConflict, metav1.ConditionTrue) |
BTW shouldn't this be done on both clusters?
The KEP says this for reference:
Conflict resolution policy: If any properties have conflicting values that can not simply be merged, a ServiceExportConflict condition will be set on all ServiceExports for the conflicted service with a description of the conflict.
(and it's also what we are doing elsewhere it seems)
I know, but that's problematic unless every cluster has access to the ServiceExports on every other cluster, or there's a central controller that has access to all of them. Submariner has neither. This is an assumption the KEP makes that it really shouldn't, which has been discussed in the past. Plus, I don't think it really needs to be on every ServiceExport in this case. The important thing is that it's on the ServiceExport in the cluster that is actually in conflict.
Ah yes, OK, I see. Well, to me the conformance tests should be consistent, so for this particular case I think it should test both clusters, because I don't think the other conflict tests are any different.
That being said, I would be supportive of loosening the wording of the KEP to accommodate your need (for instance, saying that implementations are required to set a conflict condition on the "losing" ServiceExport but that it's recommended to set it on every ServiceExport if feasible) and of changing all tests doing similar checks once that's merged.
And as I was saying previously, loosening the wording of the KEP to make your current implementation compliant would be fine by me, at least.
👍 Agree.
Ouch, that's a lot of hoops to jump through 😅, still nicely executed though!
Well, the EndpointSlices were already there so I was able to conveniently use them for port conflict checking 😄
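As a rough illustration only (this is not the PR's test code, and the function is made up), comparing the ports advertised by two clusters' EndpointSlices could look something like the following, where a canonical key is built per slice and compared across clusters:

import (
	"fmt"
	"sort"
	"strings"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	"k8s.io/utils/ptr"
)

// portKey builds an order-independent key from an EndpointSlice's ports so
// the port sets exported by two clusters can be compared for conflicts.
func portKey(eps *discoveryv1.EndpointSlice) string {
	keys := make([]string, 0, len(eps.Ports))
	for _, p := range eps.Ports {
		keys = append(keys, fmt.Sprintf("%s/%s/%d",
			ptr.Deref(p.Name, ""), ptr.Deref(p.Protocol, corev1.ProtocolTCP), ptr.Deref(p.Port, 0)))
	}
	sort.Strings(keys)
	return strings.Join(keys, ",")
}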
Submitted kubernetes/enhancements#5436
@MrFreezeex Based on recent discussions, the consensus is to keep the current language in the KEP with respect to conflict conditions. I realized there's a rather simple way Submariner can handle applying the condition on all exporting clusters, so I changed the test to verify that both clusters apply the condition.
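A minimal sketch of what the updated check might look like, reusing the awaitServiceExportCondition helper from the diff above (the loop itself is illustrative, not necessarily the exact change in the PR):

// Every exporting cluster should eventually report the Conflict condition
// on its own ServiceExport for the conflicted service.
for i := range clients {
	t.awaitServiceExportCondition(&clients[i], v1alpha1.ServiceExportConflict, metav1.ConditionTrue)
}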
I thought we decided to relax the statement? For some implementations, applying the condition on every exporting cluster is not feasible, and not necessary. A conflicted service from other clusters should not stop or impact my existing exported traffic.
@zhiying-lin There was a discussion re: relaxing it on the SIG call, but there was pushback from some folks. I'm not pursuing it any more from the Submariner side, as I found a straightforward way we can handle it, but if you have an implementation facing the same difficulty, you can revive the discussion.
/approve Thanks!
The KEP states that "a ServiceExportConflict condition will be set on all ServiceExports for the conflicted service"; however, this assumes an implementation has a central controller with access to all the constituent ServiceExports, or that each cluster has access to the ServiceExports on every other cluster, which may not be the case. This PR modifies the language to recommend that the condition be set on all ServiceExports but not require it. See the motivation and further discussion here: kubernetes-sigs/mcs-api#111 (comment) Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
Fixes kubernetes-sigs#92 Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
52efdcf
to
01c38bd
Compare
@tpantelis does your last change / the KEP PR in draft mean that you found a way to do this in Submariner and want to proceed with testing both conditions? Happy to approve/add an lgtm here if yes :D EDIT: answered here: #111 (comment)
Thanks 🙏
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: MrFreezeex, skitt, tpantelis
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Also refactored/improved awaitServiceImport in a separate commit. Fixes #92