[Kubernetes] stack services filters #1023
Conversation
@@ -69,3 +69,8 @@ func (s *Factory) ReplicationControllers() corev1.ReplicationControllerInterface
func (s *Factory) ReplicaSets() typesappsv1beta2.ReplicaSetInterface {
	return s.appsClientSet.ReplicaSets(s.namespace)
}

// DaemonSets return a client for kubernetes daemon sets
returns
var presentLabels []string
valuedLabels := map[string][]string{}

for _, rawLabel := range filters.Get("label") {
Extract this to a function?
presentLabels, valuedLabels := parseLabels(filters.Get("label"))
-	servicesList, err := client.Services().List(metav1.ListOptions{LabelSelector: labels.SelectorForStack(opts.Namespace)})
+	servicesList, err := client.Services().List(metav1.ListOptions{LabelSelector: labelSelector})
You should extract all the labelSelector creation logic and add unit tests.
@@ -161,6 +162,45 @@ func replicasToServices(replicas *appsv1beta2.ReplicaSetList, services *apiv1.Se
		Replicas: fmt.Sprintf("%d/%d", r.Status.AvailableReplicas, r.Status.Replicas),
	}
}
for i, d := range daemons.Items {
I think we could factorize with the code above, and extract it to a function taking the labels and the containers as parameters (it seems these are the only dependencies on replicaSets and daemonSets).
(force-pushed 9b348d8 to 712390f)
@simonferquel could you elaborate a bit? I think name filters work on full matches (but there was a bug at some point that incorrectly did partial matches: #72)

$ docker stack ls
NAME SERVICES
foobar 1
$ docker stack ps foo
nothing found in stack: foo
$ docker stack ps foobar
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
u542iioaug1g foobar_web.1 nginx:alpine linuxkit-025000000001 Running Running 3 days ago
$ docker service ps foobar
no such service: foobar
$ docker service ps foobar_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
u542iioaug1g foobar_web.1 nginx:alpine linuxkit-025000000001 Running Running 3 days ago
@thaJeztah the problem is not on the stack name, but when you use the
(force-pushed 470115b to ac5453c)
}
// name filters is done client side for matching partials

for i := len(result) - 1; i >= 0; i-- {
This `for` loop is hard to read 😓
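One readable alternative to the reverse-index deletion loop is to build a new slice with append. A minimal sketch under assumed names (`filterByName` and the prefix-matching rule are illustrative, not the PR's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// filterByName keeps only the entries whose stack-prefixed name matches one
// of the name filters. Appending to a fresh slice avoids mutating the input
// while iterating it backwards.
func filterByName(names []string, stackName string, filters []string) []string {
	if len(filters) == 0 {
		return names
	}
	var kept []string
	for _, n := range names {
		fullName := stackName + "_" + n
		for _, f := range filters {
			if strings.HasPrefix(fullName, f) {
				kept = append(kept, n)
				break
			}
		}
	}
	return kept
}

func main() {
	// "foo_web" matches the "foo_w" filter prefix; "foo_db" does not.
	fmt.Println(filterByName([]string{"web", "db"}, "foo", []string{"foo_w"}))
}
```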
result := make([]swarm.Service, len(replicas.Items))
infos := make(map[string]formatter.ServiceListInfo, len(replicas.Items))
func replicasToServices(replicas *appsv1beta2.ReplicaSetList, daemons *appsv1beta2.DaemonSetList, services *apiv1.ServiceList) ([]swarm.Service, map[string]formatter.ServiceListInfo, error) {
result := make([]swarm.Service, len(replicas.Items)+len(daemons.Items))
put `len(replicas.Items)+len(daemons.Items)` in a var
cli/command/stack/kubernetes/ps.go
Outdated
@@ -55,7 +65,8 @@ func RunPS(dockerCli *KubeCli, options options.PS) error {
	for i, pod := range pods {
		tasks[i] = podToTask(pod)
	}
-	return print(dockerCli, namespace, tasks, pods, nodeResolver, !options.NoTrunc, options.Quiet, format)
+	return print(dockerCli, stackName, tasks, pods, nodeResolver, !options.NoTrunc, options.Quiet, format)
This `print` function takes way too many arguments; we should at least pass it `options` and remove the last 3 (by putting `if len(format) …` in `print`). Related to that, I would tend to prefer `if format == ""`: as we check if the string is empty, it's easier to read.
if len(v) == 1 {
	result = append(result, fmt.Sprintf("%s=%s", k, v[0]))
} else {
	// make result more testable by sorting values
We shouldn't make a change "for the result to be more testable". Either we want to sort (and we always sort), or when testing, we check the existence of the element in the slice (and thus, not depending on the order)
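The determinism point matters beyond tests in Go: map iteration order is randomized, so an unsorted selector string can differ between runs. A minimal sketch of "always sort" (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// sortedPairs renders map entries in a deterministic order. Go randomizes
// map iteration, so sorting gives stable output in production as well as
// in tests -- the point made in the review above.
func sortedPairs(m map[string]string) string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var parts []string
	for _, k := range keys {
		parts = append(parts, k+"="+m[k])
	}
	return strings.Join(parts, ",")
}

func main() {
	// Same output on every run, regardless of map iteration order.
	fmt.Println(sortedPairs(map[string]string{"b": "2", "a": "1"}))
}
```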
func generateLabelSelector(f filters.Args, stackName string) string {
	names := f.Get("name")

	// make result more testable by sorting names
Same here
(force-pushed ac5453c to 1d935a2)
@vdemeester I believe I have addressed your comments, would you please take another look?
(force-pushed 1d935a2 to 646446e)
looks like this needs a rebase as well
} else {
	result = []apiv1.Pod{}
	for _, n := range nodes {
		podsList, err := pods.List(metav1.ListOptions{LabelSelector: labelSelector, FieldSelector: "spec.nodeName=" + n})
This might result in tons of `pods.List()` calls. Can we not do something like `spec.nodeName in (node1,node2,...)`? If not, we should probably just do one `pods.List()` call and post-filter the results (or just remove the `node` arg altogether).
I have simplified the code and done the node filtering after querying for a full `pods.List()`; this is much clearer in the end. Thanks for suggesting this!
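The approach the review converged on can be sketched as a single list followed by client-side filtering. Everything below (`pod`, `filterPodsByNode`) is an illustrative stand-in for the real `apiv1.Pod` types, not the PR's exact code:

```go
package main

import "fmt"

// pod is a minimal stand-in for apiv1.Pod: only the fields needed here.
type pod struct {
	Name     string
	NodeName string
}

// filterPodsByNode does the node filtering client side after a single
// List call, instead of issuing one List per node. An empty node filter
// keeps every pod.
func filterPodsByNode(pods []pod, nodes []string) []pod {
	if len(nodes) == 0 {
		return pods
	}
	wanted := make(map[string]bool, len(nodes))
	for _, n := range nodes {
		wanted[n] = true
	}
	var result []pod
	for _, p := range pods {
		if wanted[p.NodeName] {
			result = append(result, p)
		}
	}
	return result
}

func main() {
	pods := []pod{{"a", "node1"}, {"b", "node2"}, {"c", "node1"}}
	fmt.Println(len(filterPodsByNode(pods, []string{"node1"})))
}
```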
	return result
}

func parseLabelFilters(rawFilters []string) ([]string, map[string][]string) {
Is there any code in the Kubernetes CLI we could reuse here?
What do you mean by Kubernetes CLI? kubectl?
Yeah, I'm wondering if there are any public functions in the kubernetes/kubernetes repo we could use here.
I think this code transforms label filters (with swarm syntax) to their Kubernetes counterpart. I don't think there is any function in kubernetes/kubernetes for that, as the kubectl user will write their filters directly in Kubernetes syntax, which is resolved server side IIRC, so kubectl doesn't have to parse/interpret it.
	}
	result = podsList.Items
} else {
	result = []apiv1.Pod{}
is this actually needed? I suspect `var result []apiv1.Pod` should be enough?
func fetchPods(stackName string, pods corev1.PodInterface, f filters.Args) ([]apiv1.Pod, error) {
	services := f.Get("service")
	nodes := f.Get("node")
	// for existing script compatibility, support either <servicename> or <stackname>_<servicename> format
What is the "existing script functionality"? Is that only in experimental versions? If so, we're ok to break old behavior, and we can remove this hack.
"existing script functionality" here means existing user scripts that target Swarm
// name filters is done client side for matching partials

for i := len(result) - 1; i >= 0; i-- {
	fullName := fmt.Sprintf("%s_%s", stackName, result[i].Name)
This could be `fullName := stackNamePrefix + result[i].Name`
cli/command/stack/kubernetes/ps.go
Outdated
func print(dockerCli command.Cli, options options.PS, namespace string, client corev1.NodesGetter, pods []apiv1.Pod) error {
My IDE was complaining that `print` collides with the builtin `print()` function; perhaps rename to `printTasks()`
cli/command/stack/kubernetes/ps.go
Outdated
// Fetch pods
if _, err := stacks.Get(namespace); err != nil {
	return fmt.Errorf("nothing found in stack: %s", namespace)
filters := options.Filter.Value()
Might want to move this more to the top; i.e. validate filters/input values before initialising the clients
cli/command/stack/kubernetes/ps.go
Outdated
	for i, task := range tasks {
-		nodeValue, err := nodeResolver(pods[i].Spec.NodeName)
+		nodeValue, err := resolveNode(options.NoResolve, client.Nodes(), pods[i].Spec.NodeName)
should `client.Nodes()` be moved outside of the loop? Also, the order of arguments is a bit odd; perhaps put `name` first.
cli/command/stack/kubernetes/ps.go
Outdated
if err != nil {
	return "", err
}
if len(n.Items) != 1 {
Could more than 1 be returned? (i.e., can node names be ambiguous?)
In the docs there is "Kubernetes will create a node object internally (the representation), and validate the node by health checking based on the metadata.name field (we assume metadata.name can be resolved)." and the example shows:
"metadata": {
	"name": "10.240.79.157",
so I would assume if the name is not unique the duplicate node attempting to join the cluster will fail.
Sorry, missed your comment: so should we skip this validation (as k8s already handles this), or keep it?
for _, n := range names {
	if strings.HasPrefix(n, stackName+"_") {
		// also accepts with unprefixed service name (for compat with existing swarm scripts where service names are prefixed by stack names)
// RunServices is the kubernetes implementation of docker stack services
func RunServices(dockerCli *KubeCli, opts options.Services) error {
	// Initialize clients
Comment is a bit superfluous
replicasList, err := replicas.List(metav1.ListOptions{LabelSelector: labels.SelectorForStack(opts.Namespace)})
if err != nil {
filters := opts.Filter.Value()
Best move this to the top; no need to initialise clients if filters are invalid
(force-pushed 646446e to b759869)
@thaJeztah @wsong I believe I have incorporated your suggestions, thank you both for your very helpful reviews. Would you care to take another look? Cheers!
Oh by the way, here is the test suite I have been using as regression testing.
And similar tests for
for _, pod := range podsList.Items {
	if filterPod(pod, nodes) &&
		// name filter is done client side for matching partials
		f.FuzzyMatch("name", stackNamePrefix+pod.Name) {
So does this mean that we require the name field to be like `--filter name=<stack name prefix> + <pod name>`?
This is a `FuzzyMatch`, meaning with a `stackNamePrefix` of `dtcccccc` and a `pod.Name` of `backend-695899db65-tptbg` all these `--filter` will match:
- `name=dtc`
- `name=dtcccccc`
- `name=dtcccccc-backend`
- `name=dtcccccc-backend-695899db65-tptbg`
However this one won't:
- `name=backend`
This seems to be consistent with the behavior with `--orchestrator=swarm`, however @simonferquel initially put `Match` here thus allowing the following filters to match:
- `name=backend`
- `name=ccc`
- etc…
I'm not sure which solution we want; the specification document was a bit vague and gave no example of expected outputs:
- Best effort mapping of --filter flag
- Best effort as some filters will not map perfectly
Wait, aren't you just concatenating the `stackNamePrefix` and `pod.Name` with `stackNamePrefix+pod.Name`? You said that `dtcccccc-backend` should match, implying there should be a hyphen between `stackNamePrefix` and `pod.Name`, but that doesn't seem to be the case.
`stackNamePrefix` actually ends with a hyphen, it's defined a few lines before as `stackNamePrefix := stackName + "_"`
Not sure if this is intentional, but that's an underscore, not a hyphen.
Sorry for the confusion, I made an error when writing the examples in #1023 (comment): the hyphens should have been underscores, i.e. with a `stackNamePrefix` of `dtcccccc` and a `pod.Name` of `backend-695899db65-tptbg` all these `--filter` will match:
- `name=dtc`
- `name=dtcccccc`
- `name=dtcccccc_backend`
- `name=dtcccccc_backend-695899db65-tptbg`
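The prefix semantics discussed here can be sketched as a small helper. This mimics the `FuzzyMatch` behavior as described in the thread (exact value or prefix of the full name); it is an illustrative sketch, not the docker/cli library code:

```go
package main

import (
	"fmt"
	"strings"
)

// fuzzyMatch reports whether any filter value matches the full name, either
// exactly or as a prefix. No filter values means everything matches.
func fuzzyMatch(name string, filterValues []string) bool {
	if len(filterValues) == 0 {
		return true
	}
	for _, v := range filterValues {
		if v == name || strings.HasPrefix(name, v) {
			return true
		}
	}
	return false
}

func main() {
	fullName := "dtcccccc_backend-695899db65-tptbg"
	fmt.Println(fuzzyMatch(fullName, []string{"dtc"}))     // prefix of the full name: matches
	fmt.Println(fuzzyMatch(fullName, []string{"backend"})) // not a prefix: no match
}
```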
This needs a rebase
(force-pushed b759869 to 1e6ea25)
Done!
@vdemeester can you take another look at this PR? 👋
cli/command/stack/kubernetes/ps.go
Outdated
	return "", fmt.Errorf("could not find node '%s'", name)
}
return string(n.Items[0].UID), nil
n, err := nodes.List(metav1.ListOptions{
We're going to call this function for every single task, which is going to be super slow with large amounts of tasks. Is there any way we could just do a single `nodes.List` call?
Ah yes, well spotted… I've changed the code to request the nodes only once and loop on the tasks afterwards.
cli/command/stack/kubernetes/ps.go
Outdated
	return "", err
}
if len(n.Items) != 1 {
	return "", fmt.Errorf("could not find node '%s'", name)
If we keep this, consider changing this to something that mentions we found nodes, but that the name is ambiguous, e.g.:
fmt.Errorf("node name %q is ambiguous: returned %d nodes", name, len(n.Items))
Kubernetes manages everything by name so there is no chance for two nodes to have the same name. Also the code has changed now: the nodes are fetched once into a slice which is then used to resolve the name with a clearer plain `==` comparison.
(force-pushed de5009b to 249a3f3)
I squashed all commits. I also updated
if err != nil {
	return nil, nil, err
}
result[i+len(replicas.Items)] = *s
Was there a reason for not doing `result = append(result, *s)`?
	result = append(result, fmt.Sprintf("%s=%s", k, v[0]))
} else {
	sort.Strings(v)
	result = append(result, fmt.Sprintf("%s in (%s)", k, strings.Join(v, ",")))
is the `in (x)` operator less efficient than `x=y`, or will the selector be smart enough to optimise if there's a single value? If so, we could simplify to:
for k, v := range valuedLabels {
	sort.Strings(v)
	result = append(result, fmt.Sprintf("%s in (%s)", k, strings.Join(v, ",")))
}
I honestly don't know if it's really less efficient (we are talking about 1 equality vs a list of 1...)
var presentLabels []string
valuedLabels := map[string][]string{}
for _, rawLabel := range rawFilters {
	equalIndex := strings.Index(rawLabel, "=")
This can be simplified by using `strings.SplitN()`:
func parseLabelFilters(rawFilters []string) ([]string, map[string][]string) {
var presentLabels []string
valuedLabels := map[string][]string{}
for _, rawLabel := range rawFilters {
v := strings.SplitN(rawLabel, "=", 2)
key := v[0]
if len(v) > 1 {
valuedLabels[key] = append(valuedLabels[key], v[1])
continue
}
presentLabels = append(presentLabels, key)
}
return presentLabels, valuedLabels
}
However, filters can appear both as `valuedLabel` and as `presentLabel`, is that expected? See https://play.golang.org/p/EuJk1d7b5Ts
If that's not the intent, we could treat `presentLabels` and `valuedLabels` as identical, a `presentLabel` just being a `valuedLabel` without value, and change `generateValuedLabelsSelector()` to `generateSelector()`, something like:
func generateSelector(labels map[string][]string) []string {
var result []string
for k, v := range labels {
switch len(v) {
case 0:
result = append(result, k)
case 1:
result = append(result, fmt.Sprintf("%s=%s", k, v[0]))
default:
sort.Strings(v)
result = append(result, fmt.Sprintf("%s in (%s)", k, strings.Join(v, ",")))
}
}
return result
}
> However, filters can appear both as `valuedLabel` and as `presentLabel`, is that expected? see https://play.golang.org/p/EuJk1d7b5Ts

Good point, on docker/swarm it's basically an AND, i.e. `--filter=label=foo --filter=label=foo=bar` is the same as `--filter=label=foo=bar`. I think we should do/keep the same, and thus `presentLabels` and `valuedLabels` shouldn't have anything in common 👼 So yeah, we should treat them as identical and update/fix `generateSelector` instead 👍
		return nil, nil, nil, err
	}
} else {
	replicasList = &appsv1beta2.ReplicaSetList{}
Is this needed (or done already by `var replicasList *appsv1beta2.ReplicaSetList`?)
		return nil, nil, nil, err
	}
} else {
	daemonsList = &appsv1beta2.DaemonSetList{}
Is this needed (or done already by `var daemonsList *appsv1beta2.DaemonSetList`?)
Vendor check is somehow failing;
Not sure what goes wrong with the vendoring, the PR does not change any file in
I think you need to re-run `vndr`. You removed some imports in the code, which means some packages in vendor/ are no longer necessary.
return "", fmt.Errorf("could not find node '%s'", name)
for _, node := range nodes.Items {
	if node.Name == name {
		return string(node.UID), nil
Is the `node.UID` really what we want? In Kube, as you've already pointed out, names are the IDs, so there isn't really an analogue to the ID-vs-hostname split of Swarmkit (in Swarmkit, you can have two nodes with the same name, but that's not possible in Kube). As such, I think it'd make more sense to just ignore `noResolve` for Kube instead of using this UID which isn't used anywhere else in the system.
It is more about having the same UX as Swarm than being really useful... Sure we can remove the noResolve flag and simplify the code, but in that case we may break some Swarm scripts which use this flag? From the start we chose to stick to Swarm UX, even if it is not very relevant sometimes. But maybe we are wrong and we should simplify where we can, WDYT?
Well, we can keep the flag, but I think when we translate Kube nodes to Swarm nodes, the `ID` field and the `name` field should contain the same value (i.e. the name of the node in Kube). There's no reason to expose the `node.UID` field in `docker node ls`; users will never need to know the value of `node.UID` for the purposes of `docker node ls`. It's not important that the `ID` field contains an opaque, alphanumeric string; what's important is that it's unique, and in Kube the name is unique. The same goes for everywhere else we use `UID` in this PR.
(force-pushed b7285d4 to af9f0d8)
),
expectedSelectorParts: []string{
	"com.docker.stack.namespace=test",
	"label1 in (test,test2)",
If we want to have a similar behavior between `swarm` and `k8s` on the filter, this test case is wrong:
- `--filter=label=foo=bar --filter=label=foo=baz` translates to `foo` being equal to both `bar` and `baz` (which can't happen, but that's another story)
- `foo in (bar, baz)` in k8s translates to `foo` being equal to `bar` or `baz`
Ok good point. What do you suggest to fix this? As it seems the AND is impossible in Swarm, maybe we shouldn't be able to write it? My point is that the OR makes more sense, so maybe we should error out on the Swarm side and keep this valid for Kubernetes?
We should have a look (separate from this PR) at improving the filter design, and have syntax for "and", "or", but also (regex) matching, negative matches and so on. (Was discussing that with @vdemeester yesterday, as it keeps coming up)
My point is that the OR makes more sense
Depends how you look at it; the "filter" was designed to reduce results until you have a match, so the more filters you add, the less results you get.
For name, yes, I agree, there's likely not much use, but for labels (e.g.) definitely (show me all objects having both label-x and label-y). On the other hand, in what cases would I be looking at a stack named "foo" or "bar"?
(force-pushed af9f0d8 to 20657db)
Here is an update addressing the refactoring suggested (all comments to which I added a 👍). Thanks!
Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>
(force-pushed 20657db to 1f7aa1c)
(force-pushed d0fbb8e to 5dff68f)
@thaJeztah @vdemeester as suggested I added a commit to match Swarm and Kubernetes behavior when combining filters so we can move forward here.
`docker stack services --filter=label=foo=bar --filter=label=foo=baz my-stack` with Swarm gets handled as filter on (a label named foo with value bar) AND (a label named foo with value baz). This obviously yields an empty result set every time, but if and how this should be changed is out of scope here, so simply align Kubernetes with Swarm for now.

Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>
(force-pushed 5dff68f to 297866e)
LGTM, thanks everyone!
LGTM 🐯
This PR implements stack services filtering, to better align the `docker stack services` experience. This also fixes an issue where global services were missing from the list in Kubernetes mode.
To do that, I leveraged Kubernetes LabelSelectors (so everything is done server side), and translated the existing supported filters as closely as possible.
One behavior inconsistency is that k8s label selectors only support exact matches, while in Swarm mode name filters behave as a StartsWith comparison.
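The server-side translation described above can be sketched end to end. Per the final commit, repeated values for the same label key are emitted as separate comma-joined requirements, which Kubernetes ANDs, matching Swarm's behavior. The helper name and shape are illustrative, not the PR's exact code; `com.docker.stack.namespace` is the label key docker stacks use on Kubernetes:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// generateLabelSelector translates swarm-style label filters into a single
// Kubernetes label selector string. Comma-separated requirements are ANDed
// by Kubernetes, so foo=bar,foo=baz can never match -- the same outcome
// Swarm produces for duplicate label filters.
func generateLabelSelector(stackName string, labels map[string][]string) string {
	var parts []string
	for key, values := range labels {
		if len(values) == 0 {
			// presence-only filter: --filter label=key
			parts = append(parts, key)
			continue
		}
		for _, v := range values {
			parts = append(parts, key+"="+v)
		}
	}
	sort.Strings(parts) // map iteration order is random; sort for stable output
	parts = append([]string{"com.docker.stack.namespace=" + stackName}, parts...)
	return strings.Join(parts, ",")
}

func main() {
	sel := generateLabelSelector("test", map[string][]string{"foo": {"bar", "baz"}})
	fmt.Println(sel)
}
```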