Add "revision" to mesh config/operator spec #1260
Conversation
I do not want to call this "environment"! It's not indicative and will lead to a host of confusion. Please figure out a better name. Also, since you are adding this, add cluster to the mesh config as well; it is high time we had a proper external API for the cluster, useful in a variety of multi-cluster scenarios. (Right now it's in kube secrets, yuck!)
The cluster stuff is far outside the scope of this PR. I dug up when we last discussed this, and apparently the TOC thought "install" was a better choice. I disagree, but if others think that's the right choice, we can go with it.
The cluster stuff is relatively easy to add, and I think we have a Helm chart variable for that already :( so we are not introducing a new one. It's actually defaulting to Kubernetes! s/env/project/? I think that's the OpenShift term. How will Citadel work? Will there be cross-mesh traffic? I don't even think it can work if each Citadel has a self-signed, randomly generated CA. Auto mTLS will break when traffic transits from one Istio to another.
This is different from what OpenShift is doing. They have multi-tenancy, where the two things are isolated. In the Istio case, that would mean something like two different teams running two different meshes. What we are doing here is not multi-tenancy. There are multiple control planes, but they may not be owned by different teams, so I don't think "project" works; it implies that they are not really associated with each other. I wouldn't expect a workload to canary between projects. There will be cross-mesh traffic. CA functionality works the same way as it does with multiple replicas: they do not all randomly generate a CA; they generate it once and share it between all instances. We have been testing multiple control planes for many months prior to this, and it has proven to work very well.
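To make this concrete: a minimal sketch of two namespaces in one mesh selecting different control planes, assuming the `istio.io/rev` label this PR later settles on (namespace names and revision values are illustrative):

```yaml
# Two namespaces in the same mesh, each selecting a different control
# plane revision. Label name per this PR's outcome; values illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    istio.io/rev: stable    # sidecars here use the "stable" control plane
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-b
  labels:
    istio.io/rev: canary    # sidecars here use the "canary" control plane
```

Under this model, moving a workload between control planes is a matter of relabeling its namespace and restarting its pods.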
Then call it "control plane version" or "control plane ID".
So what is the label? Either controlPlaneReplica? or control.istio.io/replica? env seems very wrong and overloaded; please don't use that. Environments to this point in Istio have meant VMs vs. K8s vs. Mobile, etc. One of those would probably be good enough for now, at least better than environment.
I am not dead set on environment, but I think all the alternatives are far more confusing. Replica means "an exact copy", which is not what this is, and it seems like there would be confusion between replicas of a pod vs. the different control planes.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: sidecar
  labels:
    istio.io/install: canary
spec:
```

is really confusing to me. Environment has a lot of prior art being used in this way: https://en.wikipedia.org/wiki/Deployment_environment
I am 100% opposed to using environment :) These are not different environments. Deployment environments should be different meshes, not different control planes within the same mesh, so I really, really dislike environment as the name for these. istio.io/installation: canary seems fine to me. So does any of the alternatives like install.istio.io/version: canary. I used replica because that's how you referred to this in your text comment (different replicas of the same version), but any of those work for me. My favorite, looking at these, is probably install.istio.io/version: canary; that seems very clear IMO.
Alright, I will shift my preference to … I don't like … So is …
The reason I suggested an "install" prefix was to allow disambiguation. We've talked about having a "canonical revision" label as part of the canonical service work; that one is something longer, like service.istio.io/canonical-revision, so it won't conflict with this, but I wouldn't want people to think they used that one when they really used this one. So being clear as part of the label which revision we're talking about seems useful?
I think the decision between prefix vs. no prefix is mostly about convenience/length vs. ambiguity. I don't have a strong opinion there. But if we do have a prefix, now we need to come up with 2 names, which is an even harder problem 🙂. If we do go with a prefix, I can't come up with a better one than … If no one else has opinions I'll switch this to …
I swapped the current PR to "revision" and also replaced resource_suffix, which is unimplemented and will be replaced by this, so that we can have the API set in 1.5 and not need to deprecate a field, etc. cc @ostromart
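For context, a minimal sketch of how the new field might be set in an operator spec; this assumes the IstioOperator API surface, and the resource name and revision value are illustrative:

```yaml
# Hypothetical operator spec installing a second, named control plane.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: canary-install
spec:
  revision: canary   # the field added by this PR
```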
LGTM
I agree on the "install" concern since these are long-lived, but I couldn't come up with something better. Operator? Control?
I could sort of convince myself that each of these represents an "install" of the Istio control plane, so more of a noun than a verb, but yeah, it's not perfect.
I like 'revision' because it is also the term used by Knative, which follows a similar model for their deployments.

However, 'install' doesn't seem right: there is no reason for each 'revision' to actually be associated with a new install of anything. Some 'revisions' may consist only of settings (injection template, address of the XDS server, etc.) and may actually be implemented by shared installations. In particular, we can support multiple injector 'revisions' with a single webhook. And in some cases the 'revision' may consist of managed services that don't get installed; it just indicates that the namespace is using a particular (remote) XDS server.
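One way the revision label can drive injection, as a hedged sketch: a per-revision injector webhook that only matches namespaces labeled with that revision. The resource and service names are illustrative, not from this PR:

```yaml
# Minimal sketch: an injector webhook serving only namespaces that
# opted in to the "canary" revision. Names are illustrative.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector-canary
webhooks:
  - name: sidecar-injector.istio.io
    admissionReviewVersions: ["v1"]
    sideEffects: None
    # Only inject pods in namespaces labeled with this revision.
    namespaceSelector:
      matchLabels:
        istio.io/rev: canary
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: istiod-canary      # hypothetical per-revision service
        namespace: istio-system
        path: /inject
```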
mesh/v1alpha1/config.proto
```proto
// The name of the revision for the control plane. If this is set, only Istio configuration that is associated
// with this revision (by the `istio.io/rev` label) or configuration that is global (missing
// a `istio.io/rev` label) will be processed.
string revision = 50;
```
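A hedged sketch of what "associated with this revision" means in practice; the resource contents are illustrative:

```yaml
# A configuration resource scoped to one revision via the istio.io/rev
# label, per the field comment above. Spec contents are illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: sidecar
  namespace: default
  labels:
    istio.io/rev: canary   # processed only by the "canary" control plane
spec:
  egress:
    - hosts:
        - "./*"
```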
As discussed, this is not really needed, and it is hard to set, at least with kustomize.
The discovery address will be different for each revision and used at injection, but we already have that.
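A minimal sketch of a per-revision discovery address in mesh config; the service name and port are illustrative assumptions:

```yaml
# Hypothetical per-revision proxy config fragment: the canary injector
# points workloads at the canary control plane's discovery service.
defaultConfig:
  discoveryAddress: istiod-canary.istio-system.svc:15012
```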
Approved for the operator part.
For mesh config, see comment; I think we can drop it.
/hold
dropping mesh config
/cherrypick release-1.5
@howardjohn: #1260 failed to apply on top of branch "release-1.5":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
(cherry picked from commit 7b905a0)
See design doc for more context:
https://docs.google.com/document/d/1d1z5PC8wvh9QiR3NesAaPm6w2HnE4-KIfjB7GB-_Wf8/edit#heading=h.qlmnvhzb2ib2
The intent here is to get this merged into 1.5 so we can support forward compatibility for when the feature is fully completed in 1.6.