kube-proxy component config graduation to beta #1104

Merged Sep 20, 2019 (1 commit)

Conversation

rosti
Contributor

@rosti rosti commented Jun 13, 2019

This change proposes a process and desired goals for graduating kube-proxy's component configuration to beta.

Note: This is set to implementable as work can start on it as soon as it merges.
ToDo: Tracking issue.

/assign @thockin
/cc @luxas @mtaufen @sttts @vllry

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/network Categorizes an issue or PR as relevant to SIG Network. labels Jun 13, 2019
@sttts
Contributor

sttts commented Jun 13, 2019

/assign @squeed

@vllry
Contributor

vllry commented Jun 16, 2019

Thanks for posting. I can see the rationale behind all the struct split choices (mode, platform, instance/shared), but put together it seems complex for users to configure. Do you have adoption feedback from a comparable config breakdown?

@rosti
Contributor Author

rosti commented Jun 17, 2019

Thanks for posting. I can see the rationale behind all the struct split choices (mode, platform, instance/shared), but put together it seems complex for users to configure. Do you have adoption feedback from a comparable config breakdown?

I have some feedback from kubeadm, which underwent a similar split on its path to beta. We mostly saw some confusion among existing users, who were accustomed to a large flat config (much like kube-proxy's today). However, these cases were rare and usually easy to fix.
Also, in kubeadm, we provided a config migration tool and good documentation.

@neolit123
Member

/assign

Member

@thockin thockin left a comment


I can't follow the abstract names here - can we get specific?

Can you please make sure all comments that are going to be addressed are addressed and that all conversations are marked as "resolved"? It's not clear if there is pending work to be done.

Lastly: We've talked about breaking kube-proxy up into either platform-specific binaries or per-mode binaries. If the config echoes that (i.e. doesn't try to de-dup "common" fields), it might feel "redundant" but would actually make for easier migration...

(Resolved review threads on keps/sig-network/20190613-kube-proxy-component-config.md)
#### Example

```yaml
kind: KubeProxyInstanceConfiguration
```
Member


Again, this is pretty abstract - can we talk in concrete terms? Which SPECIFIC fields do we see as instance?

@mtaufen at some point we argued about leaving instance-specific configs as flags, but I forget where we eventually settled?

Contributor Author


I have updated it with concrete fields (HostnameOverride and the *BindAddress fields). I also removed the mode-specific stuff from there, as it is not applicable to the kube-proxy case and was included more for academic accuracy.
As for the instance-specific split, I am not fully sure about it. We can certainly allow instance-specific (and probably shared) fields to be overridden from the command line. It's also certain that some people may prefer the config file option for instance specifics. In that case a split is nice to have, as the shared config won't need to be patched per node (eliminating a possible source of errors).
However, this poses its own set of headaches - such as "What do we do if we get a couple of InstanceConfigurations from different files?".
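
As a hedged sketch only (the `v1alpha2` version, the `KubeProxySharedConfiguration` kind name, and the shared field shown below are hypothetical illustrations, not the KEP's actual schema), a split along the lines discussed above could look like:

```yaml
# Hypothetical instance-specific config: fields that differ per node
apiVersion: kubeproxy.config.k8s.io/v1alpha2  # placeholder version
kind: KubeProxyInstanceConfiguration
hostnameOverride: node-1.example.com
bindAddress: 10.0.0.15
---
# Hypothetical shared config: fields identical across the cluster,
# distributed once and never patched per node
apiVersion: kubeproxy.config.k8s.io/v1alpha2  # placeholder version
kind: KubeProxySharedConfiguration
clusterCIDR: 10.244.0.0/16
```

With such a split, overriding an instance field means editing (or passing a flag for) only the small instance document, while the shared document stays byte-identical on every node.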

Contributor


@thockin as I remember it, we were both OK with it as a stopgap, but it's not really a long-term solution if we want the interface to the component to be uniform (in terms of versioning and schema definition).

Contributor


"What to do if we get a couple of InstanceConfigurations from different files?"

Is that likely to happen? Seems like something we should just reject.

@rosti rosti force-pushed the kube-proxy-config branch 2 times, most recently from 2ab5abe to df86ad8 Compare July 12, 2019 15:17
@thockin
Member

thockin commented Jul 12, 2019 via email

@rosti
Contributor Author

rosti commented Jul 18, 2019

@thockin @mtaufen @vllry @neolit123 I would appreciate another round of review, thank you!

```yaml
bindAddress: 1.2.3.4
---
apiVersion: kubeproxy.config.k8s.io
kind: SharedConfiguration
```
Member


Just a Q: have we established naming conventions for API types? (This possibly should be done in WG component-base.)
E.g. kubeadm has InitConfiguration (no component name prefix), yet the kubelet has KubeletConfiguration (the component name prefixes the config). kube-proxy v1alpha1 has KubeProxyConfiguration.

Contributor Author


I think that kinds should not have a component prefix, as the component is already given in the API group.
The only exception should be cases where a component config API group has a single kind in it. There it may be more practical to name the kind KubeProxyConfiguration rather than just Configuration.
However, this is just a personal preference of mine, and I am almost certain it isn't codified anywhere.

Contributor

@mtaufen mtaufen Jul 22, 2019


The prefix is there for historical reasons - all componentconfig used to be in the same API group.

I did use {Component}Configuration as the convention in the Versioned Component Configuration Files doc: https://docs.google.com/document/d/1FdaEJUEh091qf5B98HM6_8MS764iXrxxigNIdwHYW9c/edit# because it's easy to read and was the convention at that point.

But it doesn't really matter. The package and/or API group information is typically close at hand.

Config and InstanceConfig could work fine as the convention IMO.

Fingers crossed I'm not contradicting something I said a long time ago 😉

@mtaufen
Contributor

mtaufen commented Jul 22, 2019

@rosti have you had a chance to update the KEP per the resolved conversation threads?

@rosti
Contributor Author

rosti commented Jul 22, 2019

@mtaufen I did, and I think I've covered everything (although I may have missed something).

@vllry
Contributor

vllry commented Jul 22, 2019

I'm not a fan of splitting up the config this much.

  • Mode-specific config is a great idea
  • Instance vs shared config is an okay idea
  • Platform-specific config doesn't seem valuable

I'll rattle my sabre more if this resonates with other reviewers, otherwise I'll LGTM and we'll see what user feedback from the beta is.

@justinsb
Member

FYI, I drafted a KEP to make more use of optional fields in componentconfig: #1173. I am not sure how that impacts the beta plans here.

@mtaufen
Contributor

mtaufen commented Jul 29, 2019

@vllry

Platform-specific config doesn't seem valuable

I can agree with this as long as each mode applies to at most one platform (or, I suppose, is so generic it will always work everywhere).

Also agree with your other two points.

@rosti
Contributor Author

rosti commented Jul 29, 2019

One possibility I can think of is to remove the platform and instance/shared splitting parts for now, leaving only the mode consolidation bit.
Then we can reassess after an alpha version with the mode consolidation changes is released.
@mtaufen @vllry @thockin WDYT?

@mtaufen
Contributor

mtaufen commented Jul 30, 2019

Since we have the most consensus on the modes refactor, let's prototype that in a new alpha version. Maybe update the KEP to lay out the steps:

  1. Modes refactor.
  2. Instance/shared refactor.
  3. Think about platform config if it turns out to be necessary after 1 and 2.

Then let's get this merged.
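
To make step 1 a bit more tangible, here is a purely illustrative guess at the shape a mode-scoped config could take - the `v1alpha2` version is a placeholder and the exact grouping is not the KEP's schema, though `masqueradeAll`, `syncPeriod` and `scheduler` do exist in today's v1alpha1 KubeProxyConfiguration:

```yaml
# Hypothetical sketch: each proxy mode's settings scoped to its own
# section, so only the section matching `mode` is consulted
apiVersion: kubeproxy.config.k8s.io/v1alpha2  # placeholder version
kind: KubeProxyConfiguration
mode: ipvs
iptables:            # ignored unless mode is iptables
  masqueradeAll: false
  syncPeriod: 30s
ipvs:                # active, since mode is ipvs
  scheduler: rr
  syncPeriod: 30s
```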

Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
@rosti
Contributor Author

rosti commented Aug 14, 2019

Updated by narrowing the proposal down to just mode consolidation for now. Hence, the platform split and the instance local/shared split are removed from the KEP.
Those may be reintroduced (along with some new proposals) after more thorough discussion in one of the next release cycles.
@mtaufen @vllry @thockin PTAL

@mtaufen
Contributor

mtaufen commented Sep 20, 2019

This lgtm as a good next step for kube-proxy.
/lgtm

@rosti would you be interested in writing a more general componentconfig KEP for instance config?

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 20, 2019
@mtaufen
Contributor

mtaufen commented Sep 20, 2019

Need someone from sig-network to approve. @thockin?

Member

@thockin thockin left a comment


I'll approve the KEP for progress sake, but I have not had time to think deeply about a more ideal schema.

/lgtm
/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mtaufen, rosti, thockin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 20, 2019
@k8s-ci-robot k8s-ci-robot merged commit d5f6bbe into kubernetes:master Sep 20, 2019
@k8s-ci-robot k8s-ci-robot added this to the v1.17 milestone Sep 20, 2019
@rosti
Contributor Author

rosti commented Sep 24, 2019

@mtaufen

@rosti would you be interested in writing a more general componentconfig KEP for instance config?

I am, but I am not sure if it would be in this cycle (as I've got quite a few things to do ATM).
