Make kubelet base configuration configmap with version #60287
Conversation
Oh no, this is incorrect in some cases.
A couple of improvement suggestions. Thanks @xiangpengzhao for the PR!
@@ -55,9 +58,17 @@ func CreateBaseKubeletConfiguration(cfg *kubeadmapi.MasterConfiguration, client
	return err
}

var majorMinorVersion string
version := strings.Split(strings.TrimPrefix(cfg.KubernetesVersion, "v"), ".")
I'd use pkg/util/version.ParseSemantic here and then use fmt.Sprintf("%s.%s", v.Major(), v.Minor())
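To illustrate the reviewer's suggestion, here is a minimal, self-contained sketch. Since pkg/util/version is a Kubernetes-internal package, the parseMajorMinor helper below is a hypothetical stdlib stand-in for version.ParseSemantic plus the Major()/Minor() accessors:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor is a stdlib stand-in for pkg/util/version.ParseSemantic:
// it accepts "v1.9.3" or "1.9.3" and returns the major and minor components.
func parseMajorMinor(s string) (uint, uint, error) {
	parts := strings.Split(strings.TrimPrefix(s, "v"), ".")
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("could not parse %q as a semantic version", s)
	}
	major, err := strconv.ParseUint(parts[0], 10, 32)
	if err != nil {
		return 0, 0, err
	}
	minor, err := strconv.ParseUint(parts[1], 10, 32)
	if err != nil {
		return 0, 0, err
	}
	return uint(major), uint(minor), nil
}

func main() {
	major, minor, err := parseMajorMinor("v1.9.3")
	if err != nil {
		panic(err)
	}
	// Format the configmap suffix with fmt.Sprintf, as suggested.
	fmt.Println(fmt.Sprintf("%d.%d", major, minor)) // prints "1.9"
}
```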
if err != nil {
	majorMinorVersion = "1.9"
} else {
	majorMinorVersion = string(kubeletVersion.Major()) + "." + string(kubeletVersion.Minor())
nit: fmt.Sprintf("%s.%s", kubeletVersion.Major(), kubeletVersion.Minor())
var majorMinorVersion string
kubeletVersion, err := preflight.GetKubeletVersion(utilsexec.New())
if err != nil {
	majorMinorVersion = "1.9"
Just fall back on kubeadm's own version here instead, in a generic sense? You can use pkg/version for that.
if len(version) == 3 {
	majorMinorVersion = version[len(version)-1]
} else {
	majorMinorVersion = "1.9"
get this value dynamically?
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: timothysc, xiangpengzhao. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
(force-pushed from 99c9364 to 775db4c)
@@ -55,9 +56,25 @@ func CreateBaseKubeletConfiguration(cfg *kubeadmapi.MasterConfiguration, client
	return err
}

var configmapName string
k8sVer, k8sErr := version.ParseSemantic(cfg.KubernetesVersion)
The idea in this PR is:

For master and init:
If the kubernetes version is configured and can be parsed, use this version as the postfix of the configmap name, e.g. "-1.9"; else use the kubeadm version.

For node and join:
If the kubelet version of the node can be obtained, use this version when getting the configmap; else use the kubeadm version.
But this will cause issues in the cases below:

Suppose kubeadm version is 1.10.x, kubernetes version is 1.9.x, and kubelet version is 1.10.x:
- master parses the kubernetes version successfully and writes the configmap version as 1.9;
- node gets the kubelet version as 1.10 and tries to get the 1.10 configmap, then fails.

Suppose kubeadm version is 1.10.x, kubernetes version is 1.9.x, and kubelet version is 1.9.x:
- master fails to parse the kubernetes version but gets the kubeadm version successfully and writes the configmap version as 1.10;
- node gets the kubelet version as 1.9 and tries to get the 1.9 configmap, then fails.

Suppose kubeadm version is 1.10.x, kubernetes version is 1.10.x, and kubelet version is 1.9.x:
- master parses the kubernetes version or gets the kubeadm version successfully and writes the configmap version as 1.10;
- node gets the kubelet version as 1.9 and tries to get the 1.9 configmap, then fails.
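The mismatch in all three scenarios comes down to the master and the node choosing their version suffix independently, each with its own fallback. A minimal sketch of that selection logic (the helper name configMapSuffix and the "kubelet-base-config-" prefix are illustrative, not kubeadm's actual identifiers):

```go
package main

import "fmt"

// configMapSuffix models the selection logic described above: prefer the
// successfully parsed version, otherwise fall back to the kubeadm version.
// A parse failure is modeled as an empty string.
func configMapSuffix(parsed, kubeadmVersion string) string {
	if parsed != "" {
		return parsed
	}
	return kubeadmVersion
}

func main() {
	// Case 1 above: kubeadm 1.10, kubernetes 1.9, kubelet 1.10.
	// The master writes the suffix from the kubernetes version; the node
	// looks it up using the kubelet version - and they disagree.
	masterName := "kubelet-base-config-" + configMapSuffix("1.9", "1.10")
	nodeName := "kubelet-base-config-" + configMapSuffix("1.10", "1.10")
	fmt.Println(masterName, nodeName, masterName == nodeName)
	// prints "kubelet-base-config-1.9 kubelet-base-config-1.10 false"
}
```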
@luxas @timothysc any suggestions? cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/hold
I hope kubernetes version (master) at 1.9.x with a kubelet at 1.10.x is impossible in all deployment scenarios. Masters should always be upgraded before clients.
I don't see why the kubeadm version or the master version should ever be used to determine the config the kubelet gets. The kubelet version is what matters.
I hope kubernetes version (master) at 1.9.x with a kubelet at 1.10.x is impossible in all deployment scenarios. Masters should always be upgraded before clients.
agreed.
I don't see why the kubeadm version or the master version should ever be used to determine the config the kubelet gets. The kubelet version is what matters.

The (base) kubelet config is embedded in the MasterConfiguration of kubeadm, so I think the kubeadm version matters in this case. The kubelet version is actually what matters though.
Are we just talking about the kubelet on the master here? I thought we were talking about all kubelets (masters and nodes).
No, we are talking about all kubelets as you thought :)
Now I'm thinking about whether we need to enable the DKC feature in kubeadm. If so, what benefits does this bring us? Currently what kubeadm does wrt DKC is writing/uploading the base kubelet configuration and making the nodes consume it. But since --init-config-dir has been removed and --config is supported, it seems like the base kubelet configuration is no longer needed. Maybe we want to remove the DKC support in kubeadm? @mtaufen @luxas WDYT?
Shouldn't kubeadm install the config file for --config (which is the base kubelet configuration)?
It should. (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#kubelet-drop-in)
But I'm not sure:
- if kubeadm still should configure anything for the kubelet in MasterConfiguration.KubeletConfiguration and create the base ConfigMap;
- if kubeadm should create a base ConfigMap via the config file for --config.

I think the kubelet itself will create the ConfigMap from its --config file, so kubeadm doesn't need to do anything for DKC. (I don't follow the very details of the latest DKC/--config mechanism. Feel free to correct me if I'm wrong.)
The Kubelet does not create ConfigMaps nor does it configure Node.Spec.ConfigSource (the latter is blocked by the noderestriction admission controller - it's effectively a privilege escalation).
It may be better for Kubeadm to focus on --config and hold off trying to manage DKC for now - at least until DKC is beta. There is coordination involved with DKC that may be more appropriate to an always-on controller - like what is being designed in the Machines API.
It may be better for Kubeadm to focus on --config

@mtaufen do you mean migrating /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to use --config, or anything else?

and hold off trying to manage DKC for now - at least until DKC is beta.

agreed.
@luxas inline comments addressed. However, there are some cases that should be addressed: #60287 (comment)
(force-pushed from 775db4c to e07fd5b)
@xiangpengzhao As each kubelet may have its own customized config, shall we consider preparing a kubelet configmap for each node instead of a global kubelet config?
@aoxn all of the kubelets in a kubeadm cluster have the same config so far, but it sounds reasonable to customize kubelet config after installing the cluster.
Will this feature be proposed ASAP? @xiangpengzhao
I'd like to hear @mtaufen's take. What kubeadm does is provide the initial base config for the kubelet. You can still customize your kubelet config with the DKC feature of the kubelet, I think.
If kubeadm is installing a new kubelet binary, and needs to tweak the base config in unison for some reason, it should also replace the base config at the same time as the binary. Part of the kubeletconfig API contract is that binary upgrades are backwards-compatible with your old config settings. Operationally, there are obviously cases where you need to version-pin dynamic config (bugs), but this functionality should probably be delegated to a higher-level controller. If kubeadm wants to manage these dynamic assignments, then ok. Make sure you can handle setting the correct config on new nodes that appear in autoscaling scenarios.
@luxas @timothysc do we want to migrate the kubelet deb package to use --config?
@xiangpengzhao - soon... @mtaufen - what's the state?
--config is beta in 1.10
On Fri, Apr 13, 2018, Timothy St. Clair wrote:
@xiangpengzhao - soon...
@mtaufen - what's the state?
@timothysc is it OK to migrate the deb while --config is still beta?
[MILESTONENOTIFIER] Milestone Removed From Pull Request @timothysc @xiangpengzhao @kubernetes/sig-cluster-lifecycle-misc Important: This pull request was missing the
@xiangpengzhao - we should incorporate it; no need to wait on GA because of backward-compat guarantees.
@xiangpengzhao: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @luxas
I'm implementing a somewhat similar but different approach to fix this in #63887. Thanks for the initial work here to help us better understand what to do :)!
Thanks @luxas!
What this PR does / why we need it:
kubernetes/kubeadm#571 (comment)
UPDATE:
This PR can't fix the comment above. To fix it, one possible approach is as suggested in the same comment: maybe we can just document to use kubeadm phase kubelet config dynamic enable-for-node and make an alias for it as well. The alias has been added in #57224.

This PR only makes the name of the configmap version-sensitive instead of hard-coding the name with "1.9".
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
ref kubernetes/kubeadm#571 (comment)
Special notes for your reviewer:
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/cc @luxas
Release note: