
Define policy around klog.Warning usage in kubeadm #1913

Open
fabriziopandini opened this issue Nov 13, 2019 · 22 comments
Labels: area/UX, kind/bug, lifecycle/frozen, priority/important-longterm

@fabriziopandini
Member

In the kubeadm output there are some log entries that should be fixed:

[init] Using Kubernetes version: v1.16.2
W1113 10:20:56.260581     589 validation.go:28] Cannot validate kubelet config - no validator is available
W1113 10:20:56.260638     589 validation.go:28] Cannot validate kube-proxy config - no validator is available
[preflight] Running pre-flight checks

[control-plane] Creating static Pod manifest for "kube-apiserver"
W1113 10:29:17.627822    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1113 10:29:17.633914    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1113 10:29:17.635821    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

W1113 10:28:15.286513    1065 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks

These logs should either be tied to a log level or converted to fmt.Printf, similar to the other outputs.
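
For illustration only, a minimal sketch of the two options (not the actual kubeadm code; the message text is copied from the output above):

    package main

    import (
        "flag"
        "fmt"

        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil)
        flag.Set("v", "1") // enable verbosity level 1
        flag.Parse()

        // option 1: keep the message in klog, but gate it behind a verbosity level
        // so it only shows up with --v=1 or higher
        klog.V(1).Infof("the default kube-apiserver authorization-mode is %q; using %q", "Node,RBAC", "Node,RBAC")

        // option 2: print it like the other "[control-plane] ..." user-facing lines
        fmt.Printf("[control-plane] using authorization-mode %q\n", "Node,RBAC")

        klog.Flush()
    }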

/cc @neolit123 @rosti

@fabriziopandini fabriziopandini added kind/bug Categorizes issue or PR as related to a bug. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. area/UX labels Nov 13, 2019
@fabriziopandini fabriziopandini added this to the v1.17 milestone Nov 13, 2019
@SataQiu
Member

SataQiu commented Nov 14, 2019

/assign

@rosti

rosti commented Nov 14, 2019

These logs should either be tied to a log level or converted to fmt.Printf, similar to the other outputs.

In klog, log levels exist and are applicable only to the "info" severity. Hence I presume that you want to narrow down klog usage to the info severity and remove errors and warnings completely. Is that the case?
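
For context, a tiny sketch of that distinction in klog/v2 (illustrative only): the V() verbosity gate applies to Info, while Warning and Error always print.

    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil)
        flag.Set("v", "0") // default verbosity
        flag.Parse()

        klog.V(2).Info("hidden unless run with --v=2 or higher") // gated by verbosity
        klog.Warning("printed regardless of --v")                // warnings have no V() gate
        klog.Error("printed regardless of --v")                  // neither do errors
        klog.Flush()
    }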

I do agree that we need a more clearly defined policy on the use of klog and printfs. We have to take into account that kubeadm is used by automated tools and end users alike. Swinging too far in one direction is going to hamper one of those user groups.

@neolit123 neolit123 changed the title Fix log entries in the kubeadm output Define policy around klog.Warning usage in kubeadm Nov 14, 2019
@neolit123 neolit123 added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. labels Nov 14, 2019
@neolit123
Member

neolit123 commented Nov 14, 2019

klog.Error and klog.Warning are part of the klog logger and are used widely in k8s.

if kubeadm decides to not use anything but klog.V(x).Info, that is fine, and it has the freedom to do so. but my suggestion is to do that in one PR that sweeps them all.

changed the title to reflect that we are having a discussion.

also noting that users who are annoyed by klog output can always redirect stderr to /dev/null.

but to expose the wider problem and to be completely fair, our mixture of stdout (printf) and stderr (klog) is messy.

  • ideally kubeadm should stop mixing printf and klog.
  • all output should be printed using the same logger
  • the logger backend should be abstracted and klog should not be imported per file (see the sketch after this list).
  • all output should go to the same stream.
  • klog should start supporting omitting the "line info" W1113 10:29:17.635821 1065 manifests.go:214] (forgot what this is called in the klog source)
  • we can disable the "line info" by default and have a flag to enable it
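
To make the abstraction bullet concrete, a rough sketch of what an abstracted backend could look like (the Logger interface and both backends are hypothetical, not an existing kubeadm package):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/klog/v2"
    )

    // Logger is a hypothetical abstraction so individual kubeadm files
    // would not need to import klog directly.
    type Logger interface {
        Infof(format string, args ...interface{})
        Warningf(format string, args ...interface{})
    }

    // klogBackend routes everything through klog (stderr by default).
    type klogBackend struct{}

    func (klogBackend) Infof(format string, args ...interface{})    { klog.V(1).Infof(format, args...) }
    func (klogBackend) Warningf(format string, args ...interface{}) { klog.Warningf(format, args...) }

    // printfBackend routes everything to a single stream (stdout here).
    type printfBackend struct{}

    func (printfBackend) Infof(format string, args ...interface{}) {
        fmt.Fprintf(os.Stdout, format+"\n", args...)
    }
    func (printfBackend) Warningf(format string, args ...interface{}) {
        fmt.Fprintf(os.Stdout, "WARNING: "+format+"\n", args...)
    }

    func main() {
        var log Logger = printfBackend{} // swapping backends (e.g. to klogBackend{}) is a one-line change
        log.Infof("[control-plane] Creating static Pod manifest for %q", "kube-apiserver")
        log.Warningf("the default kube-apiserver authorization-mode is %q", "Node,RBAC")
    }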

@neolit123
Member

@SataQiu
looks like you sent kubernetes/kubernetes#85382
but we haven't decided how to proceed yet. :)

@SataQiu
Member

SataQiu commented Nov 17, 2019

Yes @neolit123, just giving it a try!

@bart0sh

bart0sh commented Nov 18, 2019

Just wondering, would it make sense to use the cmd/kubeadm/app/util/output API to solve this? It would also help to unify output and implement structured output.
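
To illustrate the structured-output idea only (the type and print helper below are hypothetical; the real cmd/kubeadm/app/util/output package has its own interfaces):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // joinCommand is a made-up payload used only for this illustration.
    type joinCommand struct {
        Token      string `json:"token"`
        CACertHash string `json:"caCertHash"`
    }

    // printObj writes the same data either as human-readable text or as JSON,
    // so scripts and humans can consume the same command output.
    func printObj(obj joinCommand, format string) {
        switch format {
        case "json":
            _ = json.NewEncoder(os.Stdout).Encode(obj)
        default:
            fmt.Printf("kubeadm join --token %s --discovery-token-ca-cert-hash %s\n", obj.Token, obj.CACertHash)
        }
    }

    func main() {
        cmd := joinCommand{Token: "abcdef.0123456789abcdef", CACertHash: "sha256:<hash>"}
        printObj(cmd, "text")
        printObj(cmd, "json")
    }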

@neolit123
Member

@bart0sh i'm +1 to use any unified backend.
but there are some decisions to make regarding stdout vs stderr and whether we want to continue using klog.

@neolit123
Member

i'm going to investigate:

klog should start supporting omitting the "line info" W1113 10:29:17.635821 1065 manifests.go:214]
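
Note: current klog/v2 exposes a skip_headers flag that drops exactly that prefix; a minimal sketch, assuming a klog version that has the flag (the klog version vendored at the time may not):

    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        fs := flag.NewFlagSet("klog", flag.ExitOnError)
        klog.InitFlags(fs)
        // drop the `W1113 10:29:17.635821 1065 manifests.go:214]` style prefix
        // (requires a klog version that registers the skip_headers flag)
        _ = fs.Set("skip_headers", "true")

        klog.Warningf("the default kube-apiserver authorization-mode is %q; using %q", "Node,RBAC", "Node,RBAC")
        klog.Flush()
    }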

@neolit123 neolit123 modified the milestones: v1.17, v1.18 Nov 19, 2019
@ejmarten

Considering that this is a warning and not an error, it should show up in a log at the appropriate log level, not on stderr.

@polarapfel

Here's a quick fix for folks being thrown off by this behaviour in their automation scripts: redirect stderr to /dev/null (or elsewhere).

For example, if you wanted the join command, you'd do this:

kubeadm token create --print-join-command 2>/dev/null

@GissellaSantacruz

Try kubeadm reset, and then try again.
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

@neolit123 neolit123 modified the milestones: v1.18, v1.19 Mar 8, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 6, 2020
@neolit123 neolit123 modified the milestones: v1.20, v1.21 Dec 2, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 2, 2021
@neolit123
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 2, 2021
@neolit123 neolit123 modified the milestones: v1.21, v1.22 Mar 9, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 7, 2021
@neolit123 neolit123 modified the milestones: v1.22, v1.23 Jul 5, 2021
@neolit123
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2021
@neolit123
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2021
@neolit123 neolit123 modified the milestones: v1.23, v1.24 Nov 23, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2022
@neolit123 neolit123 added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 22, 2022
@neolit123 neolit123 modified the milestones: v1.24, v1.25 Mar 29, 2022
@neolit123 neolit123 modified the milestones: v1.25, Next May 11, 2022