Define policy around klog.Warning usage in kubeadm #1913
/assign |
In klog, log levels exist and are applicable only to the "info" severity. Hence I presume, that you want to narrow down klog usage to the info severity and remove errors and warnings completely. Is that the case? I do agree, that we need a more clearly defined policy in the use of klog and printfs. We have to take into account, that kubeadm is used by automated tools and end users alike. Swinging into one direction is going to hamper one of the user groups. |
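For reference, a minimal sketch of this behaviour, assuming k8s.io/klog/v2 (the same applies to the original k8s.io/klog): only Info-severity messages can be gated behind a verbosity level, while Warning and Error always print to stderr regardless of -v.

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil) // registers -v, --logtostderr, etc.
	flag.Parse()
	defer klog.Flush()

	// Only printed when run with -v=2 or higher.
	klog.V(2).Info("leveled info message")

	// Always printed to stderr, no matter what -v is set to.
	klog.Warning("warning message")
	klog.Error("error message")
}
```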
klog.Error and klog.Warning are part of the klog logger and are used widely in k8s. if kubeadm decides to not use anything but klog.V(x).Info that is fine, and it has the freedom to do so. but my suggestion is to do that in one PR that sweeps them all. changed the title to reflect that we are having a discussion. also noting that users that are annoyed by klog output can always pipe stderr to /dev/null. but to expose the wider problem and to be completely fair, our mixture of stdout (printf) and stderr (klog) is messy.
|
@SataQiu |
Yes @neolit123, I'll have a try! |
Just wondering, would it make sense to use the cmd/kubeadm/app/util/output API to solve this? It would also help to unify output and implement structured output. |
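To illustrate the idea of a unified backend, here is a hypothetical sketch of a printer interface that user-facing kubeadm code could call instead of mixing fmt.Printf on stdout and klog on stderr. This is not the actual cmd/kubeadm/app/util/output API; the names and the chosen verbosity level are assumptions for illustration only.

```go
// Hypothetical sketch only; the real cmd/kubeadm/app/util/output package
// may look different. The point is that all output goes through one
// interface, which decides what is user-facing text and what is a
// leveled log message.
package output

import (
	"fmt"
	"io"

	"k8s.io/klog/v2"
)

// Printer is a hypothetical unified output interface.
type Printer interface {
	// Printf writes user-facing output (e.g. the join command).
	Printf(format string, args ...interface{})
	// Warningf reports a non-fatal diagnostic without polluting stdout.
	Warningf(format string, args ...interface{})
}

// textPrinter writes user output to a writer and routes diagnostics
// through klog at verbosity level 1.
type textPrinter struct {
	out io.Writer
}

// NewTextPrinter returns a Printer that writes user-facing text to out.
func NewTextPrinter(out io.Writer) Printer {
	return &textPrinter{out: out}
}

func (p *textPrinter) Printf(format string, args ...interface{}) {
	fmt.Fprintf(p.out, format, args...)
}

func (p *textPrinter) Warningf(format string, args ...interface{}) {
	klog.V(1).Infof("[warning] "+format, args...)
}
```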
@bart0sh i'm +1 to use any unified backend. |
i'm going to investigate:
|
Considering that this is a warning and not an error, it should show up in the log at the appropriate log level, not on stderr. |
Here's a quick fix for folks being thrown off by this behaviour in their automation scripts: redirect stderr to /dev/null (or elsewhere). For example, if you wanted the join command, you'd do this:
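For instance, assuming the join command is being printed with `kubeadm token create --print-join-command`, one way would be `kubeadm token create --print-join-command 2>/dev/null`: the join command itself stays on stdout, while klog's warnings on stderr are discarded.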
|
Try kubeadm reset, and then try again. The reset process does not reset or clean up iptables rules or IPVS tables. If your cluster was set up to utilize IPVS, run ipvsadm --clear (or similar). |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
In the kubeadm output there are some log entries that should be fixed:
Logs should be linked to a log level or converted into fmt.Printf, similar to other outputs (see the sketch below).
/cc @neolit123 @rosti
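A minimal sketch of the two conversions proposed here, using a hypothetical call site; the package name, message text, and chosen verbosity level are assumptions, not actual kubeadm code.

```go
package preflight

import (
	"fmt"

	"k8s.io/klog/v2"
)

// reportRuntime is a hypothetical call site illustrating the proposal.
func reportRuntime(runtime string) {
	// Before: a bare warning that always lands on stderr, regardless of -v.
	klog.Warningf("detected %q, assuming default settings", runtime)

	// Option 1: link the message to a log level so it only shows up
	// when the user asks for more verbose output.
	klog.V(1).Infof("[preflight] detected %q, assuming default settings", runtime)

	// Option 2: convert it into fmt.Printf, consistent with kubeadm's
	// other user-facing output on stdout.
	fmt.Printf("[preflight] detected %q, assuming default settings\n", runtime)
}
```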