add pod log streaming to monitor for etcd so we see all intervals #28243
Conversation
Job Failure Risk Analysis for sha: f44ee81
Force-pushed from 4648b43 to f086693
c.watchers = map[podKey]*watcher{}
}

// Run starts the controller and blocks until stopCh is closed.
Nit: I realize this is probably boilerplate, but stopCh here refers to ctx.Done(), right? I thought it was finishedCleanup, but that's just the signal for a clean shutdown, I think.
// TODO set a timeout?
c.removeAllWatchers(context.TODO())
10 seconds? Should be enough for all the pods in the etcd namespace?
ctx, cancel := context.WithTimeout(ctx, time.Duration(time.Second*10))
defer cancel()
c.removeAllWatchers(ctx)
Although the cluster is going away so it may not matter if we don't finish cleanup.
var (
	// "raft.node: 38360899e3c7337e elected leader d8a2c1adbed17efe at term 6"
	electedLeaderRegex = regexp.MustCompile("elected leader (?P<CURR_LEADER>[a-z0-9.-]+) at term (?P<TERM>[0-9]+)")
)
TIL named capture groups.
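For reference, named capture groups let the matcher pull fields out by name via SubexpNames rather than by positional index. A small sketch using the same pattern and the sample log line from the diff; the parseElection helper is hypothetical, not part of the PR:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the diff: names the leader ID and term fields.
var electedLeaderRegex = regexp.MustCompile("elected leader (?P<CURR_LEADER>[a-z0-9.-]+) at term (?P<TERM>[0-9]+)")

// parseElection maps each named group to its matched text, or returns
// nil when the line is not a leader-election message.
func parseElection(line string) map[string]string {
	m := electedLeaderRegex.FindStringSubmatch(line)
	if m == nil {
		return nil
	}
	out := map[string]string{}
	for i, name := range electedLeaderRegex.SubexpNames() {
		if name != "" { // index 0 (whole match) has an empty name
			out[name] = m[i]
		}
	}
	return out
}

func main() {
	groups := parseElection("raft.node: 38360899e3c7337e elected leader d8a2c1adbed17efe at term 6")
	fmt.Println(groups["CURR_LEADER"], groups["TERM"]) // d8a2c1adbed17efe 6
}
```

Binding by name keeps the matching code readable if the pattern later gains or reorders groups.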
/lgtm
Still grasping the interval builder bits, but the log streaming and matching look good.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: deads2k, hasbro17. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Job Failure Risk Analysis for sha: f086693
Force-pushed from f086693 to da60f02
New changes are detected. LGTM label has been removed.
Simple rebase, reapplying lgtm.
/retest-required
@deads2k: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Job Failure Risk Analysis for sha: da60f02
ought to handle