Don't buffer doctor logger #19982
Conversation
- We don't need to buffer the logger with a thousand capacity. It's not a high-throughput logger, and the buffering also caused an issue whereby the logger couldn't keep up with repeated messages being sent (somehow they were lost in the queue?).
- Resolves go-gitea#19969
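For illustration only, here is a minimal sketch of the kind of change being described; the names (Event, newBufferedQueue, newUnbufferedQueue) are hypothetical and not taken from the Gitea codebase:

package main

// Event is a stand-in for a log event; this is not the actual type
// used by Gitea's logger.
type Event struct {
	msg string
}

// Before: a large buffer lets up to 1000 events sit in the queue,
// where an early exit or a missed flush can lose them.
func newBufferedQueue() chan *Event {
	return make(chan *Event, 1000)
}

// After: an unbuffered queue hands each event directly to the consumer,
// so a send only completes once the consuming goroutine receives it.
func newUnbufferedQueue() chan *Event {
	return make(chan *Event)
}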
If I understand correctly, this fix can significantly reduce the possibility of log loss.
My thought is: as long as the code works like this:
- put the log in a channel
- fetch the log from the channel
- output the log
then, if there is no real Flush, there is still a (small) chance that the exit occurs before the output, which will lose the last log.
It's fine to have this fix in 1.17.
~~If my understanding is correct, it's better to add a TODO or FIXME in the code, in case some future users get bitten by this problem.~~
Eh, I haven't been able to find any lost logs anymore. Unless someone is still able to reproduce it, even with low probability, I'm fine with that; otherwise the bug is fixed as far as I'm concerned.
Because Gitea's logger system works like this (which is not ideal): a log event is put into a channel and a separate goroutine fetches and outputs it. In a minimal example of that pattern you may not see the "test" output, but if you add a short sleep before exiting you do (see the sketch below). So I think there might still be a low-probability chance that a log is left in the channel when exiting; not that serious. In most cases it works fine.
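A self-contained sketch of the pattern described above; this is illustrative code written under my own assumptions, not the example from the original comment and not Gitea's actual logger:

package main

import (
	"fmt"
	"time"
)

func main() {
	queue := make(chan string, 1000)

	// Fetch logs from the channel and output them in a separate goroutine.
	go func() {
		for msg := range queue {
			fmt.Println(msg)
		}
	}()

	// Put the log in the channel.
	queue <- "test"

	// If main returns right here, the program may exit before the goroutine
	// prints, so "test" can be lost. Adding a short sleep (or a real Flush
	// that drains the queue) makes the output reliably appear:
	time.Sleep(10 * time.Millisecond)
}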
Yeah, but it seems like early exiting the program isn't the issue in this case; it's rather that the close channel can take precedence over the queue channel (you end up with non-clean code IMO, as within the MultiChannelledLog's Start code). That is because buffered channels don't take precedence in a select. A debugging diff that shows it (a standalone sketch of the select behaviour follows the diff):
diff --git a/modules/log/event.go b/modules/log/event.go
index f66ecd179..41b80c99c 100644
--- a/modules/log/event.go
+++ b/modules/log/event.go
@@ -328,6 +328,7 @@ func (m *MultiChannelledLog) Start() {
m.level = INFO
}
case event, ok := <-m.queue:
+ fmt.Printf("RECEIVING: %t, %#v\n", ok, event)
if !ok {
m.closeLoggers()
return
@@ -351,6 +352,14 @@ func (m *MultiChannelledLog) Start() {
}
m.rwmutex.RUnlock()
case <-m.close:
+ fmt.Println("CLOSING")
+ select {
+ case event, ok := <-m.queue:
+ fmt.Printf("RECEIVED AFTER CLOSING: %t, %#v\n", ok, event)
+ case <-time.After(100 * time.Millisecond):
+ fmt.Println("No item in queue after closing.")
+ }
+
m.closeLoggers()
return
}
@@ -359,10 +368,12 @@ func (m *MultiChannelledLog) Start() {
// LogEvent logs an event to this MultiChannelledLog
func (m *MultiChannelledLog) LogEvent(event *Event) error {
+ fmt.Printf("SENDING: %#v\n", event)
select {
case m.queue <- event:
return nil
case <-time.After(100 * time.Millisecond):
+ fmt.Println("TIMING ERROR")
// We're blocked!
return ErrTimeout{
Name: m.name,
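To see why the close case can win even though the queue still holds items, here is a standalone sketch in plain Go (nothing from the Gitea codebase): when several cases of a select are ready at the same time, Go chooses one pseudo-randomly, so a buffered channel with pending items gets no priority over an already-closed close channel.

package main

import "fmt"

func main() {
	queue := make(chan int, 10)
	closeCh := make(chan struct{})

	queue <- 1     // one event is still buffered
	close(closeCh) // the close channel is now always ready

	closedWins := 0
	for i := 0; i < 1000; i++ {
		select {
		case v := <-queue:
			queue <- v // put it back so both cases stay ready
		case <-closeCh:
			closedWins++
		}
	}

	// Roughly half of the iterations take the close case even though the
	// queue never became empty.
	fmt.Println("close case chosen:", closedWins, "out of 1000")
}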
Yup. Not serious, just my thoughts. 😊 And you are correct, a 0-length channel doesn't have such a problem. 👍
It'd probably be better to add code to empty the event stream into close than to get rid of the buffer, really. I knew this was a potential problem with the closing code, but I thought it would be a rare enough occurrence that it should not happen or actually matter. Not having a buffer means there is the potential for the logger to block on logging, and I think that is more likely to cause problems than the one you're trying to fix.
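As a sketch of that blocking concern (hypothetical names, not Gitea's actual API): with an unbuffered queue a send blocks until the consumer goroutine is ready to receive, so a slow writer stalls the caller, and a timeout pattern like the one in the diff above then turns the stall into an error rather than a silent wait.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errTimeout = errors.New("logger queue is blocked")

// logEvent mimics a timed send: it gives up if the queue does not
// accept the message within 100ms.
func logEvent(queue chan<- string, msg string) error {
	select {
	case queue <- msg:
		return nil
	case <-time.After(100 * time.Millisecond):
		return errTimeout
	}
}

func main() {
	queue := make(chan string) // unbuffered

	go func() {
		for msg := range queue {
			time.Sleep(200 * time.Millisecond) // simulate a slow writer
			fmt.Println(msg)
		}
	}()

	fmt.Println(logEvent(queue, "first"))  // usually <nil>: the consumer is waiting
	fmt.Println(logEvent(queue, "second")) // usually a timeout: the consumer is busy
}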
Agree with "empty the eventstream into close". P.S.: I think this PR has done what it should for 1.17. Maybe the logger system could be refactored in the next milestone.
OK, I think this diff:
diff --git a/modules/log/event.go b/modules/log/event.go
index f66ecd179..40beefedb 100644
--- a/modules/log/event.go
+++ b/modules/log/event.go
@@ -345,18 +345,41 @@ func (m *MultiChannelledLog) Start() {
m.closeLoggers()
return
}
+ m.emptyQueue()
m.rwmutex.RLock()
for _, logger := range m.loggers {
logger.Flush()
}
m.rwmutex.RUnlock()
case <-m.close:
+ m.emptyQueue()
m.closeLoggers()
return
}
}
}
+func (m *MultiChannelledLog) emptyQueue() bool {
+ for {
+ select {
+ case event, ok := <-m.queue:
+ if !ok {
+ return false
+ }
+ m.rwmutex.RLock()
+ for _, logger := range m.loggers {
+ err := logger.LogEvent(event)
+ if err != nil {
+ fmt.Println(err)
+ }
+ }
+ m.rwmutex.RUnlock()
+ default:
+ return true
+ }
+ }
+}
+
// LogEvent logs an event to this MultiChannelledLog
func (m *MultiChannelledLog) LogEvent(event *Event) error {
select {
will do the correct thing here.
It is possible for log events to remain in the buffer of the MultiChannelledLog and thus not be logged despite close or flush. This PR simply adds a function to empty the queue before closing or flushing. (Except when the logger is paused.) Reference go-gitea#19982 Signed-off-by: Andrew Thornton <art27@cantab.net>
* Empty log queue on flush and close It is possible for log events to remain in the buffer of the MultiChannelledLog and thus not be logged despite close or flush. This PR simply adds a function to empty the queue before closing or flushing. (Except when the logger is paused.) Reference #19982 Signed-off-by: Andrew Thornton <art27@cantab.net> * and do similar for ChannelledLog Signed-off-by: Andrew Thornton <art27@cantab.net> Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- We don't need to buffer the logger with a thousand capacity. It's not a high-throughput logger, and the buffering also caused an issue whereby the logger couldn't keep up with repeated messages being sent (somehow they were lost in the queue?). - Resolves go-gitea#19969 Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>