
Async logging #988

Closed
kishaningithub opened this issue Aug 9, 2021 · 8 comments

Comments

@kishaningithub

Does zap support async logging?

@kishaningithub
Author

My use case is the one mentioned here

sirupsen/logrus#725 (comment)

@abhinav
Collaborator

abhinav commented Aug 9, 2021

Hello! Yes, depending on your needs, you should be able to use the recently added zapcore.BufferedWriteSyncer to write your logs to disk asynchronously.
The BufferedWriteSyncer will buffer output in memory, and flush it to disk at some interval, or if the buffer grows to its maximum size.

Hope this helps!

Closing since the question was answered.

@abhinav abhinav closed this as completed Aug 9, 2021
@kishaningithub
Author

This does not seem to be fully async: as shown above, BufferedWriteSyncer uses mutex locks internally.

Are there any other alternatives?

@kishaningithub
Author

Why not use channels with a large/configurable size? That way, the calling goroutine only writes to the channel, and a separate goroutine reads the data and sends it to the underlying writer, which can be a BufferedWriteSyncer for fine-grained control over flushing.

@abhinav
Collaborator

abhinav commented Aug 11, 2021

The mutex inside BufferedWriteSyncer does not change whether logging happens asynchronously here. Let me clarify my understanding of the problem, and why I think BufferedWriteSyncer solves it.

The question is whether there is a way to log with Zap in such a way that disk IO, i.e. writing the logs to a file, does not happen on the request path. We want it so that the following statement does not perform any disk IO.

logger.Info("foo")

Instead, we want the statement to record the log message in-memory, and then we want a separate goroutine to write to disk out of band. This will have the effect of not blocking the main goroutine on IO.

And that's exactly what BufferedWriteSyncer will do: it will write to an internal buffer, flushing to the file as needed in a separate goroutine. I'll grant that this isn't 100% asynchronous: if logs are written faster than the flush can keep up with, e.g. during a sudden spike, then one of the log calls will block until the BufferedWriteSyncer flushes.

@springionic

My ideal asynchronous logging is a goroutine that continuously processes log entries from a channel and blocks when the channel is empty. Naturally, the channel would have a configurable capacity (in entries or in bytes).

With this design there is still blocking when a large burst of logs is written, but when nothing is being logged the goroutine simply sleeps on the channel.

My main motivation is that I want to write logs to the console and to a file at the same time. BufferedWriteSyncer is great for writing logs to a file periodically, but setting an arbitrarily small FlushInterval just so the console output appears almost immediately feels like a poor fit for BufferedWriteSyncer.

Looking forward to your reply, thank you!
@abhinav cc @kishaningithub

Here is an example of what I want to achieve: a relatively simple buffered-channel implementation for Go 1.18.

import (
	"context"
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"gopkg.in/natefinch/lumberjack.v2"
)

var (
	sugarLogger   *zap.SugaredLogger
	loggerChannel = make(chan *logItem, 5000)
)


func InitLogger(opt Options) error {
	encoderConfig := zap.NewProductionEncoderConfig()
	encoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
	encoderConfig.EncodeLevel = zapcore.CapitalLevelEncoder
	encoder := zapcore.NewJSONEncoder(encoderConfig)

	logWriter := zapcore.AddSync(&lumberjack.Logger{
		Filename:   opt.LogFileName,
		MaxSize:    opt.MaxSize,
		MaxAge:     opt.MaxAge,
		MaxBackups: opt.MaxBackups,
		LocalTime:  false,
		Compress:   false,
	})
	// string to level
	var l zapcore.Level
	if err := l.Set(opt.Level); err != nil {
		return err
	}
	// write to both the console and the file
	core := zapcore.NewCore(encoder, zapcore.NewMultiWriteSyncer(zapcore.AddSync(os.Stdout), logWriter), zap.NewAtomicLevelAt(l))
	logger := zap.New(core, zap.AddCaller(), zap.AddCallerSkip(1), zap.AddStacktrace(zap.ErrorLevel))
	sugarLogger = logger.Sugar()

	// Drain the channel in a single background goroutine; a plain
	// range loop blocks while the channel is empty and exits cleanly
	// if the channel is ever closed.
	go func() {
		for item := range loggerChannel {
			logByLevel(item.level, item.template, item.args)
		}
	}()

	return nil
}

type logItem struct {
	args     []any
	template string
	level    zapcore.Level
	ctx      context.Context
}


func Info(template string, args ...any) {
	loggerChannel <- &logItem{level: zapcore.InfoLevel, template: template, args: args}
}


func Error(template string, args ...any) {
	loggerChannel <- &logItem{level: zapcore.ErrorLevel, template: template, args: args}
}


func logByLevel(level zapcore.Level, template string, args []any) {
	switch level {
	case zapcore.DebugLevel:
		sugarLogger.Debugf(template, args...)
	case zapcore.InfoLevel:
		sugarLogger.Infof(template, args...)
	case zapcore.WarnLevel:
		sugarLogger.Warnf(template, args...)
	case zapcore.ErrorLevel:
		sugarLogger.Errorf(template, args...)
	case zapcore.DPanicLevel:
		sugarLogger.DPanicf(template, args...)
	case zapcore.PanicLevel:
		sugarLogger.Panicf(template, args...)
	case zapcore.FatalLevel:
		sugarLogger.Fatalf(template, args...)
	}
}

@kishaningithub
Author

Adding to the above, here is another example of the same idea, with even better composability, from the log15 library:

https://github.com/inconshreveable/log15/blob/8562bdadbbac83bb35eb0c37023a12c5ed8793f7/handler.go#L255

@GuiGav

GuiGav commented Jul 28, 2022

Hi,
I have the same need because I'm posting my logs via HTTP, using a custom sink.
I definitely need async processing as well, but in my case it is needed at the sink level, so that a slow HTTP call does not block logging to stderr. This way I can:

  • log everything to stderr
  • log what I can to HTTP; if the buffer is full, I flush it, lose those logs, and send an error that zap will log

Here's the sink code:


import (
	"fmt"
	"net/url"

	"github.com/go-resty/resty/v2"
	"go.uber.org/zap"
)

type httpSink struct {
	logChan     chan []byte
	errChan     chan error
	restyClient *resty.Client
	host        *url.URL
}

func newHTTPSink(url *url.URL) (zap.Sink, error) {
	sink := httpSink{
		logChan:     make(chan []byte, LogBufferCapacity),
		errChan:     make(chan error, LogBufferCapacity),
		restyClient: resty.New(),
		host:        url,
	}
	go sink.emit()
	return sink, nil
}

func (h httpSink) Write(p []byte) (int, error) {
	h.logChan <- p
	select {
	// If there is an error in the error channel, from previous logs, we return it now
	case err := <-h.errChan:
		return len(p), err
	default:
		return len(p), nil
	}
}

func (h httpSink) Sync() error {
	return nil
}

func (h httpSink) Close() error {
	close(h.errChan)
	close(h.logChan)
	return nil
}

func (h httpSink) emit() {
	for {
		if len(h.logChan) == LogBufferCapacity {
			// if log buffer is full, we empty it, these logs are lost
			for i := 0; i < LogBufferCapacity; i++ {
				<-h.logChan
				select {
				case <-h.errChan:
				default:
				}
			}
			h.errChan <- ErrFullLogBuffer
			continue
		}

		body := <-h.logChan
		resp, err := h.restyClient.NewRequest().SetBody(body).Post(h.host.String())
		if err != nil {
			h.errChan <- err
			continue // resp is nil when err != nil; don't dereference it
		}
		if resp.IsError() {
			h.errChan <- fmt.Errorf("couldn't send log to http sink (status:%d, body:%s)", resp.StatusCode(), string(resp.Body()))
		}
	}
}
}
