
Capturing logs with log_capture in behave #398

Closed
R2D34 opened this issue Feb 15, 2021 · 16 comments
Labels
question Further information is requested

Comments

R2D34 commented Feb 15, 2021

Problem

I run my tests with behave (https://behave.readthedocs.io/en/stable/api.html?highlight=log_capture#logging-capture), and after switching to Loguru my tests fail.

I have read Issue 59 (regarding basically the same problem, but with pytest), and I understand that Loguru doesn't use the standard-library logging module, which behave's log capture relies on, and that the way to solve the problem is to propagate Loguru's records to the logging module.

However, I don't understand how to replicate the fix.

I tried using the fixture in an analogous way but it didn't seem to work:

Example of what I tried

environment.py

from api.features.fixtures import log_capture_fix
from behave import use_fixture


def before_all(context):
    use_fixture(log_capture_fix, context)
    context.config.setup_logging()

fixtures.py

import logging
from behave import fixture
from loguru import logger


@fixture
def log_capture_fix(context):
    class PropagateHandler(logging.Handler):
        def emit(self, record):
            logging.getLogger(record.name).handle(record)

    handler_id = logger.add(PropagateHandler(), format="{message} {extra}")
    yield context.log_capture
    logger.remove(handler_id)

Could you help me configure this properly, or point me in the right direction?

Delgan (Owner) commented Feb 16, 2021

Hi @Tsubanee.

I'm not familiar with behave. Can you please share an example file with the failing tests? I'll gladly try to help you.

R2D34 (Author) commented Feb 16, 2021

Behave allows you to write Gherkin scenarios (human-readable language) which are connected to Python functions that are executed as tests.
Basically, the failing tests look like this:

from behave.log_capture import LoggingCapture
...
@then("The Hub callback is called")
def step_impl(context):
    reg_pattern = "Super Mighty Log Event!!"
    # This returns True if reg_pattern is found in the captured logs
    log_mess = LoggingCapture.find_event(context.log_capture, reg_pattern)
    assert log_mess == True

When I run behave with standard logging, no logs appear in the terminal (everything is captured by behave); they are displayed only when a test fails.
With Loguru, all logs are printed to the terminal even when running behave.

Delgan (Owner) commented Feb 17, 2021

Thanks for the example!

Could that be due to the default stderr handler? Maybe you should call logger.remove() right before logger.add() to make sure all existing handlers are removed.

R2D34 (Author) commented Feb 20, 2021

Sorry for the late answer, Delgan.

I believe I am doing that already :(

Delgan (Owner) commented Feb 21, 2021

Sorry but I'm unable to reproduce your problem. 😬

For example, I created a function using logger.info() and called it inside step_impl(), and the tests pass correctly.

Maybe you need to specify level=0 while adding the PropagateHandler()?

R2D34 (Author) commented Feb 21, 2021

Hmmm, curious.

But have you added any "monkey patching" to behave's log capturer, or did it work out of the box for you?

Delgan (Owner) commented Feb 21, 2021

I'm using the snippets you provided. Here are the files:

# features/environment.py
from features.fixtures import log_capture_fix
from behave import use_fixture

def before_all(context):
    use_fixture(log_capture_fix, context)
    context.config.setup_logging()
# features/fixtures.py
import logging
from behave import fixture
from loguru import logger


@fixture
def log_capture_fix(context):
    class PropagateHandler(logging.Handler):
        def emit(self, record):
            logging.getLogger(record.name).handle(record)

    logger.remove()
    handler_id = logger.add(PropagateHandler(), format="{message} {extra}")
    yield context.log_capture
    logger.remove(handler_id)
# features/tutorial.feature
Feature: showing off behave

  Scenario: run a simple test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!
    Then The Hub callback is called
# features/steps/tutorial.py
from behave import *
from behave.log_capture import LoggingCapture


def foo():
    from loguru import logger
    logger.info("Super Mighty Log Event!!")


@given('we have behave installed')
def step_impl(context):
    pass

@when('we implement a test')
def step_impl(context):
    assert True is not False

@then('behave will test it for us!')
def step_impl(context):
    assert context.failed is False



@then("The Hub callback is called")
def step_impl(context):
    foo()
    reg_pattern = "Super Mighty Log Event!!"
    # This returns True if reg_pattern is found in the captured logs
    log_mess = LoggingCapture.find_event(context.log_capture, reg_pattern)
    assert log_mess == True

Result of behave command:

Feature: showing off behave # features/tutorial.feature:1

  Scenario: run a simple test        # features/tutorial.feature:3
    Given we have behave installed   # features/steps/tutorial.py:10 0.000s
    When we implement a test         # features/steps/tutorial.py:14 0.000s
    Then behave will test it for us! # features/steps/tutorial.py:18 0.000s
    Then The Hub callback is called  # features/steps/tutorial.py:24 0.000s

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
4 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.001s

R2D34 (Author) commented Feb 21, 2021

@then("The Hub callback is called")
def step_impl(context):
    foo()
    reg_pattern = "Super Mighty Log Event!!"
     # This returns true if reg_pattern is found in  captured logs
    log_mess = LoggingCapture.

---->

log_mess = LoggingCapture.

Is this an error from copying the code, or do you actually have it written like that? (I assume that's unlikely, since it should throw an error.)

Delgan (Owner) commented Feb 21, 2021

@Tsubanee Yep, sorry. Failed copy-paste. 🙂

R2D34 (Author) commented Feb 21, 2021

Tried again in case I missed something, but it still doesn't work.

Part of failing behave output:

    And Passes Health Check                                                     # api/features/steps/create_s.py:47 0.001s
      Traceback (most recent call last):
        File "/home/x/.virtualenvs/apiEndpoints/lib/python3.9/site-packages/behave/model.py", line 1329, in run
          match.run(runner.context)
        File "/home/x/.virtualenvs/apiEndpoints/lib/python3.9/site-packages/behave/matchers.py", line 98, in run
          self.func(context, *args, **kwargs)
        File "api/features/steps/create_s.py", line 54, in step_impl
          assert log_mess == True
      AssertionError
  <some ansible log output>
      Captured stdout:
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      False
      
      Captured logging:
      INFO:filelock:Lock 139938437875600 acquired on /tmp/.bearers.yaml.lock
      INFO:filelock:Lock 139938437875600 released on /tmp/.bearers.yaml.lock

Btw, this is precisely how I use the logger (we have a team-specific wrapper):

log.py

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Get corresponding Loguru level if it exists
        try:
            level = _logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message
        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        _logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage(), extra={"tracking_id": "#"}
        )


def init(to_console=True, to_gcp=False, to_file=""):
    global _logger, _to_gcp

    # fmt = "{time:YYYY-MM-DD HH:mm:ss} | {level} | {extra[tracking_id]} | {message}"
    fmt = "{time:YYYY-MM-DD HH:mm:ss} | {level} | # | {message}"
    lvl = "INFO"
    diag = False

    # if already initialised, return
    if _logger:
        return

    if to_gcp:
        _logger = setup_gcp_logging()
        _to_gcp = True
    else:
        import loguru
        from loguru import logger

        _logger = logger
        if to_console or to_file:
            _logger.remove()

        if to_console:
            _logger.add(sys.stderr, format=fmt, level=lvl, diagnose=diag, enqueue=True)
        if to_file:
            _logger.add(to_file, format=fmt, level=lvl, diagnose=diag, enqueue=True)

        # intercept all standard logging
        logging.basicConfig(handlers=[InterceptHandler()], level=0)

def info(*args, **kwargs):
    init()
    kwargs = add_tracking_id(kwargs)
    return _logger.info(*args, **kwargs)

api_utils.py

def log_api_state(api_call_id: str, message: str):
    log.init()
    log_message = f"[{api_call_id}] {message}"
    log.info(log_message)

Is it alright to use loguru.logger in the fixture if I have a separate variable _logger which is initialized to loguru.logger?

Maybe I am somehow misusing this, or missing something important? Let me know if you have any other ideas 🚀.

Delgan (Owner) commented Feb 21, 2021

I don't think using loguru.logger or _logger should change anything; they are actually the same object and share the same set of handlers. Maybe it's a problem with the order in which the fixture and the initialization are executed?

I don't see anything especially wrong in your snippets, but it's hard to tell without knowing your code. Do you think you could modify the tutorial.py file I provided so that it includes your configuration and reproduces the problem?

R2D34 (Author) commented Feb 21, 2021

Yep, I'll try doing that.

R2D34 (Author) commented Feb 21, 2021

I recreated the scenario from my app (calling the API, which logs), and then added my wrapper to the logging.
Apparently the wrapper (log.py) is the problem, because if I just call loguru.logger.info(message) then it works properly.

Anyway, here is the full test setup that leads to the failing test.

requirements.txt

behave==1.2.6
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
colorama==0.4.4
fastapi==0.63.0
h11==0.12.0
idna==2.10
loguru==0.5.3
parse==1.19.0
parse-type==0.5.2
pretty-errors==1.2.19
pydantic==1.7.3
requests==2.25.1
six==1.15.0
starlette==0.13.6
urllib3==1.26.3
uvicorn==0.13.4
#tutorial.py
# features/steps/tutorial.py

from behave import *
from behave.log_capture import LoggingCapture

from fastapi.testclient import TestClient
from funcs.main import app




@given("The caller is sync http client")
def step_impl(context):
    context.client = TestClient(app)




@when("We call the path")
def step_impl(context):
    context.name = "HULK"
    context.response = context.client.get(
        "/"+context.name
    )

@then("We make sure call worked")
def step_impl(context):
    assert context.response.status_code == 200

@then("We have Mighty Message in logs")
def step_impl(context):
    reg_pattern = f"Supper Mighty {context.name}"
    # This returns True if reg_pattern is found in the captured logs
    log_mess = LoggingCapture.find_event(context.log_capture, reg_pattern)
    assert log_mess == True
#environment.py
# features/environment.py
from fixtures import log_capture_fix
from behave import use_fixture


def before_all(context):
    use_fixture(log_capture_fix, context)
    context.config.setup_logging()
#features/tutorial.feature
# features/tutorial.feature
Feature: showing off behave

  Scenario: run a simple test
    Given The caller is sync http client
    When We call the path
    Then We make sure call worked
    Then We have Mighty Message in logs
#funcs/log.py
import sys
import datetime
import logging


_logger = None




class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Get corresponding Loguru level if it exists
        try:
            level = _logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message
        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        _logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage(), extra={"tracking_id": "#"}
        )


def init(to_console=True, to_gcp=False, to_file=""):
    global _logger, _to_gcp

    # fmt = "{time:YYYY-MM-DD HH:mm:ss} | {level} | {extra[tracking_id]} | {message}"
    fmt = "{time:YYYY-MM-DD HH:mm:ss} | {level} | # | {message}"
    lvl = "INFO"
    diag = False

    # if already initialised, return
    if _logger:
        return

    from loguru import logger

    _logger = logger
    if to_console or to_file:
        _logger.remove()

    if to_console:
        _logger.add(sys.stderr, format=fmt, level=lvl, diagnose=diag, enqueue=True)
    if to_file:
        _logger.add(to_file, format=fmt, level=lvl, diagnose=diag, enqueue=True)

    # intercept all standard logging
    logging.basicConfig(handlers=[InterceptHandler()], level=0)


def add_tracking_id(kwargs):
    if "tracking_id" not in kwargs:
        kwargs["tracking_id"] = "#"
    return kwargs


def info(*args, **kwargs):
    init()
    kwargs = add_tracking_id(kwargs)
    return _logger.info(*args, **kwargs)


def exception(*args, **kwargs):
    init()
    kwargs = add_tracking_id(kwargs)
    return _logger.exception(*args, **kwargs)


def error(*args, **kwargs):
    init()
    kwargs = add_tracking_id(kwargs)
    return _logger.error(*args, **kwargs)


def debug(*args, **kwargs):
    init()
    kwargs = add_tracking_id(kwargs)
    return _logger.debug(*args, **kwargs)


if __name__ == "__main__":
    init()
    exception("test")
    info("<-- this has tracking", tracking_id="123")
#funcs/main.py
from fastapi import FastAPI, Path
from funcs.util import log_api

app = FastAPI()


@app.get("/{message}")
async def groot(message = Path(...)):
    log_api(f"Supper Mighty {message}")
    return {"message": message}
#funcs/util.py
from funcs.log import init, info


def log_api(message):
    init()
    info(message)
#funcs/fixtures.py
# features/fixtures.py
import logging
from behave import fixture
from loguru import logger


@fixture
def log_capture_fix(context):
    class PropagateHandler(logging.Handler):
        def emit(self, record):
            logging.getLogger(record.name).handle(record)

    logger.remove()
    handler_id = logger.add(PropagateHandler(), format="{message} {extra}")
    yield context.log_capture
    logger.remove(handler_id)

Failed test output

Feature: showing off behave # features/tutorial.feature:2

  Scenario: run a simple test            # features/tutorial.feature:4
    Given The caller is sync http client # features/steps/tutorial.py:12 0.001s
    When We call the path                # features/steps/tutorial.py:19 0.041s
    Then We make sure call worked        # features/steps/tutorial.py:26 0.000s
    Then We have Mighty Message in logs  # features/steps/tutorial.py:30 0.001s
      Traceback (most recent call last):
        File "/home/alx/.virtualenvs/behave_loguru_test/lib/python3.9/site-packages/behave/model.py", line 1329, in run
          match.run(runner.context)
        File "/home/alx/.virtualenvs/behave_loguru_test/lib/python3.9/site-packages/behave/matchers.py", line 98, in run
          self.func(context, *args, **kwargs)
        File "features/steps/tutorial.py", line 35, in step_impl
          assert log_mess == True
      AssertionError
      
      Captured stderr:
      2021-02-21 15:55:08 | INFO | # | Supper Mighty HULK


CLEANUP-ERROR in cleanup_fixture: ValueError: There is no existing handler with id 1
Traceback (most recent call last):
  File "/home/x/.virtualenvs/behave_loguru_test/lib/python3.9/site-packages/behave/runner.py", line 222, in _do_cleanups
    cleanup_func()
  File "/home/x/.virtualenvs/behave_loguru_test/lib/python3.9/site-packages/behave/fixture.py", line 162, in cleanup_fixture
    next(func_it)           # CLEANUP-FIXTURE PART
  File "/home/x/Projects/projects/python/test_loguru_behave/funcs/fixtures.py", line 15, in log_capture_fix
    logger.remove(handler_id)
  File "/home/x/.virtualenvs/behave_loguru_test/lib/python3.9/site-packages/loguru/_logger.py", line 995, in remove
    raise ValueError("There is no existing handler with id %d" % handler_id) from None
ValueError: There is no existing handler with id 1

Failing scenarios:
  features/tutorial.feature:4  run a simple test

0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
3 steps passed, 1 failed, 0 skipped, 0 undefined
Took 0m0.043s

Delgan (Owner) commented Feb 21, 2021

Thanks for the reproducible example!

From what I understand, the PropagateHandler() is added first, in the fixture, before the tests execute. Then a test runs and your init() function is called. That function calls logger.remove(), and hence the PropagateHandler is, well, removed. That explains why the log messages are not captured by the LoggingCapture class.

Can you somehow refactor the code to initialize the logger before executing the tests?

R2D34 (Author) commented Feb 21, 2021

Yeah! That might well be it 🚀 In hindsight it seems kind of obvious 🤦. Anyway, thank you very much for helping out, Delgan. I will fix it tomorrow so things run in the correct order during tests, and I'll update here on how it went (though I wouldn't expect further problems now :D)

UPDATE: Yep. Running init() at the beginning of the fixture solved the problem. Once again, thank you very much Delgan :D

@Delgan Delgan added the question Further information is requested label Mar 16, 2021
Delgan (Owner) commented Mar 16, 2021

I'm closing this issue since it's resolved, then. 👍

@Delgan Delgan closed this as completed Mar 16, 2021