Fix stage caplog records not clear #10051
Conversation
I am a newcomer, please give your comments, thanks :)
Force-pushed from 17775a5 to b0b313f.
It seems like CI is blocked; I already force-pushed again but it didn't help. Could you help me check? @nicoddemus
src/_pytest/logging.py (Outdated)
```python
def clear(self) -> None:
    """Reset the list of log records and the captured log text."""
    self.handler.reset()
    self._item.stash[caplog_records_key][self.handler.when] = self.records
```
If done there, it's already empty. The fix is incorrect as far as I can tell; the correct fix location needs some extra details.
Which object is already empty? I'm trying to understand it, thanks for the help!
After handler.reset(), the identity of the handler.records object changes, so it loses its connection to item.stash[caplog_records_key][when], and item.stash[caplog_records_key][when] is not emptied. Maybe it would be better to use list.clear() when the caplog fixture clears the records?
```python
class LogCaptureHandler(logging_StreamHandler):
    def clear(self) -> None:
        self.records.clear()
        self.stream = StringIO()
```
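For illustration, here is a minimal standalone sketch of the aliasing behaviour being discussed; the names `handler_records` and `stash_entry` are purely illustrative stand-ins for `LogCaptureHandler.records` and `item.stash[caplog_records_key][when]`:

```python
# Illustrative stand-ins for LogCaptureHandler.records and
# item.stash[caplog_records_key][when].
handler_records = ["a record"]
stash_entry = handler_records        # the stash aliases the handler's list

handler_records = []                 # reset(): rebinds the handler to a new list
assert stash_entry == ["a record"]   # the stash still points at the old one

handler_records = stash_entry        # re-establish the shared list
handler_records.clear()              # clear(): empties the list in place
assert stash_entry == []             # the stash entry stays in sync
```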
That would destroy data
Sorry, I am confused about what data would be destroyed; it is only used by the caplog fixture, and each running stage resets to new records. My idea is that each running stage should always bind a unique caplog-records list in the stash, since LogCaptureHandler is reused. Am I missing something else?
@nicoddemus we have a little time bomb here: basically every handler reset ought to stash the outputs/records, but we drop the handler state and disconnect it from the stash. I believe the internals need to change a bit, so that a handler can push new lists onto the stash and either the full set or the last-used set can be requested. This requires careful consideration, as the log record stashing is incomplete and clearing a handler will disconnect the details.
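A rough, purely hypothetical sketch of that idea (none of these names exist in pytest): per phase, the stash would keep every record list the handler has filled plus the one currently being filled.

```python
import logging
from typing import List


class PhaseRecordSets:
    """Hypothetical container: all record lists for one phase, plus the current one."""

    def __init__(self) -> None:
        self.current: List[logging.LogRecord] = []
        self.all_sets: List[List[logging.LogRecord]] = [self.current]

    def push_new_list(self) -> None:
        # Called when the handler is cleared: keep the old records around
        # and start filling a fresh list.
        self.current = []
        self.all_sets.append(self.current)

    def full_set(self) -> List[logging.LogRecord]:
        # Everything captured in this phase, including cleared sets.
        return [record for records in self.all_sets for record in records]
```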
You mean Node._stash should hold the lists and the last item?
@nicoddemus exactly
@nicoddemus this will also be helpful for subtests, which should enable separation, just like caplog sections.
```python
        self.stream = StringIO()

    def clear(self) -> None:
        self.records.clear()
```
Without moving the cleared items into some storage we cannot proceed.
Do you mean that keeping the cleared items somewhere would be useful?
Basically clear removes all data; afterwards it's gone, even from pytest's internal store.
Yes, the caplog data will be discarded on clear, but the other handler seems to store all the log records captured during the stage:
```python
def _runtest_for(self, item: nodes.Item, when: str) -> Generator[None, None, None]:
    """Implement the internals of the pytest_runtest_xxx() hooks."""
    with catching_logs(
        self.caplog_handler,
        level=self.log_level,
    ) as caplog_handler, catching_logs(
        self.report_handler,
        level=self.log_level,
    ) as report_handler:
        caplog_handler.reset()
        report_handler.reset()
        item.stash[caplog_records_key][when] = caplog_handler.records
        item.stash[caplog_handler_key] = caplog_handler
        yield
        log = report_handler.stream.getvalue().strip()
        item.add_report_section(when, "log", log)
```
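For reference, `caplog.get_records()` reads back the entry that `_runtest_for()` stores above, roughly like this (a fixture-method sketch; details may differ from the actual source):

```python
def get_records(self, when: str) -> List[logging.LogRecord]:
    """Get the logging records for one of the possible test phases."""
    return self._item.stash[caplog_records_key].get(when, [])
```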
Is there a hidden problem if we don't keep any cleared items in the caplog object? I will keep studying the pytest codebase to become familiar with the logic!
```python
def clear(self) -> None:
    self.records.clear()
    self.stream = StringIO()
```
Do we actually use the streams, and if yes, how do we reconcile their usage?
caplog.text uses the stream to show content, and the other handler's stream is written into the "Captured log call" sections after the stage finishes. Since the stream is not stored in item.stash, I don't think it has to stay unique.

Thank you for sharing your thoughtful ideas, I'm learning a lot.
```python
caplog.clear()

assert caplog.get_records("call") == []
```
I believe we need a distinction between current records and all records.
If you want all records, you have to call caplog.get_records() three times, once each for "setup", "call", and "teardown"; see the hypothetical test below.
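```python
# Hypothetical test, just to illustrate the three calls.
def test_all_records(caplog):
    all_records = []
    for when in ("setup", "call", "teardown"):
        all_records.extend(caplog.get_records(when))
    # all_records now holds everything captured so far across the three phases
```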
@EmptyRabbit what I meant is that right now we throw away the records from "call" on clear; those won't come back.
@RonnyPfannschmidt but records that won't come back are useless for the user; they asked to clear them. When calling caplog.get_records('setup') during the call stage, it should be the same as reading caplog.records during the setup stage. If we use self.records = [], we will have a bug like this:
```python
import logging


def test_caplog_clear(caplog):
    logging.error('captured one log')
    assert caplog.messages == ['captured one log']
    assert [x.getMessage() for x in caplog.get_records('call')] == ['captured one log']
    caplog.clear()
    logging.error('captured two log')
    assert caplog.messages == ['captured two log']
    assert [x.getMessage() for x in caplog.get_records('call')] == ['captured two log']
    # ['captured one log'] != ['captured two log']: the second log is not collected into item.stash
```

If we need to keep all the records cleared from the handler, that seems like a new feature.
OK, looking at this again now, it seems that I messed up. I missed the relation between the caplog handler and the report handler, so the thing I thought was missing is in reality supported by the other handler. As such I believe my concern no longer stands, and the new feature is a follow-up idea.
Approved, exciting!

I mentioned the wrong person in my last comment; updated, sorry.
Reopened to run checks.
Co-authored-by: Ronny Pfannschmidt <opensource@ronnypfannschmidt.de>
```
@@ -0,0 +1 @@
Ensure ``caplog.get_records(when)`` returns current/correct data after invoking ``caplog.clear()``.
```
Took the liberty of fixing capitalization and punctuation. 😁
Closes #9877
Add "when" property toLogCaptureHandlerin_pytest/logging.pyto fix caplog stage records not reset aftercaplog.clear()caplog_handler.reset()set LogCaptureHandler.records a new list object, which lost bind to item.stashAs each running stage use one caplog_handler object and it's records object reference in item.stash
item.stash[caplog_records_key][when] = caplog_handler.records. So after clear, any log records can not be captured to item.stash
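For completeness, a minimal self-contained sketch of the in-place clear approach discussed in the review (based on the diff fragments above; the `__init__`/`emit` scaffolding is filled in for the sketch and is not necessarily the exact merged code):

```python
from io import StringIO
import logging
from typing import List


class LogCaptureHandler(logging.StreamHandler):
    """Sketch: a stream handler that also keeps captured records in a list."""

    def __init__(self) -> None:
        super().__init__(StringIO())
        self.records: List[logging.LogRecord] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(record)
        super().emit(record)

    def reset(self) -> None:
        # Rebinds the list: any outside alias (e.g. item.stash) keeps the old one.
        self.records = []
        self.stream = StringIO()

    def clear(self) -> None:
        # Empties the list in place: outside aliases keep seeing new records.
        self.records.clear()
        self.stream = StringIO()
```

With clear(), the list object that _runtest_for() placed into item.stash keeps receiving new records after caplog.clear(), which is what the failing test shown earlier relies on.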