Cannot use python logging without maintaining all records in caplog #8307
I believe the first line in the caplog documentation already addresses this: https://docs.pytest.org/en/stable/logging.html#caplog-fixture
I can control what gets captured by caplog. But I cannot do this independently of the other handlers to achieve the following: DEBUG records streamed to a file, INFO records streamed to the terminal, and no records accumulated in memory. This is easy outside of pytest, but with caplog I can't do it. Here is my setup. In pytest.ini:
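A configuration along these lines (the exact file name and levels here are assumptions, since the original block is not shown):

```ini
[pytest]
log_cli = true
log_cli_level = INFO
log_file = pytest.log
log_file_level = DEBUG
```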
Result:
If I add a fixture that does:

```python
log_names = [name for name in logging.root.manager.loggerDict]
for name in log_names:
    caplog.set_level(logging.INFO, logger=name)
```

Result:
Any update here? The ask is to be able to log debug logs to a file and info logs to the terminal, while not leaking memory by keeping every debug log in caplog.
Similar problem here - any update regarding this issue? ^^
To reduce pytest's memory consumption we currently settled for the approach below.

```python
import logging
import warnings

import pytest

log = logging.getLogger(__name__)


def pytest_runtest_logreport(report: pytest.TestReport) -> None:
    """Drop captured logs of successful test phases to reduce pytest's excessive memory usage.

    .. warning:: No extended testing regarding possible side effects was done.

    This is a workaround for an intrinsic pytest memory leak, see relevant issues:

    - https://github.com/pytest-dev/pytest/issues/8307
    - https://github.com/pytest-dev/pytest/discussions/11704
    - https://github.com/pytest-dev/pytest/issues/3329
    - https://github.com/pytest-dev/pytest/issues/9215

    `prefix` taken from
    https://github.com/pytest-dev/pytest/blob/8a410d0ba60bca7073bf18a1503c59a3b5445780/src/_pytest/reports.py#L118-L125

    This hook gets called for every `pytest_runtest_protocol` phase, i.e. `setup`,
    `call`, `teardown`. Since we do not directly delete the private
    [`item._report_sections`](https://github.com/pytest-dev/pytest/blob/c967d508d62ad9e073d495ddfdca437188f2283e/src/_pytest/reports.py#L369-L370)
    (to hopefully be "a bit safer"), the `report.sections` are appended to on each
    consecutive hook call. This means that in later phases all the (log) sections of
    previous phases will be present again, and we thus remove up to three sections
    without a warning.
    """  # noqa: E501  # allow long url
    # TODO: test if it would be sufficient to only drop the sections in the teardown phase
    if report.passed and report.caplog != "":
        prefix = "Captured log"
        sections_to_drop = []
        for section in report.sections:
            name = section[0]
            if name.startswith(prefix):
                sections_to_drop.append(section)
        if len(sections_to_drop) > 3:
            log.debug(
                f"Removing {sections_to_drop=} from captured logs of passed {report.nodeid=}."
            )
            warnings.warn(
                RuntimeWarning(
                    f"Dropping more than the maximum expected three sections "
                    f"({', '.join(section_name for section_name, _ in sections_to_drop)}) "
                    f"from the pytest report.sections object. This might be caused by "
                    f"other pytest plugins and might in turn cause problems for those."
                ),
                stacklevel=2,
            )
        for section_to_drop in sections_to_drop:
            report.sections.remove(section_to_drop)
```
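For anyone copying this: like any other pytest hook implementation, it only takes effect if it lives in a conftest.py or in an installed plugin.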
If I understand correctly, @lovetheguitar's solution only works if you have multiple smaller tests, where setup, call and teardown happen repeatedly. In my use case, I have one long-running test that will eventually run out of memory due to caplog. Is there any possible solution for that case?
At the sprint a plugin was started for mid-test pruning; I didn't keep tabs on it.
@ThermodynamicBeta Yes indeed, the hack above, as well as the plugin we started at the end of the sprint, will only work for a test suite consisting of multiple tests. The plugin is still local only, but if people are interested that might motivate us to clean things up and get it out faster. @RonnyPfannschmidt We used … I don't think there is much to expect from pytest or a plugin in the case of a single long-running test, though.
I recall discussing a fixture to provide mid-test culling at chosen points.
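A fixture like that can be approximated today with public API. A minimal sketch, assuming a hypothetical fixture name; `caplog.clear()` is the existing pytest call that drops all records captured so far:

```python
import pytest


@pytest.fixture
def prunable_caplog(caplog):
    """Hand the test a callable it can invoke at points of its choosing."""

    def prune() -> None:
        # caplog.clear() resets the captured-records list, releasing the memory
        # held by everything logged up to this point.
        caplog.clear()

    return prune
```

A long-running test would request `prunable_caplog` and call `prune()` periodically, e.g. once per loop iteration, keeping only the records logged since the last call.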
I made a workaround with some help from some community members. Adding this to a test caps the maximum effect logging can have on memory, with the obvious downside of only saving the last n=100 messages for the post-test summary.

```python
from collections import deque
import logging


class DequeStringIO:
    """File-like object that keeps only the most recent `bound` writes."""

    def __init__(self, bound: int) -> None:
        self.buf = deque(maxlen=bound)

    def write(self, s: str) -> int:
        self.buf.append(s)
        return len(s)

    def getvalue(self) -> str:
        return "".join(self.buf)


for handler in logging.getLogger().handlers:
    if "LogCaptureHandler" in str(type(handler)):
        # there are two other handlers that don't appreciate having
        # their stream overwritten
        handler.records = deque(maxlen=100)
        handler.stream = DequeStringIO(100)
```
FWIW, instead of the hacky `if "LogCaptureHandler" in str(type(handler)):` you might want to do a way more direct `if isinstance(handler, _pytest.logging.LogCaptureHandler):` (yes, it's a private import - but you're already relying on private naming details to begin with!)
I couldn't figure out how to import _pytest, how do you do that? |
Like any other module; the leading underscore is just a naming convention, so `import _pytest.logging` works as usual.
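For reference, the isinstance variant of the loop above would then read (a sketch; `LogCaptureHandler` lives in the private `_pytest.logging` module and may move between pytest versions):

```python
import logging

from _pytest.logging import LogCaptureHandler  # private, but importable like any module

for handler in logging.getLogger().handlers:
    if isinstance(handler, LogCaptureHandler):
        ...  # same record/stream surgery as in the workaround above
```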
Description
I'd like to be able to use Python's built-in logging, specifically to stream logging records to the CLI and to a file, without having all log records stored in memory in the caplog fixture.
The reason I don't want the records stored in caplog is that this is essentially a forced memory leak. In the use case where I discovered this, all DEBUG-level logging records were being stored persistently in caplog even though my CLI logging level was INFO and my file handler level was DEBUG. The records were being streamed to the file as expected, but I don't want them kept in memory. This was a very long-running test, and the accumulating DEBUG records eventually resulted in a memory error.
Maybe I just don't know how to use caplog/logging in such a way that I can handle my logging streams as desired without keeping the log records in memory. If that's the case, would it be possible to get that clarification?
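For contrast, a minimal sketch of the plain-logging setup described above, which behaves as desired outside of pytest (the file name is arbitrary):

```python
import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)

# INFO and above go to the terminal.
cli_handler = logging.StreamHandler()
cli_handler.setLevel(logging.INFO)
root.addHandler(cli_handler)

# Everything, including DEBUG, is streamed to a file...
file_handler = logging.FileHandler("debug.log")
file_handler.setLevel(logging.DEBUG)
root.addHandler(file_handler)
# ...and no handler retains records in memory, so memory use stays flat.
```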
Environment
Example