pytest.fail in fixture results in error and not failure #5044
Comments
Can you elaborate on the use case in which you know exactly what is wrong? pytest in general considers any failure during setup, as opposed to during the execution of a test, an error instead of a failure (it is considered a harsher issue when the creation of the test conditions fails rather than the test itself). Also please note that if you know exactly why it breaks and it is within a certain expectation, a pytest.xfail may communicate that exactly, and in all cases.
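As an illustration of that suggestion, a minimal sketch of calling pytest.xfail from a fixture (the fixture and test names are made up for this example); tests requesting the fixture are then reported as xfailed rather than errored:

```python
import pytest

@pytest.fixture
def precondition():
    # A known, expected breakage: communicate it as an expected failure.
    pytest.xfail("backend does not support this configuration yet")

def test_uses_precondition(precondition):
    pass  # reported as xfail, not as an error
```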
I totally agree with you here; as I already mentioned, there could be alternatives to solve this issue.
pytest generally treating any failure in setup, rather than in the execution of a test, as an error looks like good design to me as well, but not when the user explicitly calls pytest.fail in the fixture. Shouldn't we give the user the freedom to make it a failure rather than treating it as an error by default? pytest.fail is the only way we can give the user this freedom. As I said, instead of pytest deciding that all fixture failures should be considered errors, why not simply give the user a chance to decide through pytest.fail? I cannot come up with a valid use case right now, since they can be solved through alternatives like xfail, but I believe that should not dismiss this bug.
I agree with @cvasanth2707 here. My use case is checking prerequisites in the fixture (e.g. that a required resource is available), so in this case I would expect the test to be reported as a failure, not an error. Minimal example:

```python
import pytest

@pytest.fixture
def fix():
    pytest.fail("should cause FAILURE, not ERROR")

def test_fail_in_fixture(fix):
    pass
```
I think I remember some conversations about this. If I understand right, this proposes that when pytest.fail is raised during fixture setup, the outcome is reported as a failure rather than an error. The solution to that is to introduce another explicit step in the runtest protocol. But this runs into backwards compatibility problems, because you can bet some plugins or test suites will break. So you could make it a sub-step of the call or teardown steps in the runtest protocol, I guess. Finally, if it's a new step, there's also the question of adding a corresponding hook for it. At this point I think past discussions of this got stuck in decision paralysis...
Maybe it helps if I give an actual use case example that drove me and my colleagues to use pytest.fail in a fixture. A specific use case that I'm not sure how to fix (yet ;)) is to verify that, when using Selenium, the system under test doesn't issue errors or other unwanted content to the browser's console log.
Judging a book by its cover, ...
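A minimal sketch of what such a fixture might look like, assuming Chrome and Selenium's get_log("browser") API; the teardown check and severity filter are made up for this example:

```python
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    # Teardown: inspect the browser console for unwanted output.
    severe_entries = [
        entry for entry in driver.get_log("browser")
        if entry.get("level") == "SEVERE"
    ]
    driver.quit()
    if severe_entries:
        # Today this is reported as an ERROR, not a FAILURE.
        pytest.fail(f"console log contains errors: {severe_entries}")
```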
Hi @stiiin, for your case, perhaps writing a hookwrapper around pytest_runtest_call would work. Untested:

```python
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call():
    if check_logs_for_problems():
        pytest.fail("log problems found")
    yield
    if check_logs_for_problems():
        pytest.fail("log problems found")
```
Here is my own use case: ...
Another use case: in migration from nose, where we relied on nose-timer's timeout to fail tests that take too long.
@yarikoptic i guess you have found pytest-timeout? It kind of addresses a slightly different problem than nose-timer's timeout, but many people use it for that anyway, and it achieves failing the tests if it can use a signal. It is very different from simply failing the test though, as it really is about aborting execution more than about failing and continuing; the fact that this is possible is more of a gotcha-rich side effect of convenience. If you don't want to interrupt your tests and want a not-so-caveat-ridden way of simply failing tests that took too long, you can build it with the ...
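For what it's worth, one way to build such a "fail but don't interrupt" check (my own sketch, not necessarily what the truncated sentence above was going to suggest) is a hookwrapper around pytest_runtest_makereport that flips a passing report to failed when the call phase took too long; the 5-second limit is arbitrary for this example:

```python
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Only touch the call phase; leave setup/teardown reporting alone.
    if call.when == "call" and report.passed and call.duration > 5.0:
        report.outcome = "failed"
        report.longrepr = f"test took too long: {call.duration:.2f}s"
```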
Another use case, for an expensive connection to an instrument that cannot report errors by itself: ...
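The original snippet for this comment did not survive; purely as an illustration of that kind of use case (the Instrument class, its methods, and the address are all invented for this sketch), such a fixture might look like:

```python
import pytest

@pytest.fixture(scope="session")
def instrument():
    # Hypothetical driver for a lab instrument; opening the link is expensive.
    from mylab.driver import Instrument  # invented module for this sketch
    dev = Instrument.open("TCPIP::192.168.0.42::INSTR")
    # The instrument cannot report errors on its own, so probe it here.
    if not dev.self_test():
        dev.close()
        pytest.fail("instrument self-test failed")  # currently reported as ERROR
    yield dev
    dev.close()
```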
Same problem here.

```python
import pytest

@pytest.fixture(scope="module")
def data(database):
    return database.get_data()

def test_we_can_get_a_data(data):
    ...  # This test does nothing but check that the fixture works. I want it to fail, not error.

def test_data_format(data):
    check_format(data)

def test_data_consistency(data):
    check_consistency(data)
```
Is there any new solution for this since 2022? My use case: I have a lot of tests, and some checks (asserts) are common across all of them. I want them to be done by default (automatically), so that the user does not forget to do the check; the user can disable the automatic checks in the test in those few cases that can't use them. To implement this, I'm using a fixture and do the automatic check in the fixture's teardown. So if the automatic check fails, I want the same effect as if it had been done inside the test: the test should Fail, not have an Error.
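A minimal sketch of that pattern, assuming a made-up check_invariants() helper and a hypothetical no_auto_checks marker for opting out; the pytest.fail in the teardown is what currently shows up as an ERROR instead of a FAILURE:

```python
import pytest

def check_invariants():
    """Placeholder for the project-wide checks; invented for this sketch."""
    return True

@pytest.fixture(autouse=True)
def automatic_checks(request):
    yield
    # Skip the check for tests explicitly marked as exempt.
    if request.node.get_closest_marker("no_auto_checks"):
        return
    if not check_invariants():
        pytest.fail("automatic post-test check failed")  # reported as ERROR today
```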
pytest.xfail and pytest.skip, when used in a fixture, result in xfail and skip respectively, but pytest.fail results in an error and not a failure.
Though there may be reasons why it should result in an error rather than a failure, and though there could be alternatives to this, pytest.fail resulting in an error looks like a bug.
When a user calls pytest.fail explicitly in a fixture, they know what they are doing and expect the test to fail, rather than pytest automatically turning it into an error.
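A small reproduction contrasting the three calls (fixture and test names are arbitrary):

```python
import pytest

@pytest.fixture
def skipping_fixture():
    pytest.skip("skipped from fixture")

@pytest.fixture
def xfailing_fixture():
    pytest.xfail("xfailed from fixture")

@pytest.fixture
def failing_fixture():
    pytest.fail("failed from fixture")

def test_skip(skipping_fixture):
    pass  # reported as skipped

def test_xfail(xfailing_fixture):
    pass  # reported as xfailed

def test_fail(failing_fixture):
    pass  # reported as an ERROR, though a FAILURE is expected
```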
Version:
py37 installed: atomicwrites==1.3.0,attrs==19.1.0,colorama==0.4.1,more-itertools==7.0.0,pluggy==0.9.0,py==1.8.0,pytest==4.4.0,six==1.12.0