pytest.fail in fixture results in error and not failure #5044

Open
cvasanth2707 opened this issue Apr 4, 2019 · 12 comments
Labels
topic: fixtures, type: proposal

Comments

@cvasanth2707

pytest.xfail and pytest.skip, when used in a fixture, result in xfail and skip respectively, but pytest.fail results in an error and not a failure.

There may be reasons why it should result in an error rather than a failure, and there may be alternatives, but "pytest.fail" resulting in an error looks like a bug.

When a user calls pytest.fail explicitly in a fixture, they know what they are doing and expect the test to fail, rather than pytest automatically turning it into an error.

Version:

py37 installed: atomicwrites==1.3.0,attrs==19.1.0,colorama==0.4.1,more-itertools==7.0.0,pluggy==0.9.0,py==1.8.0,pytest==4.4.0,six==1.12.0
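
A minimal reproduction (the file and fixture names are just illustrative) showing the inconsistency:

import pytest

@pytest.fixture
def skipping_fixture():
    pytest.skip("skip from fixture")

@pytest.fixture
def xfailing_fixture():
    pytest.xfail("xfail from fixture")

@pytest.fixture
def failing_fixture():
    pytest.fail("fail from fixture")

def test_skip(skipping_fixture):
    pass  # reported as SKIPPED

def test_xfail(xfailing_fixture):
    pass  # reported as XFAIL

def test_fail(failing_fixture):
    pass  # reported as ERROR, although a FAILURE would be expected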

@RonnyPfannschmidt
Member

Can you elaborate on the use case in which you know exactly what is wrong?

pytest in general considers any failure during the setup of a test, as opposed to its execution, an error instead of a failure (it is considered a more severe issue when the creation of the test conditions fails than when the test itself fails).

Also please note that if you know exactly why it breaks and it is within a certain expectation, pytest.xfail may communicate that exactly and in all cases.
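
For example, a minimal sketch where the fixture itself communicates the known breakage via xfail (service_is_available() and connect_to_service() are hypothetical helpers):

import pytest

@pytest.fixture
def service():
    if not service_is_available():
        # Reported as XFAIL, with the reason shown in the report.
        pytest.xfail("service is known to be unavailable in this environment")
    return connect_to_service()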

@cvasanth2707
Author

Also please note that if you know exactly why it breaks and it is within a certain expectation, pytest.xfail may communicate that exactly and in all cases.

I totally agree with you here; as I already mentioned, there could be alternatives to solve this issue.

pytest in general considers any failure during the setup of a test, as opposed to its execution, an error instead of a failure (it is considered a more severe issue when the creation of the test conditions fails than when the test itself fails).

Pytest generally treating any failure in the setup, rather than in the execution, of a test as an error looks like a good design to me as well, but not when the user explicitly calls pytest.fail in the fixture.

Shouldn't we give the user the freedom to make it a failure rather than considering it an error by default? 'pytest.fail' is the only way we can give the user this freedom.

As I said, instead of pytest dictating that all fixture failures are errors, why don't we simply give the user a chance to decide through 'pytest.fail'?

I cannot come up with a valid use case right now, since the ones I can think of can be solved in alternative ways like xfail. But I believe that should not suppress this bug.

@Zac-HD added the topic: fixtures and type: question labels Apr 6, 2019
@blueyed
Contributor

blueyed commented Apr 17, 2019

I agree with @cvasanth2707 here - using pytest.fail is pretty explicit.

My use case is checking prerequisites in the fixture (e.g. that a required service is running), and I would rather see the test failing than have it considered an error, since an error usually indicates something unexpected/abnormal/unhandled.

So in this case xfail does not really make sense; I should rather start/check the service.

Minimal example:

import pytest


@pytest.fixture
def fix():
    pytest.fail("should cause FAILURE, not ERROR")


def test_fail_in_fixture(fix):
    pass

@nicoddemus added the type: proposal label and removed the type: question label May 1, 2019
@flub self-assigned this Jul 15, 2020
@flub
Member

flub commented Jul 15, 2020

I think I remember some conversations about this. If I understand correctly, this proposes that when pytest.fail.Exception is raised in either the setup or the teardown, it is considered a test failure and not an error. The problem with that is that the pytest.fail() call now inhibits other teardown behaviour, so you need to be really deliberate about how you invoke pytest.fail() from your teardown.

The solution to that is to introduce another explicit step in the runtest protocol. But this runs into backwards compatibility problems, because you can bet some plugins or test suites will break. So you could make it a sub-step of the call or teardown steps in the runtest protocol, I guess.

Finally, if it's a new step, there's also the question of adding a request.addverifier() (or a better name), which would probably make sense.

At this point I think past discussions of this got stuck in decision paralysis...
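
A minimal sketch of that teardown concern, with an illustrative file handle as the resource:

import pytest

@pytest.fixture
def resource():
    handle = open("scratch.txt", "w")  # illustrative resource
    yield handle
    pytest.fail("post-test verification failed")
    handle.close()  # never reached: the failure aborts the rest of this teardown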

@stiiin

stiiin commented Dec 1, 2020

Maybe it helps if I give an actual use case example that drove me and my colleagues to use pytest.fail from fixtures.

A specific use case that I'm not sure how to fix (yet ;)) is to verify that, when using Selenium, the system under test doesn't issue errors or other unwanted content to the browser's console log.

  • Ideally, we'd be notified of such errors as we manipulate the WebDriver so that we can easily correlate them to our test behaviour, but 1) console logging may happen asynchronously, 2) some log entries are nearly unavoidable, so we need to be able to filter them, and 3) I don't think you can do that with Selenium.
  • The second best option would have us sprinkle assert web_driver.bad_console_log_entries == EMPTY_BROWSER_LOG everywhere and have a tryfirst pytest_assertrepr_compare hook for the EMPTY_BROWSER_LOG sentinel. Still not ideal, though it gives us the ability to "checkpoint" the console log if we want to.
  • It might be an option to wrap every WebDriver method to make that assertion, but then we'd lose assertion rewriting.
  • The option we're going for now is to use a wrapper fixture (as narrowly scoped as possible) for the WebDriver that validates the console log upon teardown; if we find something unacceptable, pytest.fail the teardown (a sketch follows below).

Judging a book by its cover, request.addverifier sounds exactly like what we're looking for. It's not that the tests broke the fixture (which would be an error indeed), but that the fixture shows traces of improper use.
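
A minimal sketch of the wrapper-fixture option from the last bullet, assuming hypothetical web_driver and get_bad_console_log_entries helpers:

import pytest

@pytest.fixture
def checked_web_driver(web_driver):
    yield web_driver
    bad_entries = get_bad_console_log_entries(web_driver)
    if bad_entries:
        # Currently reported as an ERROR rather than a FAILURE (the point of this issue).
        pytest.fail(f"unexpected browser console log entries: {bad_entries!r}")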

@nicoddemus
Member

Hi @stiiin,

For your case, perhaps writing a hookwrapper around pytest_runtest_call might fit the bill for you?

Untested:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call():
    # Runs before the test body.
    if check_logs_for_problems():
        pytest.fail("log problems found")
    yield
    # Runs after the test body has finished.
    if check_logs_for_problems():
        pytest.fail("log problems found")

@nyh

nyh commented Feb 15, 2021

Can you elaborate on the use case in which you know exactly what is wrong?

pytest in general considers any failure during the setup of a test, as opposed to its execution, an error instead of a failure (it is considered a more severe issue when the creation of the test conditions fails than when the test itself fails).

Here is my own use case:
I have a test suite for a server, and the fixture makes a connection to the server. If a test crashes the server, I want to be sure that the test which caused the crash "F"ails, while all the following tests will have "E"rrors because they cannot connect. So I want the fixture's teardown to check that the server is still alive, and if not, call pytest.fail(). But I don't want this explicit pytest.fail() call to be converted to an Error. I also don't want the fail() to be converted to an xfail just because the test was tagged with xfail, and I'm even less sure how to do that.
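
A minimal sketch of that fixture, assuming hypothetical connect_to_server() and server_is_alive() helpers:

import pytest

@pytest.fixture
def server_connection():
    conn = connect_to_server()
    yield conn
    conn.close()
    if not server_is_alive():
        # Meant to mark the test that crashed the server as a FAILURE,
        # but pytest currently reports it as an ERROR.
        pytest.fail("server is no longer alive after this test")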

@yarikoptic

Another use case: in the migration from nose to pytest for datalad/datalad#6273, @jwodder has not found a replacement for nose-timer, which we use to guarantee that no test exceeds a specified runtime on CI. Logically, I was quick to suggest writing a pytest fixture which would time the execution of the test. Well, here we hit this two-year-old issue: we cannot fail the test from within a fixture teardown.
But maybe there is an alternative way, or some pytest plugin, which could help us achieve the desired testing?
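
A minimal sketch of the kind of timing fixture meant here, with an illustrative 60-second limit:

import time

import pytest

MAX_SECONDS = 60

@pytest.fixture(autouse=True)
def enforce_runtime():
    start = time.monotonic()
    yield
    elapsed = time.monotonic() - start
    if elapsed > MAX_SECONDS:
        # Exactly the situation this issue is about: reported as an ERROR,
        # not as a FAILURE of the slow test.
        pytest.fail(f"test ran for {elapsed:.1f}s, limit is {MAX_SECONDS}s")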

@flub
Member

flub commented Dec 6, 2021

@yarikoptic I guess you have found pytest-timeout? It addresses a slightly different problem than nose-timer's timeout, but many people use it for that anyway, and it does fail the test if it can use a signal. It is very different from simply failing the test though, as it is really about aborting execution rather than about failing and continuing; the fact that this is possible is more of a gotcha-rich side effect of convenience.

If you don't want to interrupt your tests and want a not-so-caveat-ridden way of simply failing tests that took too long, you can build it with the pytest_runtest_* hooks: a pytest_runtest_call hookwrapper (or whatever timing granularity is appropriate) to do the actual timing, and pytest_runtest_makereport to set the status to failed. Probably smuggling the timing info via the item is the nicest way.
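
A minimal sketch of that hook-based approach for a conftest.py, with an illustrative 2-second limit (MAX_SECONDS and the _timer_duration attribute are made-up names):

import time

import pytest

MAX_SECONDS = 2.0

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    # Time only the test body itself.
    start = time.monotonic()
    yield
    # Smuggle the measured duration to the report hook via the item.
    item._timer_duration = time.monotonic() - start

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    duration = getattr(item, "_timer_duration", None)
    if report.when == "call" and report.passed and duration is not None and duration > MAX_SECONDS:
        # Flip an otherwise passing report into a plain test failure.
        report.outcome = "failed"
        report.longrepr = f"test took {duration:.2f}s, limit is {MAX_SECONDS}s"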

@jpsecher

jpsecher commented Jan 26, 2022

Another use case: an expensive connection to an instrument that cannot report errors by itself:

import pytest

@pytest.fixture(scope='function')
def instrument():
    dut = TheInstrument.instance().visa
    yield dut
    check_instrument_for_errors()

def check_instrument_for_errors():
    errors = get_instrument_errors()
    if errors:
        # This results in an Error, not a Fail as you might expect, see
        # https://github.com/pytest-dev/pytest/issues/5044
        pytest.fail(f'Error from instrument: {repr(errors)}')

def get_instrument_errors():
    dut = TheInstrument.instance().visa
    errors = dut.query('syst:err:all?')
    if errors == '0, "No error"':
        return None
    return errors

@erwinkinn

Same problem here.
My use case:

  • I have an extremely expensive call that provides me with a huge amount of data.
  • I want to split my tests to check the output from different perspectives:
    data format is ok
    data is consistent
    data is correct
  • I want a test that checks that the data itself can at least be obtained.

So my suite is something like:
@pytest.fixture(scope="module")
def data(database):
    return database.get_data()

def test_we_can_get_a_data(data):
    ...  # This test does nothing but checks that fixture works. I want it to be failed but not errored

def test_data_format(data):
    check_format(data)

def test_data_consistency(data):
    check_consistency(data)

@pkrukp

pkrukp commented Feb 25, 2025

Is there any new solution for this since 2022?

My use case: I have a lot of tests, and some checks (asserts) are common across all of them. I want those checks to run by default (automatically), so that the user does not forget them, and the user can disable the automatic checks in the few tests that can't use them.

To implement this, I'm using a fixture and do the automatic check in the fixture teardown. If the automatic check fails, I want the effect to be the same as if the check had been done inside the test: the test should Fail, not have an Error.
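
A minimal sketch of that pattern, assuming a hypothetical common_check() helper and a made-up no_auto_check marker for opting out:

import pytest

@pytest.fixture(autouse=True)
def automatic_checks(request):
    yield
    if request.node.get_closest_marker("no_auto_check"):
        return  # the test explicitly opted out of the automatic checks
    if not common_check():
        # Today this shows up as an ERROR, although it is logically a test FAILURE.
        pytest.fail("automatic post-test check failed")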
