pytest.xfail should work like pytest.mark.xfail #7071
Comments
This is a valid issue impeding my efforts as well. With a call to the pytest.xfail function, I do not see the test being executed at all. The test is just skipped, which is not the intended behavior after a pytest.xfail call.
This can be accomplished via the request fixture. I don't know why
1 passes, 2 fails (xpass strict), 3 xfails, and 4 fails.
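A minimal sketch of the request-fixture approach that yields those four outcomes, using request.node.add_marker and assuming a pytest version that honors xfail markers added at runtime:

```python
import pytest


def test_1():
    # Plain passing test: reported as passed.
    assert True


def test_2(request):
    # xfail(strict=True) added at runtime, but the test passes, so the
    # unexpected pass is reported as a failure (XPASS strict).
    request.node.add_marker(pytest.mark.xfail(reason="expected to fail", strict=True))
    assert True


def test_3(request):
    # xfail added at runtime and the test really fails: reported as XFAIL.
    request.node.add_marker(pytest.mark.xfail(reason="expected to fail"))
    assert False


def test_4():
    # Plain failing test: reported as a failure.
    assert False
```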
Imperative xfail is intended for any place in a test that discovers it can't continue and that's ok-ish; it intentionally mirrors the imperative skip. There is no plan to change its meaning. So far I haven't seen anyone demonstrate an example use case for a correct delayed one.
Thanks for the remarks @RonnyPfannschmidt
Understood, but imperative skip also mirrors mark.skip, which is not the case for xfail. I think that's what users might find surprising. Just to be clear, while I personally think the pytest.xfail behavior is somewhat unexpected, I am more than happy that the functionality exists via the request fixture.
In pandas we have a parameterized fixture that produces e.g. all of our transformation kernels for groupby. This fixture is used by a variety of tests for which some kernels may not work, depending on the test. We would like our CI to fail if these tests start working in such cases. While we were using
pytest.param can be used to bind marks to specific parameters. For more details, please link the code implementing your particular use case.
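For example, a sketch with made-up parameter values:

```python
import pytest


@pytest.mark.parametrize(
    "kernel",
    [
        "sum",
        "mean",
        # Only this parameter carries the xfail mark; strict=True makes CI
        # fail if the combination unexpectedly starts passing.
        pytest.param("prod", marks=pytest.mark.xfail(reason="prod not supported", strict=True)),
    ],
)
def test_kernel(kernel):
    # Stand-in for the real check.
    assert kernel != "prod"
```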
Thanks @RonnyPfannschmidt! Here is an example from pandas: But if it's easier to grok, here is an abstracted minimal example:
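A sketch along those lines; the kernel list, the groupby_kernel fixture, and the stand-in assertion are placeholders rather than the real pandas code:

```python
import pytest

KERNELS = ["sum", "mean", "prod", "cumcount"]


@pytest.fixture(params=KERNELS)
def groupby_kernel(request):
    # One fixture feeds many tests; which kernels are expected to fail
    # differs per test, so the mark has to be applied at runtime.
    return request.param


def test_transformation(groupby_kernel, request):
    if groupby_kernel == "cumcount":
        # strict=True so CI fails as soon as this combination starts working.
        request.node.add_marker(
            pytest.mark.xfail(reason="cumcount not supported here", strict=True)
        )
    # Stand-in for the real transformation check.
    assert groupby_kernel != "cumcount"
```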
We have tests to ensure that the list
@rhshadrach so if we had ... We have a plan for that, but I'm not going to search for the issue tonight.
The issue is #7395; assuming that can handle multiple fixtures and a combination of fixtures and @pytest.mark.parametrize, then yes, absolutely. Would I be correct in saying that your view is that marking via a decorator is preferred because the expectations are then set during test collection rather than test execution?
Thanks for looking it up. I strongly prefer the declarative way, as it implies the metadata is available after collection.
I am unable to use the @pytest.mark.xfail decorator in two scenarios:
I am using the pytest.skip function in obvious situations that are not supported, and the pytest.xfail function where a test is expected to fail due to a transient issue (buggy feature, partially built feature, ...) or where one must ensure that the test fails for the right reason and does not cause any adverse impact. The essence is to be able to use the pytest.xfail function in the same situations where one would apply the @pytest.mark.xfail decorator.
Not having the capability to run the tests fully and mark them as expected failures has been a real problem for us; it has caused certain real issues to go unnoticed that would otherwise have been caught sooner. PS: Tried my best to illustrate the use case :-)
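To make the difference concrete, a small sketch of the two styles, with flaky_feature standing in for the partially built feature:

```python
import pytest


def flaky_feature():
    # Hypothetical stand-in for the partially built feature under test.
    return False


def test_imperative():
    pytest.xfail("feature not finished")  # execution stops here, like pytest.skip
    assert flaky_feature()                # never runs


@pytest.mark.xfail(reason="feature not finished", strict=True)
def test_declarative():
    # Runs to completion: XFAIL if it fails, a strict-XPASS failure if it passes.
    assert flaky_feature()
```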
This certainly helps. Thanks a lot.
Adding the
Certainly! That should help anyone looking for this functionality. My observation: the pytest.skip and pytest.xfail functions both seem to do the same thing (functionality-wise) at the moment, unlike their respective decorator versions. Thanks.
Not sure I follow; can you clarify that? The decorated versions produce different outcomes in the terminal (skipped vs failed tests).
@nicoddemus I believe the issue is taken with xfail having
Ahh right, thanks!
pytest.xfail should do exactly what pytest.mark.xfail does. pytest.xfail acts like pytest.skip for no good reason. pytest.xfail should let the test run and then mark it as XFAIL or XPASS. The current pytest.xfail already exists as pytest.fail, and a new delayed pytest.xfail should be developed in its place.
#7060
#810