xfail doesn't actually run the test (but the docs promise that it does) #810
Comments
The documentation you mention actually refers to the xfail marker, not to calling pytest.xfail() inside a test. Consider this example:

# contents of test_foo.py
import pytest

def test_xfail():
    print()
    print(' xfail-before')
    pytest.xfail('xfail')
    print(' xfail-after')

@pytest.mark.xfail
def test_xfail_mark():
    print()
    print(' mark-before')
    print(' mark-after')
    assert 0
As you can see, the test with the mark runs completely, but the one calling pytest.xfail() stops right at the call, because pytest.xfail() works by raising an exception:

def xfail(reason=""):
    """ xfail an executing test or setup functions with the given reason."""
    __tracebackhide__ = True
    raise XFailed(reason)

While this could be changed to just let the test continue execution and report it as xfail, I don't think that's an option, as it would unfortunately change existing behavior and might break existing test suites. Regardless of whether the behavior should change or not, I think the documentation should explicitly mention this difference. A PR for the docs would be very welcome! 😄 |
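To make the mechanics concrete, here is a small illustrative test of my own (not from the thread); it assumes a pytest version that exposes the raised class as pytest.xfail.Exception:

import pytest

def test_xfail_is_just_an_exception():
    # pytest.xfail() raises XFailed; since we catch it here, the test body
    # simply continues and the test is reported as a normal pass, not xfail.
    with pytest.raises(pytest.xfail.Exception):
        pytest.xfail("raised and immediately caught")

The abort-and-report-as-xfail behavior only happens when that exception propagates out of the test body.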
I think the behavior should be changed. Many reasons:
|
You raise some valid points!
I agree that the documentation could be improved (that's why I mentioned that a PR to fix it would be welcome), and I agree that it would be better for the two xfail forms to be consistent with each other.
I agree that it is low-risk, and in my experience the imperative pytest.skip() and pytest.xfail() are often seen as doing the same job. Mechanically they do the same thing (both abort the test where they are called), but the semantics are different: skip means a test should not be executed under some expected circumstances (a Windows-only test being skipped on Linux, for example). xfail, on the other hand, means that a test always or sometimes fails when it shouldn't, like a test exercising a reported bug, or a flaky test. So after the test suite finishes, skips are OK, but xfails should be worked on eventually, or at least raise red flags that should be checked out (see the small illustration after this comment).

All in all, I don't lean too strongly toward either side of the issue, and of course I welcome the discussion. 😄 Perhaps others can chip in? Any thoughts on this @hpk42, @flub, @bubenkoff, @The-Compiler...?

(Btw, please keep in mind that I only closed the issue because people usually forget to close their own issues; at no point did I want to put an end to the discussion. I hope you understand.) |
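As a concrete illustration of that distinction, here is a small example of my own (the test names and reasons are made up):

import sys
import pytest

# skip: the test is not applicable in this environment, so it is not run at all
@pytest.mark.skipif(sys.platform != "win32", reason="exercises Windows-only behavior")
def test_windows_registry_access():
    assert True  # placeholder body

# xfail: the test is applicable and does run, but a failure is expected
# until the underlying bug is fixed; it shows up as "x" rather than "F"
@pytest.mark.xfail(reason="known bug, still open in the issue tracker")
def test_reported_bug():
    assert 0  # stands in for the buggy behavior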
I'm not sure really - I agree they should be consistent, but I also see the danger of breaking existing code. So I searched for code calling pytest.xfail() in the wild and quickly found things like this:

def test_record_to_file(camera, previewing, mode, filenames_format_options):
    ...
    if resolution[1] > 480 and format == 'mjpeg':
        pytest.xfail('Locks up camera')
    camera.start_recording(filename1, **options)

So it seems people really rely on that behavior, and it was easy to find an example to prove it - so I agree it should be clarified in the documentation, but not changed. |
If it is indeed not changed, I would like to request an xfail-like imperative thing that works like the marker. |
Maybe adding a separate imperative helper that marks the test as xfail but lets it keep running would work. If others agree with the idea, would you like to submit a PR? I'm sure it'd be much appreciated, and we're happy to help if you get stuck. |
I took a look at the code, and I must admit that I wasn't able to make much sense of it. (In particular how the marking happens and where an xfail mark takes effect.) |
I like this suggestion, but I'm having trouble figuring out how one would implement it... Currently, when pytest.xfail() is called it simply raises an exception, which pytest later catches and turns into an xfail report. But how could this be implemented in a way that "marks" the test as xfailed, yet still continues execution after the call? Mind that pytest.xfail() is a plain function with no direct access to the test item that is currently running. |
Maybe a thread-local variable that tracks which test request is active? (This would still permit recursion with some care, but no other form of concurrency.) |
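A rough sketch of that thread-local idea, purely as an illustration (nothing like this exists in pytest; xfail_but_continue and _deferred_xfail_reason are invented names). It remembers the running item per thread and, in a reporting hookwrapper, rewrites a failing report the way pytest encodes xfail results internally (a "skipped" outcome plus a wasxfail attribute):

# conftest.py (sketch)
import threading

import pytest

_current = threading.local()

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item):
    # Remember which test item is running in this thread.
    _current.item = item
    yield
    _current.item = None

def xfail_but_continue(reason=""):
    """Record an xfail reason for the running test without aborting it."""
    item = getattr(_current, "item", None)
    if item is not None:
        item._deferred_xfail_reason = reason  # invented attribute name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    reason = getattr(item, "_deferred_xfail_reason", None)
    if reason is not None and call.when == "call" and report.failed:
        # Mirror pytest's internal encoding of an xfail result.
        report.outcome = "skipped"
        report.wasxfail = reason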
I have to admit I didn't think about the implementation yet 😆. What @inducer proposed, or some similarly ugly hack, would probably be needed. |
The marks mark and the functions raise exceptions; I think that is consistent enough.
This issue has died down after getting a bit out of hand; we should open to-the-point issues for the documentation fix and close off the rest after a waiting period.
Also did a general review of the document to improve the flow Fix pytest-dev#810
From http://pytest.org/latest/skipping.html#mark-a-test-function-as-expected-to-fail:
Consider this snippet:
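(A minimal sketch of the kind of snippet being described, not the original; it assumes the hard exit was something like os._exit():)

import os
import pytest

def test_xfail_should_keep_running():
    pytest.xfail("expected failure")
    # If pytest really ran the rest of the test as documented, this would
    # terminate the whole interpreter before pytest could print its report.
    os._exit(1)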
This produces an "x" in the pytest output. If the test were actually run beyond the
xfail
as documented, then the interpreter should have just exited. Whatxfail
seems to do is terminate the execution of the test. But that's inconvenient. Suppose you have a test that'll always raise an exception. You'd have to wrap the test in a try block and callxfail
in afinally
clause. Instead, pytest should mark the test as expecting to fail and continue on.The text was updated successfully, but these errors were encountered:
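For completeness, here is a sketch of that workaround under pytest's current behavior (my example, not from the issue). It uses except rather than finally, since calling xfail in a finally clause would also turn passing runs into xfails:

import pytest

def test_known_to_raise():
    try:
        # Stands in for the code that always raises because of a known bug.
        raise RuntimeError("known bug")
    except RuntimeError:
        # pytest.xfail() aborts the test here and reports it as "x".
        pytest.xfail("known bug: RuntimeError is always raised")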