Commit 5d785e4

Merge pull request #2454 from nicoddemus/xfail-docs
Make it clear that pytest.xfail stops the test
2 parents cca4de2 + 409d2f1 commit 5d785e4

2 files changed: +121 −119 lines changed

changelog/810.doc

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+Make it clear that ``pytest.xfail`` stops test execution at the calling point and improve overall flow of the ``skipping`` docs.

doc/en/skipping.rst

Lines changed: 120 additions & 119 deletions
@@ -5,14 +5,17 @@
 Skip and xfail: dealing with tests that cannot succeed
 =====================================================================
 
-If you have test functions that cannot be run on certain platforms
-or that you expect to fail you can mark them accordingly or you
-may call helper functions during execution of setup or test functions.
+You can mark test functions that cannot be run on certain platforms
+or that you expect to fail so pytest can deal with them accordingly and
+present a summary of the test session, while keeping the test suite *green*.
 
-A *skip* means that you expect your test to pass unless the environment
-(e.g. wrong Python interpreter, missing dependency) prevents it to run.
-And *xfail* means that your test can run but you expect it to fail
-because there is an implementation problem.
+A **skip** means that you expect your test to pass only if some conditions are met,
+otherwise pytest should skip running the test altogether. Common examples are skipping
+windows-only tests on non-windows platforms, or skipping tests that depend on an external
+resource which is not available at the moment (for example a database).
+
+An **xfail** means that you expect a test to fail for some reason.
+A common example is a test for a feature not yet implemented, or a bug not yet fixed.
 
 ``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
 information about skipped/xfailed tests is not shown by default to avoid
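[Editor's note, not part of the diff: a minimal sketch of the two outcomes described above, using the real ``pytest.mark.skip`` and ``pytest.mark.xfail`` markers; the test names and reason strings are illustrative.]

    import pytest


    @pytest.mark.skip(reason="depends on an external database")
    def test_database_query():
        ...


    @pytest.mark.xfail(reason="feature not yet implemented")
    def test_future_feature():
        ...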
@@ -26,8 +29,8 @@ corresponding to the "short" letters shown in the test progress::
 .. _skipif:
 .. _`condition booleans`:
 
-Marking a test function to be skipped
--------------------------------------------
+Skipping test functions
+-----------------------
 
 .. versionadded:: 2.9
 
@@ -40,10 +43,23 @@ which may be passed an optional ``reason``:
     def test_the_unknown():
         ...
 
+
+Alternatively, it is also possible to skip imperatively during test execution or setup
+by calling the ``pytest.skip(reason)`` function:
+
+.. code-block:: python
+
+    def test_function():
+        if not valid_config():
+            pytest.skip("unsupported configuration")
+
+The imperative method is useful when it is not possible to evaluate the skip condition
+during import time.
+
 ``skipif``
 ~~~~~~~~~~
 
-.. versionadded:: 2.0, 2.4
+.. versionadded:: 2.0
 
 If you wish to skip something conditionally then you can use ``skipif`` instead.
 Here is an example of marking a test function to be skipped
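[Editor's note, not part of the diff: the ``skipif`` example this hunk leads into is cut off by the hunk boundary; reconstructed from the surrounding prose ("skipped when run on a Python3.3 interpreter"), it would look roughly like this — the reason string is illustrative.]

    import sys

    import pytest


    @pytest.mark.skipif(sys.version_info >= (3, 3),
                        reason="not supported on Python 3.3 and above")
    def test_function():
        ...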
@@ -55,24 +71,20 @@ when run on a Python3.3 interpreter::
     def test_function():
         ...
 
-During test function setup the condition ("sys.version_info >= (3,3)") is
-checked. If it evaluates to True, the test function will be skipped
-with the specified reason. Note that pytest enforces specifying a reason
-in order to report meaningful "skip reasons" (e.g. when using ``-rs``).
-If the condition is a string, it will be evaluated as python expression.
+If the condition evaluates to ``True`` during collection, the test function will be skipped,
+with the specified reason appearing in the summary when using ``-rs``.
 
-You can share skipif markers between modules. Consider this test module::
+You can share ``skipif`` markers between modules. Consider this test module::
 
     # content of test_mymodule.py
-
     import mymodule
     minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
                                     reason="at least mymodule-1.1 required")
     @minversion
     def test_function():
         ...
 
-You can import it from another test module::
+You can import the marker and reuse it in another test module::
 
     # test_myothermodule.py
     from test_mymodule import minversion
@@ -85,16 +97,15 @@ For larger test suites it's usually a good idea to have one file
 where you define the markers which you then consistently apply
 throughout your test suite.
 
-Alternatively, the pre pytest-2.4 way to specify :ref:`condition strings
-<string conditions>` instead of booleans will remain fully supported in future
-versions of pytest. It couldn't be easily used for importing markers
-between test modules so it's no longer advertised as the primary method.
+Alternatively, you can use :ref:`condition strings
+<string conditions>` instead of booleans, but they can't be shared between modules easily
+so they are supported mainly for backward compatibility reasons.
 
 
 Skip all test functions of a class or module
----------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-You can use the ``skipif`` decorator (and any other marker) on classes::
+You can use the ``skipif`` marker (as any other marker) on classes::
 
     @pytest.mark.skipif(sys.platform == 'win32',
                         reason="does not run on windows")
@@ -103,26 +114,68 @@ You can use the ``skipif`` decorator (and any other marker) on classes::
     def test_function(self):
         "will not be setup or run under 'win32' platform"
 
-If the condition is true, this marker will produce a skip result for
-each of the test methods.
+If the condition is ``True``, this marker will produce a skip result for
+each of the test methods of that class.
 
-If you want to skip all test functions of a module, you must use
+If you want to skip all test functions of a module, you may use
 the ``pytestmark`` name on the global level:
 
 .. code-block:: python
 
     # test_module.py
     pytestmark = pytest.mark.skipif(...)
 
-If multiple "skipif" decorators are applied to a test function, it
+If multiple ``skipif`` decorators are applied to a test function, it
 will be skipped if any of the skip conditions is true.
 
 .. _`whole class- or module level`: mark.html#scoped-marking
 
+
+Skipping on a missing import dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use the following helper at module level
+or within a test or test setup function::
+
+    docutils = pytest.importorskip("docutils")
+
+If ``docutils`` cannot be imported here, this will lead to a
+skip outcome of the test. You can also skip based on the
+version number of a library::
+
+    docutils = pytest.importorskip("docutils", minversion="0.3")
+
+The version will be read from the specified
+module's ``__version__`` attribute.
+
+Summary
+~~~~~~~
+
+Here's a quick guide on how to skip tests in a module in different situations:
+
+1. Skip all tests in a module unconditionally:
+
+   .. code-block:: python
+
+       pytestmark = pytest.mark.skip('all tests still WIP')
+
+2. Skip all tests in a module based on some condition:
+
+   .. code-block:: python
+
+       pytestmark = pytest.mark.skipif(sys.platform == 'win32', reason='tests for linux only')
+
+3. Skip all tests in a module if some import is missing:
+
+   .. code-block:: python
+
+       pexpect = pytest.importorskip('pexpect')
+
+
 .. _xfail:
 
-Mark a test function as expected to fail
--------------------------------------------------------
+XFail: mark test functions as expected to fail
+----------------------------------------------
 
 You can use the ``xfail`` marker to indicate that you
 expect a test to fail::
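[Editor's note, not part of the diff: the "multiple ``skipif`` decorators" rule above, sketched in code — the test is skipped if either condition holds; both conditions are illustrative.]

    import sys

    import pytest


    @pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
    @pytest.mark.skipif(sys.version_info < (3, 0), reason="requires Python 3")
    def test_function():
        ...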
@@ -135,6 +188,29 @@ This test will be run but no traceback will be reported
 when it fails. Instead terminal reporting will list it in the
 "expected to fail" (``XFAIL``) or "unexpectedly passing" (``XPASS``) sections.
 
+Alternatively, you can also mark a test as ``XFAIL`` from within a test or setup function
+imperatively:
+
+.. code-block:: python
+
+    def test_function():
+        if not valid_config():
+            pytest.xfail("failing configuration (but should work)")
+
+This will unconditionally make ``test_function`` ``XFAIL``. Note that no other code is executed
+after the ``pytest.xfail`` call, unlike with the marker. That's because it is implemented
+internally by raising a known exception.
+
+
+Here's the signature of the ``xfail`` **marker** (not the function), using Python 3 keyword-only
+arguments syntax:
+
+.. code-block:: python
+
+    def xfail(condition=None, *, reason=None, raises=None, run=True, strict=False):
+
+
+
 ``strict`` parameter
 ~~~~~~~~~~~~~~~~~~~~
 
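[Editor's note, not part of the diff: a small illustration of the keyword parameters in the signature above; the exception type and reason string are arbitrary.]

    import pytest


    # 'raises' restricts the XFAIL outcome to the given exception type;
    # any other failure is reported as a regular failure.
    @pytest.mark.xfail(raises=IndexError, reason="known off-by-one bug")
    def test_indexing():
        [][0]  # raises IndexError, so the test is reported as XFAIL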

@@ -200,18 +276,19 @@ even executed, use the ``run`` parameter as ``False``:
     def test_function():
         ...
 
-This is specially useful for marking crashing tests for later inspection.
+This is especially useful for xfailing tests that are crashing the interpreter and should be
+investigated later.
 
 
-Ignoring xfail marks
-~~~~~~~~~~~~~~~~~~~~
+Ignoring xfail
+~~~~~~~~~~~~~~
 
 By specifying on the commandline::
 
     pytest --runxfail
 
 you can force the running and reporting of an ``xfail`` marked test
-as if it weren't marked at all.
+as if it weren't marked at all. This also causes ``pytest.xfail`` to have no effect.
 
 Examples
 ~~~~~~~~
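[Editor's note, not part of the diff: the ``run=False`` behaviour described above, sketched; the reason string is illustrative.]

    import pytest


    # The test body is never executed, which is what you want when the
    # code under test crashes the interpreter (e.g. a segfault in a C extension).
    @pytest.mark.xfail(run=False, reason="crashes the interpreter")
    def test_crashing():
        ...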
@@ -245,16 +322,6 @@ Running it with the report-on-xfail option gives this output::
 
     ======= 7 xfailed in 0.12 seconds ========
 
-xfail signature summary
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Here's the signature of the ``xfail`` marker, using Python 3 keyword-only
-arguments syntax:
-
-.. code-block:: python
-
-    def xfail(condition=None, *, reason=None, raises=None, run=True, strict=False):
-
 
 
 .. _`skip/xfail with parametrize`:
@@ -263,73 +330,29 @@ Skip/xfail with parametrize
 ---------------------------
 
 It is possible to apply markers like skip and xfail to individual
-test instances when using parametrize::
+test instances when using parametrize:
+
+.. code-block:: python
 
     import pytest
 
     @pytest.mark.parametrize(("n", "expected"), [
         (1, 2),
-        pytest.mark.xfail((1, 0)),
-        pytest.mark.xfail(reason="some bug")((1, 3)),
+        pytest.param(1, 0, marks=pytest.mark.xfail),
+        pytest.param(1, 3, marks=pytest.mark.xfail(reason="some bug")),
         (2, 3),
         (3, 4),
        (4, 5),
-        pytest.mark.skipif("sys.version_info >= (3,0)")((10, 11)),
+        pytest.param(10, 11, marks=pytest.mark.skipif(sys.version_info >= (3, 0), reason="py2k")),
     ])
     def test_increment(n, expected):
         assert n + 1 == expected
 
 
-Imperative xfail from within a test or setup function
-------------------------------------------------------
-
-If you cannot declare xfail- of skipif conditions at import
-time you can also imperatively produce an according outcome
-imperatively, in test or setup code::
-
-    def test_function():
-        if not valid_config():
-            pytest.xfail("failing configuration (but should work)")
-        # or
-        pytest.skip("unsupported configuration")
-
-Note that calling ``pytest.skip`` at the module level
-is not allowed since pytest 3.0. If you are upgrading
-and ``pytest.skip`` was being used at the module level, you can set a
-``pytestmark`` variable:
-
-.. code-block:: python
-
-    # before pytest 3.0
-    pytest.skip('skipping all tests because of reasons')
-    # after pytest 3.0
-    pytestmark = pytest.mark.skip('skipping all tests because of reasons')
-
-``pytestmark`` applies a mark or list of marks to all tests in a module.
-
-
-Skipping on a missing import dependency
---------------------------------------------------
-
-You can use the following import helper at module level
-or within a test or test setup function::
-
-    docutils = pytest.importorskip("docutils")
-
-If ``docutils`` cannot be imported here, this will lead to a
-skip outcome of the test. You can also skip based on the
-version number of a library::
-
-    docutils = pytest.importorskip("docutils", minversion="0.3")
-
-The version will be read from the specified
-module's ``__version__`` attribute.
-
-
 .. _string conditions:
 
-specifying conditions as strings versus booleans
-----------------------------------------------------------
+Conditions as strings instead of booleans
+-----------------------------------------
 
 Prior to pytest-2.4 the only way to specify skipif/xfail conditions was
 to use strings::
@@ -346,7 +369,7 @@ all the module globals, and ``os`` and ``sys`` as a minimum.
 Since pytest-2.4 `condition booleans`_ are considered preferable
 because markers can then be freely imported between test modules.
 With strings you need to import not only the marker but all variables
-everything used by the marker, which violates encapsulation.
+used by the marker, which violates encapsulation.
 
 The reason for specifying the condition as a string was that ``pytest`` can
 report a summary of skip conditions based purely on the condition string.
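[Editor's note, not part of the diff: the string-versus-boolean contrast drawn above, side by side; the version numbers and reason string are illustrative.]

    import sys

    import pytest


    # String condition: evaluated by pytest as a Python expression with
    # the test module's globals, so it cannot easily be shared across modules.
    @pytest.mark.skipif("sys.version_info >= (3, 0)")
    def test_string_condition():
        ...


    # Boolean condition: evaluated right here; the resulting marker object
    # can be imported and reused by other test modules.
    @pytest.mark.skipif(sys.version_info >= (3, 0), reason="requires Python 2")
    def test_boolean_condition():
        ...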
@@ -387,25 +410,3 @@ The equivalent with "boolean conditions" is::
 ``config.getvalue()`` will not execute correctly.
 
 
-Summary
--------
-
-Here's a quick guide on how to skip tests in a module in different situations:
-
-1. Skip all tests in a module unconditionally:
-
-   .. code-block:: python
-
-       pytestmark = pytest.mark.skip('all tests still WIP')
-
-2. Skip all tests in a module based on some condition:
-
-   .. code-block:: python
-
-       pytestmark = pytest.mark.skipif(sys.platform == 'win32', 'tests for linux only')
-
-3. Skip all tests in a module if some import is missing:
-
-   .. code-block:: python
-
-       pexpect = pytest.importorskip('pexpect')
