Skip and xfail: dealing with tests that cannot succeed
======================================================

You can mark test functions that cannot be run on certain platforms
or that you expect to fail so pytest can deal with them accordingly and
present a summary of the test session, while keeping the test suite *green*.

A **skip** means that you expect your test to pass only if some conditions are met,
otherwise pytest should skip running the test altogether. Common examples are skipping
windows-only tests on non-windows platforms, or skipping tests that depend on an external
resource which is not available at the moment (for example a database).

An **xfail** means that you expect a test to fail for some reason.
A common example is a test for a feature not yet implemented, or a bug not yet fixed.

``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
information about skipped/xfailed tests is not shown by default to avoid
cluttering the output. You can use the ``-r`` option to see details
corresponding to the "short" letters shown in the test progress::

    pytest -rxs  # show extra info on skips and xfails

.. _skipif:
.. _`condition booleans`:

Skipping test functions
-----------------------

.. versionadded:: 2.9

The simplest way to skip a test function is to mark it with the ``skip`` decorator,
which may be passed an optional ``reason``:

.. code-block:: python

    @pytest.mark.skip(reason="no way of currently testing this")
    def test_the_unknown():
        ...

Alternatively, it is also possible to skip imperatively during test execution or setup
by calling the ``pytest.skip(reason)`` function:

.. code-block:: python

    def test_function():
        if not valid_config():
            pytest.skip("unsupported configuration")

The imperative method is useful when it is not possible to evaluate the skip condition
during import time.

``skipif``
~~~~~~~~~~

.. versionadded:: 2.0

If you wish to skip something conditionally then you can use ``skipif`` instead.
Here is an example of marking a test function to be skipped
when run on a Python3.3 interpreter::

    import sys
    @pytest.mark.skipif(sys.version_info < (3,3),
                        reason="requires python3.3")
    def test_function():
        ...

If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``.
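Because the condition is a plain boolean, the expression is evaluated once at import time when the marker is created, and the marker object simply stores the result. This can be observed directly; a small sketch (the marker name ``requires_py4`` is illustrative, not from the pytest docs):

```python
import sys

import pytest

# With a boolean condition, the expression is evaluated right here at
# import time; the marker object stores the already-computed result.
requires_py4 = pytest.mark.skipif(sys.version_info >= (4, 0),
                                  reason="not ready for python4")

# Inspect the underlying Mark: the condition arrived as False on any
# current interpreter, so tests carrying this marker would run normally.
mark = requires_py4.mark
print(mark.name, mark.args, mark.kwargs)
```

This is exactly why boolean conditions can be shared between modules: the marker carries a concrete value, not an expression to re-evaluate.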

You can share ``skipif`` markers between modules. Consider this test module::

    # content of test_mymodule.py
    import mymodule
    minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
                                    reason="at least mymodule-1.1 required")

    @minversion
    def test_function():
        ...

You can import the marker and reuse it in another test module::

    # test_myothermodule.py
    from test_mymodule import minversion

    @minversion
    def test_anotherfunction():
        ...

For larger test suites it's usually a good idea to have one file
where you define the markers which you then consistently apply
throughout your test suite.

Alternatively, you can use :ref:`condition strings
<string conditions>` instead of booleans, but they can't be shared between modules easily
so they are supported mainly for backward compatibility reasons.

Skip all test functions of a class or module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use the ``skipif`` marker (as any other marker) on classes::

    @pytest.mark.skipif(sys.platform == 'win32',
                        reason="does not run on windows")
    class TestPosixCalls:

        def test_function(self):
            "will not be setup or run under 'win32' platform"

If the condition is ``True``, this marker will produce a skip result for
each of the test methods of that class.

If you want to skip all test functions of a module, you may use
the ``pytestmark`` name on the global level:

.. code-block:: python

    # test_module.py
    pytestmark = pytest.mark.skipif(...)

If multiple ``skipif`` decorators are applied to a test function, it
will be skipped if any of the skip conditions is true.

.. _`whole class- or module level`: mark.html#scoped-marking

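The "any of the skip conditions" behavior follows from how marks accumulate: each applied decorator is recorded on the function. A small sketch showing two stacked markers (both conditions are deliberately always false, so the test would run; the function and conditions are illustrative):

```python
import sys

import pytest

# Two stacked skipif markers: the test is skipped if ANY condition is
# true.  Both conditions below are always false on a real interpreter,
# but both marks are still recorded on the function.
@pytest.mark.skipif(sys.platform == "the-impossible-os", reason="never true")
@pytest.mark.skipif(sys.version_info < (2, 0), reason="also never true")
def test_stacked():
    assert 1 + 1 == 2

# Applying a marker attaches it to the function's ``pytestmark`` list,
# which is what the collector later inspects.
print(len(test_stacked.pytestmark))
```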

Skipping on a missing import dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use the following helper at module level
or within a test or test setup function::

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test. You can also skip based on the
version number of a library::

    docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified
module's ``__version__`` attribute.
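As a quick illustration of the return value, ``importorskip`` hands back the imported module on success, so it can be bound to a name and used like a regular import. The stdlib ``json`` module is used below only so the import is guaranteed to succeed in this sketch:

```python
import pytest

# On success importorskip simply returns the module, usable as if it
# had been imported normally.
json = pytest.importorskip("json")
assert json.loads("[1, 2]") == [1, 2]

# minversion is checked against the module's __version__ attribute;
# stdlib json exposes one (e.g. "2.0.9"), so this also succeeds.
json = pytest.importorskip("json", minversion="1.0")
```

If the module were missing (or too old), the call would instead raise pytest's internal skip exception, producing a skip outcome when run inside a test.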

Summary
~~~~~~~

Here's a quick guide on how to skip tests in a module in different situations:

1. Skip all tests in a module unconditionally:

   .. code-block:: python

       pytestmark = pytest.mark.skip('all tests still WIP')

2. Skip all tests in a module based on some condition:

   .. code-block:: python

       pytestmark = pytest.mark.skipif(sys.platform == 'win32',
                                       reason='tests for linux only')

3. Skip all tests in a module if some import is missing:

   .. code-block:: python

       pexpect = pytest.importorskip('pexpect')


.. _xfail:

XFail: mark test functions as expected to fail
----------------------------------------------

You can use the ``xfail`` marker to indicate that you
expect a test to fail::

    @pytest.mark.xfail
    def test_function():
        ...

This test will be run but no traceback will be reported
when it fails. Instead terminal reporting will list it in the
"expected to fail" (``XFAIL``) or "unexpectedly passing" (``XPASS``) sections.

Alternatively, you can also mark a test as ``XFAIL`` from within a test or setup function
imperatively:

.. code-block:: python

    def test_function():
        if not valid_config():
            pytest.xfail("failing configuration (but should work)")

This will unconditionally make ``test_function`` ``XFAIL``. Note that no other code is executed
after the ``pytest.xfail`` call, differently from the marker. That's because it is implemented
internally by raising a known exception.

Here's the signature of the ``xfail`` **marker** (not the function), using Python 3 keyword-only
arguments syntax:

.. code-block:: python

    def xfail(condition=None, *, reason=None, raises=None, run=True, strict=False):
        ...

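The "implemented internally by raising a known exception" detail can be observed directly. This sketch calls ``pytest.xfail`` outside of any test run and catches the exception it raises, showing that the statement after the call never executes:

```python
import pytest

# pytest.xfail raises a special exception, so nothing after the call
# executes -- unlike the @pytest.mark.xfail marker, which lets the
# test body run to completion.
reached = False
caught = None
try:
    pytest.xfail("failing configuration (but should work)")
    reached = True  # never executed
except BaseException as exc:
    caught = type(exc).__name__

print(reached, caught)
```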

``strict`` parameter
~~~~~~~~~~~~~~~~~~~~

Both ``XFAIL`` and ``XPASS`` don't fail the test suite, unless the ``strict``
keyword-only parameter is passed as ``True``:

.. code-block:: python

    @pytest.mark.xfail(strict=True)
    def test_function():
        ...

This will make ``XPASS`` ("unexpectedly passing") results from this test fail
the test suite.

``run`` parameter
~~~~~~~~~~~~~~~~~

If a test should be reported as xfail but should not be
even executed, use the ``run`` parameter as ``False``:

.. code-block:: python

    @pytest.mark.xfail(run=False)
    def test_function():
        ...

This is especially useful for xfailing tests that are crashing the interpreter and should be
investigated later.


Ignoring xfail
~~~~~~~~~~~~~~

By specifying on the commandline::

    pytest --runxfail

you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all. This also causes ``pytest.xfail`` to produce no effect.

Examples
~~~~~~~~

Running it with the report-on-xfail option gives this output::

    ======= 7 xfailed in 0.12 seconds ========

.. _`skip/xfail with parametrize`:

Skip/xfail with parametrize
---------------------------

It is possible to apply markers like skip and xfail to individual
test instances when using parametrize:

.. code-block:: python

    import sys

    import pytest

    @pytest.mark.parametrize(("n", "expected"), [
        (1, 2),
        pytest.param(1, 0, marks=pytest.mark.xfail),
        pytest.param(1, 3, marks=pytest.mark.xfail(reason="some bug")),
        (2, 3),
        (3, 4),
        (4, 5),
        pytest.param(10, 11, marks=pytest.mark.skipif(sys.version_info >= (3, 0),
                                                      reason="py2k")),
    ])
    def test_increment(n, expected):
        assert n + 1 == expected
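The ``pytest.param`` objects used above can also be inspected on their own: each one bundles the argument values together with the marks that should apply to that test instance. A small sketch:

```python
import pytest

# pytest.param wraps the argument values together with any marks in a
# ParameterSet; parametrize later unpacks .values into the test's
# arguments and applies .marks to that single instance.
p = pytest.param(1, 0, marks=pytest.mark.xfail)

print(p.values)      # the wrapped arguments
print(len(p.marks))  # one mark: xfail
```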

.. _string conditions:

Conditions as strings instead of booleans
-----------------------------------------

Prior to pytest-2.4 the only way to specify skipif/xfail conditions was
to use strings::

    import sys
    @pytest.mark.skipif("sys.version_info >= (3,3)")
    def test_function():
        ...

During test function setup the skipif condition is evaluated by calling
``eval("sys.version_info >= (3,3)", namespace)``.  The namespace contains
all the module globals, and ``os`` and ``sys`` as a minimum.

Since pytest-2.4 `condition booleans`_ are considered preferable
because markers can then be freely imported between test modules.
With strings you need to import not only the marker but all variables
used by the marker, which violates encapsulation.

The reason for specifying the condition as a string was that ``pytest`` can
report a summary of skip conditions based purely on the condition string.
Note that ``conftest.py`` files are imported before command line parsing
takes place, so code running at import time there cannot query options and
``config.getvalue()`` will not execute correctly.