Commit 4265ab3

Merge pull request #5773 from asottile/release-5.1.1

Preparing release version 5.1.1

Parents: daff906 + b135f5a
26 files changed: +117 −85 lines

CHANGELOG.rst — 9 additions, 0 deletions

@@ -18,6 +18,15 @@ with advance notice in the **Deprecations** section of releases.
 
 .. towncrier release notes start
 
+pytest 5.1.1 (2019-08-20)
+=========================
+
+Bug Fixes
+---------
+
+- `#5751 <https://github.com/pytest-dev/pytest/issues/5751>`_: Fixed ``TypeError`` when importing pytest on Python 3.5.0 and 3.5.1.
+
+
 pytest 5.1.0 (2019-08-15)
 =========================
 
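The ``TypeError`` referenced in #5751 occurred while importing pytest on the earliest 3.5.x point releases, where the ``typing`` module was still incomplete. The actual patch is not part of this diff; a hedged sketch of the usual shape of such a fix — tolerating a ``typing`` name that may be missing or broken on old point releases — might look like:

```python
# Hypothetical sketch only: the real pytest patch for #5751 is not shown
# in this diff. The usual shape of such a fix is to guard a `typing` name
# that is absent or broken on early 3.5.x releases, with a fallback so
# that importing the package itself never fails.
try:
    from typing import Type  # may be unavailable on some early 3.5.x releases
except ImportError:
    Type = None  # fallback placeholder so the module still imports cleanly


def describe(cls):
    """Return a short description of a class object."""
    return "class:" + cls.__name__


print(describe(int))
```

On any modern interpreter the guarded import simply succeeds; the point of the pattern is that the module-level import can no longer raise on the old interpreters.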

changelog/5751.bugfix.rst — 0 additions, 1 deletion

This file was deleted: its one-line changelog fragment was folded by towncrier into the new ``pytest 5.1.1`` section of ``CHANGELOG.rst`` as part of this release commit.
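pytest keeps unreleased changelog entries as per-issue fragment files named ``<issue>.<type>.rst`` under ``changelog/``; at release time towncrier folds them into ``CHANGELOG.rst`` and removes them, which is why this file disappears in the same commit that adds the 5.1.1 section. A simplified, hypothetical sketch of that folding step (not towncrier itself — ``build_section`` is an invented name):

```python
import pathlib
import tempfile

# Simplified sketch of the fragment-folding step (not towncrier itself):
# each changelog/<issue>.<type>.rst file becomes one bullet in the release
# section, grouped under a heading derived from the fragment type, and the
# fragment file is deleted once it has been folded in.
def build_section(version, date, fragment_dir):
    titles = {"bugfix": "Bug Fixes", "feature": "Features"}
    header = "pytest {} ({})".format(version, date)
    lines = [header, "=" * len(header), ""]
    for frag in sorted(pathlib.Path(fragment_dir).glob("*.rst")):
        issue, kind, _ = frag.name.split(".")
        title = titles.get(kind, kind)
        lines += [title, "-" * len(title), ""]
        lines.append("- #{}: {}".format(issue, frag.read_text().strip()))
        frag.unlink()  # fragments are removed once folded in
    return "\n".join(lines)


tmp = tempfile.mkdtemp()
frag = pathlib.Path(tmp) / "5751.bugfix.rst"
frag.write_text("Fixed TypeError when importing pytest on Python 3.5.0 and 3.5.1.")
section = build_section("5.1.1", "2019-08-20", tmp)
print(section.splitlines()[0])
```

The real tool also handles ordering, de-duplication, and multiple fragments per type; this sketch only shows why the fragment file and the new ``CHANGELOG.rst`` section move together in one commit.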

doc/en/announce/index.rst — 1 addition, 0 deletions

@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2
 
 
+   release-5.1.1
    release-5.1.0
    release-5.0.1
    release-5.0.0

doc/en/announce/release-5.1.1.rst — new file, 24 additions

pytest-5.1.1
=======================================

pytest 5.1.1 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Florian Bruhin
* Hugo van Kemenade
* Ran Benita
* Ronny Pfannschmidt


Happy testing,
The pytest Development Team

doc/en/assert.rst — 3 additions, 3 deletions

@@ -47,7 +47,7 @@ you will see the return value of the function call:
 E + where 3 = f()
 
 test_assert1.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
 
 ``pytest`` has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary

@@ -208,7 +208,7 @@ if you run this module:
 E Use -v to get the full diff
 
 test_assert2.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
 
 Special comparisons are done for a number of cases:
 

@@ -279,7 +279,7 @@ the conftest file:
 E vals: 1 != 2
 
 test_foocompare.py:12: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
 
 .. _assert-details:
 .. _`assert introspection`:

doc/en/builtin.rst — 1 addition, 1 deletion

@@ -160,7 +160,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
 in python < 3.6 this is a pathlib2.Path
 
 
-no tests ran in 0.01s
+no tests ran in 0.00s
 
 You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
 

doc/en/cache.rst — 6 additions, 6 deletions

@@ -75,7 +75,7 @@ If you run this for the first time you will see two failures:
 E Failed: bad luck
 
 test_50.py:7: Failed
-2 failed, 48 passed in 0.16s
+2 failed, 48 passed in 0.08s
 
 If you then run it with ``--lf``:
 

@@ -114,7 +114,7 @@ If you then run it with ``--lf``:
 E Failed: bad luck
 
 test_50.py:7: Failed
-===================== 2 failed, 48 deselected in 0.07s =====================
+===================== 2 failed, 48 deselected in 0.02s =====================
 
 You have run only the two failing tests from the last run, while the 48 passing
 tests have not been run ("deselected").

@@ -158,7 +158,7 @@ of ``FF`` and dots):
 E Failed: bad luck
 
 test_50.py:7: Failed
-======================= 2 failed, 48 passed in 0.15s =======================
+======================= 2 failed, 48 passed in 0.07s =======================
 
 .. _`config.cache`:
 

@@ -230,7 +230,7 @@ If you run this command for the first time, you can see the print statement:
 test_caching.py:20: AssertionError
 -------------------------- Captured stdout setup ---------------------------
 running expensive computation...
-1 failed in 0.05s
+1 failed in 0.02s
 
 If you run it a second time, the value will be retrieved from
 the cache and nothing will be printed:

@@ -249,7 +249,7 @@ the cache and nothing will be printed:
 E assert 42 == 23
 
 test_caching.py:20: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
 
 See the :ref:`cache-api` for more details.
 

@@ -300,7 +300,7 @@ filtering:
 example/value contains:
 42
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 Clearing Cache content
 ----------------------
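The ``config.cache`` object exercised in these hunks persists JSON-serializable values across test runs under ``.pytest_cache``, which is why the ``example/value`` key survives between invocations. A minimal stdlib sketch of that behaviour (illustrative, not pytest's implementation — ``SimpleCache`` is an invented name):

```python
import json
import tempfile
from pathlib import Path

# Illustrative sketch of what pytest's config.cache does: get/set of
# JSON-serializable values persisted on disk under .pytest_cache/v/<key>.
# SimpleCache is an invented name, not pytest's class.
class SimpleCache:
    def __init__(self, rootdir):
        self._base = Path(rootdir) / ".pytest_cache" / "v"

    def get(self, key, default):
        try:
            return json.loads((self._base / key).read_text())
        except (OSError, ValueError):
            return default  # missing or unreadable entry

    def set(self, key, value):
        path = self._base / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(value))


rootdir = tempfile.mkdtemp()
cache = SimpleCache(rootdir)
assert cache.get("example/value", None) is None  # first run: nothing cached
cache.set("example/value", 42)                   # cache the expensive result
assert cache.get("example/value", None) == 42    # later runs hit the cache
```

Because the value lives on disk, a second ``SimpleCache`` over the same root directory sees it too — mirroring the docs' "run it a second time and nothing will be printed" behaviour.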

doc/en/capture.rst — 1 addition, 1 deletion

@@ -91,7 +91,7 @@ of the failing function and hide the other one:
 test_module.py:12: AssertionError
 -------------------------- Captured stdout setup ---------------------------
 setting up <function test_func2 at 0xdeadbeef>
-======================= 1 failed, 1 passed in 0.05s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
 
 Accessing captured output from a test function
 ---------------------------------------------------

doc/en/doctest.rst — 2 additions, 2 deletions

@@ -36,7 +36,7 @@ then you can just invoke ``pytest`` directly:
 
 test_example.txt . [100%]
 
-============================ 1 passed in 0.02s =============================
+============================ 1 passed in 0.01s =============================
 
 By default, pytest will collect ``test*.txt`` files looking for doctest directives, but you
 can pass additional globs using the ``--doctest-glob`` option (multi-allowed).

@@ -66,7 +66,7 @@ and functions, including from test modules:
 mymodule.py . [ 50%]
 test_example.txt . [100%]
 
-============================ 2 passed in 0.03s =============================
+============================ 2 passed in 0.01s =============================
 
 You can make these changes permanent in your project by
 putting them into a pytest.ini file like this:
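The runs above collect doctests from text files and module docstrings; the same examples can be executed with the stdlib ``doctest`` machinery that pytest builds on. A small self-contained sketch:

```python
import doctest

# A docstring with doctest examples of the kind pytest collects when run
# with --doctest-modules; here the stdlib runner executes them directly.
def add(a, b):
    """
    >>> add(2, 3)
    5
    >>> add("a", "b")
    'ab'
    """
    return a + b


runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(add):
    runner.run(test)

# Two examples were tried, none failed.
assert runner.tries == 2
assert runner.failures == 0
```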

doc/en/example/markers.rst — 9 additions, 9 deletions

@@ -69,7 +69,7 @@ Or the inverse, running all tests except the webtest ones:
 test_server.py::test_another PASSED [ 66%]
 test_server.py::TestClass::test_method PASSED [100%]
 
-===================== 3 passed, 1 deselected in 0.02s ======================
+===================== 3 passed, 1 deselected in 0.01s ======================
 
 Selecting tests based on their node ID
 --------------------------------------

@@ -120,7 +120,7 @@ Or select multiple nodes:
 test_server.py::TestClass::test_method PASSED [ 50%]
 test_server.py::test_send_http PASSED [100%]
 
-============================ 2 passed in 0.02s =============================
+============================ 2 passed in 0.01s =============================
 
 .. _node-id:
 

@@ -176,7 +176,7 @@ And you can also run all tests except the ones that match the keyword:
 test_server.py::test_another PASSED [ 66%]
 test_server.py::TestClass::test_method PASSED [100%]
 
-===================== 3 passed, 1 deselected in 0.02s ======================
+===================== 3 passed, 1 deselected in 0.01s ======================
 
 Or to select "http" and "quick" tests:
 

@@ -192,7 +192,7 @@ Or to select "http" and "quick" tests:
 test_server.py::test_send_http PASSED [ 50%]
 test_server.py::test_something_quick PASSED [100%]
 
-===================== 2 passed, 2 deselected in 0.02s ======================
+===================== 2 passed, 2 deselected in 0.01s ======================
 
 .. note::
 

@@ -413,7 +413,7 @@ the test needs:
 
 test_someenv.py s [100%]
 
-============================ 1 skipped in 0.01s ============================
+============================ 1 skipped in 0.00s ============================
 
 and here is one that specifies exactly the environment needed:
 

@@ -499,7 +499,7 @@ The output is as follows:
 $ pytest -q -s
 Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
 .
-1 passed in 0.01s
+1 passed in 0.00s
 
 We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
 

@@ -623,7 +623,7 @@ then you will see two tests skipped and two executed tests as expected:
 
 ========================= short test summary info ==========================
 SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
-======================= 2 passed, 2 skipped in 0.02s =======================
+======================= 2 passed, 2 skipped in 0.01s =======================
 
 Note that if you specify a platform via the marker-command line option like this:
 

@@ -711,7 +711,7 @@ We can now use the ``-m option`` to select one set:
 test_module.py:8: in test_interface_complex
 assert 0
 E assert 0
-===================== 2 failed, 2 deselected in 0.07s ======================
+===================== 2 failed, 2 deselected in 0.02s ======================
 
 or to select both "event" and "interface" tests:
 

@@ -739,4 +739,4 @@ or to select both "event" and "interface" tests:
 test_module.py:12: in test_event_simple
 assert 0
 E assert 0
-===================== 3 failed, 1 deselected in 0.07s ======================
+===================== 3 failed, 1 deselected in 0.03s ======================
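Selection with ``-m`` and ``-k`` deselects rather than skips: non-matching tests never run at all, which is why the summaries above report "deselected" counts. A toy sketch of marker-based deselection (pytest's real matcher evaluates full boolean expressions like ``webtest and not quick``; this only handles a single marker name, and the test names are taken from the docs' example):

```python
# Toy sketch of -m style selection/deselection. Each test carries a set of
# marker names; "-m webtest" keeps the marked tests, and the inverse
# ("-m 'not webtest'") keeps the unmarked ones. Single-marker only.
tests = {
    "test_send_http": {"webtest"},
    "test_something_quick": {"quick"},
    "test_another": set(),
    "TestClass::test_method": set(),
}


def select(marker, invert=False):
    keep = {name for name, marks in tests.items() if (marker in marks) != invert}
    return keep, set(tests) - keep


# Like running the inverse selection: everything except the webtest ones.
selected, deselected = select("webtest", invert=True)
assert len(selected) == 3 and len(deselected) == 1  # 3 run, 1 deselected
```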

doc/en/example/nonpython.rst — 3 additions, 3 deletions

@@ -41,7 +41,7 @@ now execute the test specification:
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-======================= 1 failed, 1 passed in 0.06s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
 
 .. regendoc:wipe
 

@@ -77,7 +77,7 @@ consulted when reporting in ``verbose`` mode:
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-======================= 1 failed, 1 passed in 0.07s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
 
 .. regendoc:wipe
 

@@ -97,4 +97,4 @@ interesting to just look at the collection tree:
 <YamlItem hello>
 <YamlItem ok>
 
-========================== no tests ran in 0.05s ===========================
+========================== no tests ran in 0.02s ===========================

doc/en/example/parametrize.rst — 13 additions, 14 deletions

@@ -73,7 +73,7 @@ let's run the full monty:
 E assert 4 < 4
 
 test_compute.py:4: AssertionError
-1 failed, 4 passed in 0.06s
+1 failed, 4 passed in 0.02s
 
 As expected when running the full range of ``param1`` values
 we'll get an error on the last one.

@@ -172,7 +172,7 @@ objects, they are still using the default pytest representation:
 <Function test_timedistance_v3[forward]>
 <Function test_timedistance_v3[backward]>
 
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.01s ===========================
 
 In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
 together with the actual data, instead of listing them separately.

@@ -229,7 +229,7 @@ this is a fully self-contained example which you can run with:
 
 test_scenarios.py .... [100%]
 
-============================ 4 passed in 0.02s =============================
+============================ 4 passed in 0.01s =============================
 
 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
 

@@ -248,7 +248,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
 <Function test_demo1[advanced]>
 <Function test_demo2[advanced]>
 
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.01s ===========================
 
 Note that we told ``metafunc.parametrize()`` that your scenario values
 should be considered class-scoped. With pytest-2.3 this leads to a

@@ -323,7 +323,7 @@ Let's first see how it looks like at collection time:
 <Function test_db_initialized[d1]>
 <Function test_db_initialized[d2]>
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 And then when we run the test:
 

@@ -343,7 +343,7 @@ And then when we run the test:
 E Failed: deliberately failing for demo purposes
 
 test_backends.py:8: Failed
-1 failed, 1 passed in 0.05s
+1 failed, 1 passed in 0.02s
 
 The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
 

@@ -394,7 +394,7 @@ The result of this test will be successful:
 <Module test_indirect_list.py>
 <Function test_indirect[a-b]>
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 .. regendoc:wipe
 

@@ -454,7 +454,7 @@ argument sets to use for each test function. Let's run it:
 E assert 1 == 2
 
 test_parametrize.py:21: AssertionError
-1 failed, 2 passed in 0.07s
+1 failed, 2 passed in 0.03s
 
 Indirect parametrization with multiple fixtures
 --------------------------------------------------------------

@@ -475,11 +475,10 @@ Running it results in some skips if we don't have all the python interpreters in
 .. code-block:: pytest
 
 $ pytest -rs -q multipython.py
-ssssssssssss...ssssssssssss [100%]
+ssssssssssss......sss...... [100%]
 ========================= short test summary info ==========================
-SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
-SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.7' not found
-3 passed, 24 skipped in 0.43s
+SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
+12 passed, 15 skipped in 0.62s
 
 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------

@@ -548,7 +547,7 @@ If you run this with reporting for skips enabled:
 
 ========================= short test summary info ==========================
 SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:13: could not import 'opt2': No module named 'opt2'
-======================= 1 passed, 1 skipped in 0.02s =======================
+======================= 1 passed, 1 skipped in 0.01s =======================
 
 You'll see that we don't have an ``opt2`` module and thus the second test run
 of our ``test_func1`` was skipped. A few notes:

@@ -610,7 +609,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
 test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
 test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
 
-=============== 2 passed, 15 deselected, 1 xfailed in 0.23s ================
+=============== 2 passed, 15 deselected, 1 xfailed in 0.08s ================
 
 As the result:
 

doc/en/example/pythoncollection.rst — 2 additions, 2 deletions

@@ -221,7 +221,7 @@ You can always peek at the collection tree without running tests like this:
 <Function test_method>
 <Function test_anothermethod>
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 .. _customizing-test-collection:
 

@@ -297,7 +297,7 @@ file will be left out:
 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 0 items
 
-========================== no tests ran in 0.04s ===========================
+========================== no tests ran in 0.01s ===========================
 
 It's also possible to ignore files based on Unix shell-style wildcards by adding
 patterns to ``collect_ignore_glob``.

doc/en/example/reportingdemo.rst — 1 addition, 1 deletion

@@ -650,4 +650,4 @@ Here is a nice run of several failures and how ``pytest`` presents things:
 E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
 
 failure_demo.py:282: AssertionError
-============================ 44 failed in 0.82s ============================
+============================ 44 failed in 0.26s ============================
