
Raise exception if parametrize collects an empty parameter set #3970


Merged: 3 commits, Sep 18, 2018

Conversation

sambarluc
Contributor

Currently, parametrize skips or xfails a test if no arguments are collected. In both cases, the test is considered successful in most (all?) testing systems. This can potentially hide bugs in the argument collection of parametrize, and thus lead to tests not being performed by mistake. The PR adds an ini option that causes a specific exception to be raised when parametrize collects no arguments.

Implements feature request #3849
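The hazard described above can be sketched with a small, hypothetical example (the file names and the filter bug are invented for illustration): a faulty collection expression produces an empty list, and under the default behaviour pytest silently skips the parametrized test rather than failing.

```python
import pytest

# Hypothetical illustration: the filter is wrong (.json instead of .csv),
# so parametrize receives an empty list. By default pytest then marks the
# test as skipped, and the mistake goes unnoticed.
datasets = [name for name in ["train.csv", "test.csv"] if name.endswith(".json")]

@pytest.mark.parametrize("dataset", datasets)
def test_load(dataset):
    assert dataset  # never executes when `datasets` is empty
```

Running pytest on this file reports the test as skipped, so a CI run stays green even though no dataset was ever exercised.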

@coveralls

coveralls commented Sep 12, 2018

Coverage Status

Coverage increased (+0.03%) to 93.917% when pulling 913c07e on sambarluc:raise_on_empty_parameterset into bceaede on pytest-dev:features.

@sambarluc
Contributor Author

Not exactly sure of what the standard workflow is here. Should I squash the commits? If so, when? Or is that done by the one who decides to merge?

@RonnyPfannschmidt
Member

If you like, please squash until the sequence is to your liking. Personally, I prefer to merge full, expressive history lines after doing a bit of manual cleanup.

@sambarluc force-pushed the raise_on_empty_parameterset branch from d7a6a5a to f13938e (September 12, 2018 15:14)
@sambarluc
Contributor Author

Ok, done. Unless other issues pop up, it's in your hands!

@codecov

codecov bot commented Sep 12, 2018

Codecov Report

Merging #3970 into features will decrease coverage by 0.21%.
The diff coverage is 81.25%.

Impacted file tree graph

@@             Coverage Diff              @@
##           features    #3970      +/-   ##
============================================
- Coverage     94.48%   94.26%   -0.22%     
============================================
  Files           107      107              
  Lines         23651    23661      +10     
  Branches       2349     2347       -2     
============================================
- Hits          22347    22305      -42     
- Misses          993     1048      +55     
+ Partials        311      308       -3
Flag Coverage Δ
#doctesting 28.51% <6.25%> (+0.05%) ⬆️
#linux 94.16% <81.25%> (-0.2%) ⬇️
#nobyte 0% <0%> (ø) ⬆️
#numpy 28.14% <6.25%> (-0.01%) ⬇️
#pexpect 0% <0%> (ø) ⬆️
#pluggymaster 92.06% <81.25%> (-1.54%) ⬇️
#py27 92.09% <81.25%> (-0.51%) ⬇️
#py34 92.17% <81.25%> (+0.03%) ⬆️
#py35 ?
#py36 92.61% <81.25%> (-0.11%) ⬇️
#py37 ?
#trial 31.04% <6.25%> (-0.18%) ⬇️
#windows 92.83% <81.25%> (-0.99%) ⬇️
#xdist 18.41% <6.25%> (-0.06%) ⬇️
Impacted Files Coverage Δ
src/_pytest/mark/__init__.py 97.29% <100%> (ø) ⬆️
src/_pytest/mark/structures.py 94.06% <75%> (+0.11%) ⬆️
testing/test_mark.py 96.23% <81.81%> (-0.36%) ⬇️
src/_pytest/debugging.py 55.81% <0%> (-11.63%) ⬇️
testing/test_pdb.py 44.01% <0%> (-9.48%) ⬇️
src/_pytest/doctest.py 94.37% <0%> (-2.41%) ⬇️
src/_pytest/capture.py 88.55% <0%> (-1.38%) ⬇️
src/_pytest/assertion/rewrite.py 95.52% <0%> (-0.17%) ⬇️
testing/code/test_source.py 96.79% <0%> (-0.01%) ⬇️
testing/code/test_code.py 95.34% <0%> (+1.36%) ⬆️
... and 1 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update d12f46c...f13938e.

@RonnyPfannschmidt
Member

Great job! Thanks for the patience and for nicely implementing the proposal.

@nicoddemus left a comment (Member)

Thanks @sambarluc for the great implementation.

I left some comments which I think we should discuss a bit. 👍

@@ -976,6 +976,7 @@ passed multiple times. The expected format is ``name=value``. For example::

* ``skip`` skips tests with an empty parameterset (default)
* ``xfail`` marks tests with an empty parameterset as xfail(run=False)
* ``fail_at_parametrize`` raises an exception if parametrize collects an empty parameter set
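For reference, a pytest.ini fragment selecting the new behaviour might look as follows. This is a sketch: the option name ``empty_parameter_set_mark`` is assumed from pytest's documentation for this setting, since the diff above shows only the allowed values; the value itself comes from the diff.

```ini
# pytest.ini -- sketch; option name empty_parameter_set_mark is assumed,
# the value fail_at_parametrize is the one added in this PR's diff
[pytest]
empty_parameter_set_mark = fail_at_parametrize
```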
Member

I'm wondering, should we change the default for 4.0 to this option?

Contributor Author

If you ask me, I would definitely pick fail_at_parametrize as the default; that is by far the most obvious behaviour we expect in all our tests. However, I don't have a broader overview of how pytest is used.

Member

Yes, I agree, it should error out by default.

Hmmm, one way to implement this transition with minimal disturbance to our users: if we find an empty parameter set, we issue a warning saying that the default will change in a few versions, and users who like the current default can set it in their pytest.ini files. What do you think @RonnyPfannschmidt?

But sorry, just realized this is not related to this PR and I'm hijacking this thread. 😁

Member

I created a follow-up issue.

@sambarluc
Contributor Author

Not sure how badly GitHub deals with changes in history, so I'll leave the squash! and fixup! commits until we converge.

@nicoddemus
Member

Thanks @sambarluc for the contribution, much appreciated! 👍

@nicoddemus
Member

Feel free to rebase/squash your commits now. 👍

@sambarluc
Contributor Author

sambarluc commented Sep 14, 2018 via email

@RonnyPfannschmidt
Member

Nice idea. In that case I would like to propose the name fail_instead_at_collect, and I would like to get the input of @nicoddemus as well.

@sambarluc
Contributor Author

My 5 cents: I would not go for fail_instead_at_collect, as it implies that the other options do something else, which is true now but maybe not in the future.

@RonnyPfannschmidt
Member

Hmm, that brings me to the point that that particular option wasn't intended to manage this to begin with. Let's go with the value you proposed and open a follow-up ticket to think it through a bit later.

@sambarluc
Contributor Author

I'll wait for @nicoddemus before squashing, I guess it's not particularly urgent.

@sambarluc force-pushed the raise_on_empty_parameterset branch from a2bed03 to b84e5c8 (September 14, 2018 12:11)
@nicoddemus
Member

nicoddemus commented Sep 14, 2018

Hi folks,

I think fail_instead_at_collect and fail_at_collect mean the same thing, actually. Do you see a difference in semantics, @RonnyPfannschmidt? Saying "fail_instead" implies something else is possible to do at collection; what is the "instead of" you are thinking of?

> I'll wait for @nicoddemus before squashing, I guess it's not particularly urgent.

You are right, this will go into 3.9, which is weeks away. 😁

Optionally raise an exception when parametrize collects no arguments.
Provide the name of the test causing the failure in the exception
message.

See: pytest-dev#3849
@sambarluc force-pushed the raise_on_empty_parameterset branch from b84e5c8 to 913c07e (September 15, 2018 07:18)
@sambarluc
Contributor Author

Not sure what's wrong with AppVeyor. How can I retrigger it?

@RonnyPfannschmidt
Member

It's a Codecov failure. @blueyed, what do we do about the regular failures of that service?

@sambarluc
Contributor Author

('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

Doesn't seem to be related to the code itself.

@RonnyPfannschmidt
Member

Exactly. As far as I'm concerned, the tests passed and Codecov failed, and that shouldn't be reflected in the build status.

@nicoddemus feel free to merge
@sambarluc thanks for the continued work and setting it all up nicely over time 👍

@sambarluc
Contributor Author

Thank you all for the great tool you provide with pytest!


5 participants