Skip report refers to skipping plugin when using skipif marker #114

Closed
pytestbot opened this issue Jan 26, 2012 · 6 comments
Labels
type: enhancement (new feature or API change, should be merged into features branch)

Comments


Originally reported by: Floris Bruynooghe (BitBucket: flub, GitHub: flub)


When using pytest.mark.skipif and asking for skip reports (-rs), the report refers to the skipping plugin, e.g.:

import pytest

@pytest.mark.skipif('True', reason='reason1')
def test():
    pass

def test2():
    pytest.skip('reason2')
$ python /home/flub/Projects/py/pytest/pytest.py -rs test.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.2 -- pytest-2.2.2.dev6
collected 2 items 

test.py ss
=========================== short test summary info ============================
SKIP [1] /home/flub/Projects/py/pytest/_pytest/skipping.py:118: reason1
SKIP [1] /tmp/sandbox/test.py:10: reason2

========================== 2 skipped in 0.01 seconds ===========================

The first skip description is not very useful; the second is much nicer.
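
For context, the first location simply reflects where Skipped is raised: the skipif expression is evaluated inside the plugin's own setup hook. Roughly (paraphrased from pytest 2.2's _pytest/skipping.py, not an exact copy), the plugin does:

def pytest_runtest_setup(item):
    evalskip = MarkEvaluator(item, 'skipif')
    if evalskip.istrue():
        pytest.skip(evalskip.getexplanation())  # <- the line -rs reports

so the exception's location ends up being skipping.py rather than the test module.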



Original comment by Ronny Pfannschmidt (BitBucket: RonnyPfannschmidt, GitHub: RonnyPfannschmidt):


Would it be reasonable to have output like this:
$ python /home/flub/Projects/py/pytest/pytest.py -rs test.py
=================== test session starts ====================
platform linux2 -- Python 2.7.2 -- pytest-2.2.2.dev6
collected 2 items

test.py ss
================= short test summary info ==================
SKIP [1] : reason1
SKIP [1] /tmp/sandbox/test.py:10: reason2

================ 2 skipped in 0.01 seconds =================

I'm not sure how best to name the location when the skip happens inside a plugin, though.


Original comment by Floris Bruynooghe (BitBucket: flub, GitHub: flub):


I don't think I've explained my use case very well. I stumbled upon this while looking at the output of a large test run on a Jenkins server. There it was noticeable that the tests skipped by pytest.skip() pointed me directly to the correct module, while for the skipif marker I had to grep the entire code base for the reason message in order to find where the test was being skipped. Yes, this was aggravated by a bad skip reason of "temporarily disabled", but I still think the report would be more helpful if it pointed me to the right place.

So, in summary, I still think it would be useful if the skipping summary pointed to the skipped tests rather than to the skipping plugin. Just changing the text doesn't really help my use case.


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


There should be a report directly pointing to the test function, I guess. If no reason was given, the skipif eval expression should be shown, since it is then the reason for the skip.
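
One way to get there might be to rewrite the skip report's location to the item's own. A minimal sketch of that idea (written against today's hookwrapper API, which didn't exist in 2012, and assuming it lives in a conftest.py):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # A skip's longrepr is a (path, lineno, message) tuple; repoint it
    # at the test item instead of the frame that raised Skipped.
    if report.skipped and isinstance(report.longrepr, tuple):
        _, _, message = report.longrepr
        path, lineno, _ = item.location  # lineno here is 0-based
        report.longrepr = (path, lineno + 1, message)

This keeps the reason text and only swaps in the test's own path and line number.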


Original comment by Ionel Cristian Mărieș (BitBucket: ionelmc, GitHub: ionelmc):


Issue #760 was marked as a duplicate of this issue.


Original comment by Ionel Cristian Mărieș (BitBucket: ionelmc, GitHub: ionelmc):


Can we make this more important?

I'd try to fix this, but I'm a bit lost about how best to solve it. Any suggestions?


Original comment by Ionel Cristian Mărieș (BitBucket: ionelmc, GitHub: ionelmc):


Meanwhile, here's a half-baked attempt to fix it with a custom hook:

import pytest
import six
from types import CodeType


@pytest.mark.tryfirst
def pytest_runtest_setup(item):
    from _pytest.runner import Skipped
    from _pytest.skipping import MarkEvaluator, check_xfail_no_run

    evalskip = MarkEvaluator(item, 'skipif')
    if evalskip.istrue():
        s_code = item.obj.__code__  # __code__ works on Python 2.6+ and 3
        f_code = pytest_runtest_setup.__code__
        Skipped  # lint-happy: Skipped is referenced by the compiled source
        # Pad the source with newlines so the raise statement lands on
        # the test function's line in the test's own file.
        code = compile('\n' * s_code.co_firstlineno
                       + 'raise Skipped(msg=evalskip.getexplanation())',
                       s_code.co_filename, 'exec')
        # Rebuild the code object so its name and filename make the
        # traceback point at the test instead of this hook; Python 3's
        # CodeType constructor takes an extra kwonlyargcount argument.
        if six.PY3:
            code = CodeType(
                0, 0,
                f_code.co_nlocals, f_code.co_stacksize, f_code.co_flags,
                code.co_code, code.co_consts, code.co_names, code.co_varnames,
                s_code.co_filename, s_code.co_name,
                code.co_firstlineno, b"",
                (), ()
            )
        else:
            code = CodeType(
                0,
                f_code.co_nlocals, f_code.co_stacksize, f_code.co_flags,
                code.co_code, code.co_consts, code.co_names, code.co_varnames,
                s_code.co_filename.encode(), s_code.co_name.encode(),
                code.co_firstlineno, b"",
                (), ()
            )
        exec(code, locals())

    item._evalxfail = MarkEvaluator(item, 'xfail')
    check_xfail_no_run(item)
(it's scary that) it works!
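
If I read the trick right, the synthetic code object borrows the test function's co_filename and first line number, so the Skipped raised by the exec'd code appears to originate in the test module itself, and the -rs summary line follows it there.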
