
5 test failures building pylint 2.6.0 on Python 3.9.1. #4068


Closed
Apteryks opened this issue Feb 4, 2021 · 1 comment
Labels
Duplicate 🐫 Duplicate of an already existing issue

Comments


Apteryks commented Feb 4, 2021

Steps to reproduce

This occurs when attempting to build pylint 2.6.0 as packaged in GNU Guix.

The versions of pylint's direct dependencies are:

Using Python 3.9.1.
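
For completeness, a minimal sketch to print the interpreter and key dependency versions from the build environment (astroid 2.4.2 and pytest 6.2.1 appear in the session log below; picking these two modules as the relevant direct dependencies is an assumption):

# Print the interpreter and dependency versions relevant to this report.
import sys
import astroid
import pytest

print("python :", sys.version.split()[0])   # 3.9.1 in this build
print("astroid:", astroid.__version__)      # 2.4.2 per the log below
print("pytest :", pytest.__version__)       # 6.2.1 per the log below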

Current behavior

The test suite (run via python setup.py test) fails with:

============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
benchmark: 3.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/guix-build-python-pylint-2.6.0.drv-0/source, configfile: pytest.ini
plugins: benchmark-3.2.3
collected 1409 items / 203 deselected / 1206 selected

tests/test_config.py ....                                                [  0%]
tests/test_func.py ..................................................... [  4%]
..                                                                       [  4%]
tests/test_functional.py ...s..........................ss.F............. [  8%]
...s........s...............................................s........... [ 14%]
..s.....F.....s....s..s.................................F..............s [ 20%]
........................................F.s.......s..................... [ 26%]
................s............................s..................ss..ss.s [ 32%]
.s................s................................s.................... [ 38%]
..                                                                       [ 38%]
tests/test_import_graph.py ..                                            [ 38%]
tests/test_pragma_parser.py ............                                 [ 39%]
tests/test_pylint_runners.py ....                                        [ 40%]
tests/test_regr.py ..................                                    [ 41%]
tests/test_self.py ..................................................    [ 45%]
tests/unittest_config.py ......                                          [ 46%]
tests/unittest_pyreverse_diadefs.py .........                            [ 47%]
tests/unittest_pyreverse_inspector.py ........                           [ 47%]
tests/unittest_pyreverse_writer.py ......                                [ 48%]
tests/unittest_reporters_json.py .                                       [ 48%]
tests/unittest_reporting.py ....                                         [ 48%]
tests/benchmark/test_baseline_benchmarks.py ..........F                  [ 49%]
tests/checkers/unittest_base.py .............s..........                 [ 51%]
tests/checkers/unittest_classes.py .....                                 [ 52%]
tests/checkers/unittest_exceptions.py ..                                 [ 52%]
tests/checkers/unittest_format.py ..............                         [ 53%]
tests/checkers/unittest_imports.py ......                                [ 53%]
tests/checkers/unittest_logging.py ........                              [ 54%]
tests/checkers/unittest_misc.py ..........                               [ 55%]
tests/checkers/unittest_python3.py ..................................... [ 58%]
....................................................                     [ 62%]
tests/checkers/unittest_similar.py .........                             [ 63%]
tests/checkers/unittest_spelling.py sssssssssssssssss                    [ 64%]
tests/checkers/unittest_stdlib.py ......                                 [ 65%]
tests/checkers/unittest_strings.py ...                                   [ 65%]
tests/checkers/unittest_typecheck.py ........ss.........                 [ 67%]
tests/checkers/unittest_utils.py ..................................      [ 70%]
tests/checkers/unittest_variables.py ....................                [ 71%]
tests/extensions/test_bad_builtin.py .                                   [ 71%]
tests/extensions/test_broad_try_clause.py .                              [ 71%]
tests/extensions/test_check_docs.py .................................... [ 74%]
........................................................................ [ 80%]
......                                                                   [ 81%]
tests/extensions/test_check_docs_utils.py ..............                 [ 82%]
tests/extensions/test_check_mccabe.py ..                                 [ 82%]
tests/extensions/test_check_raise_docs.py .............................. [ 85%]
................                                                         [ 86%]
tests/extensions/test_check_return_docs.py ............................. [ 88%]
...........                                                              [ 89%]
tests/extensions/test_check_yields_docs.py ............................  [ 92%]
tests/extensions/test_comparetozero.py .                                 [ 92%]
tests/extensions/test_docstyle.py .                                      [ 92%]
tests/extensions/test_elseif_used.py .                                   [ 92%]
tests/extensions/test_emptystring.py .                                   [ 92%]
tests/extensions/test_overlapping_exceptions.py ..                       [ 92%]
tests/extensions/test_redefined.py .                                     [ 92%]
tests/functional/r/redundant_unittest_assert.py ss                       [ 92%]
tests/lint/unittest_lint.py ............................................ [ 96%]
.....                                                                    [ 96%]
tests/message/unittest_message.py .                                      [ 97%]
tests/message/unittest_message_definition.py ......                      [ 97%]
tests/message/unittest_message_definition_store.py .................     [ 98%]
tests/message/unittest_message_id_store.py .......                       [ 99%]
tests/utils/unittest_ast_walker.py ..                                    [ 99%]
tests/utils/unittest_utils.py ....                                       [100%]

=================================== FAILURES ===================================
____________________ test_functional[unused_typing_imports] ____________________

test_file = FunctionalTest:unused_typing_imports

    @pytest.mark.parametrize("test_file", TESTS, ids=TESTS_NAMES)
    def test_functional(test_file):
        LintTest = (
            LintModuleOutputUpdate(test_file)
            if UPDATE
            else testutils.LintModuleTest(test_file)
        )
        LintTest.setUp()
>       LintTest._runTest()

tests/test_functional.py:102: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pylint.testutils.LintModuleTest object at 0x7ffff0216610>

    def _runTest(self):
        modules_to_check = [self._test_file.source]
        self._linter.check(modules_to_check)
        expected_messages, expected_text = self._get_expected()
        received_messages, received_text = self._get_received()
    
        if expected_messages != received_messages:
            msg = ['Wrong results for file "%s":' % (self._test_file.base)]
            missing, unexpected = multiset_difference(
                expected_messages, received_messages
            )
            if missing:
                msg.append("\nExpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(missing))
            if unexpected:
                msg.append("\nUnexpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(unexpected))
>           pytest.fail("\n".join(msg))
E           Failed: Wrong results for file "unused_typing_imports":
E           
E           Unexpected in testdata:
E             27: unsubscriptable-object
E             31: unsubscriptable-object

pylint/testutils.py:610: Failed
______________ test_functional[star_needs_assignment_target_py35] ______________

test_file = FunctionalTest:star_needs_assignment_target_py35

    @pytest.mark.parametrize("test_file", TESTS, ids=TESTS_NAMES)
    def test_functional(test_file):
        LintTest = (
            LintModuleOutputUpdate(test_file)
            if UPDATE
            else testutils.LintModuleTest(test_file)
        )
        LintTest.setUp()
>       LintTest._runTest()

tests/test_functional.py:102: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pylint.testutils.LintModuleTest object at 0x7fffec5c8c40>

    def _runTest(self):
        modules_to_check = [self._test_file.source]
        self._linter.check(modules_to_check)
        expected_messages, expected_text = self._get_expected()
        received_messages, received_text = self._get_received()
    
        if expected_messages != received_messages:
            msg = ['Wrong results for file "%s":' % (self._test_file.base)]
            missing, unexpected = multiset_difference(
                expected_messages, received_messages
            )
            if missing:
                msg.append("\nExpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(missing))
            if unexpected:
                msg.append("\nUnexpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(unexpected))
>           pytest.fail("\n".join(msg))
E           Failed: Wrong results for file "star_needs_assignment_target_py35":
E           
E           Expected in testdata:
E             15: star-needs-assignment-target
E           
E           Unexpected in testdata:
E             15: syntax-error

pylint/testutils.py:610: Failed
_____________ test_functional[regression_property_no_member_2641] ______________

test_file = FunctionalTest:regression_property_no_member_2641

    @pytest.mark.parametrize("test_file", TESTS, ids=TESTS_NAMES)
    def test_functional(test_file):
        LintTest = (
            LintModuleOutputUpdate(test_file)
            if UPDATE
            else testutils.LintModuleTest(test_file)
        )
        LintTest.setUp()
>       LintTest._runTest()

tests/test_functional.py:102: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pylint.testutils.LintModuleTest object at 0x7fffeb5c69d0>

    def _runTest(self):
        modules_to_check = [self._test_file.source]
        self._linter.check(modules_to_check)
        expected_messages, expected_text = self._get_expected()
        received_messages, received_text = self._get_received()
    
        if expected_messages != received_messages:
            msg = ['Wrong results for file "%s":' % (self._test_file.base)]
            missing, unexpected = multiset_difference(
                expected_messages, received_messages
            )
            if missing:
                msg.append("\nExpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(missing))
            if unexpected:
                msg.append("\nUnexpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(unexpected))
>           pytest.fail("\n".join(msg))
E           Failed: Wrong results for file "regression_property_no_member_2641":
E           
E           Unexpected in testdata:
E             28: no-member

pylint/testutils.py:610: Failed
______________________ test_functional[missing_kwoa_py3] _______________________

test_file = FunctionalTest:missing_kwoa_py3

    @pytest.mark.parametrize("test_file", TESTS, ids=TESTS_NAMES)
    def test_functional(test_file):
        LintTest = (
            LintModuleOutputUpdate(test_file)
            if UPDATE
            else testutils.LintModuleTest(test_file)
        )
        LintTest.setUp()
>       LintTest._runTest()

tests/test_functional.py:102: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pylint.testutils.LintModuleTest object at 0x7fffeaf40670>

    def _runTest(self):
        modules_to_check = [self._test_file.source]
        self._linter.check(modules_to_check)
        expected_messages, expected_text = self._get_expected()
        received_messages, received_text = self._get_received()
    
        if expected_messages != received_messages:
            msg = ['Wrong results for file "%s":' % (self._test_file.base)]
            missing, unexpected = multiset_difference(
                expected_messages, received_messages
            )
            if missing:
                msg.append("\nExpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(missing))
            if unexpected:
                msg.append("\nUnexpected in testdata:")
                msg.extend(" %3d: %s" % msg for msg in sorted(unexpected))
>           pytest.fail("\n".join(msg))
E           Failed: Wrong results for file "missing_kwoa_py3":
E           
E           Unexpected in testdata:
E             59: unsubscriptable-object
E             60: unsubscriptable-object

pylint/testutils.py:610: Failed
_ TestEstablishBaselineBenchmarks.test_baseline_benchmark_j1_all_checks_lots_of_files _

self = <test_baseline_benchmarks.TestEstablishBaselineBenchmarks object at 0x7fffe61e0820>
benchmark = <pytest_benchmark.fixture.BenchmarkFixture object at 0x7fffe61e08e0>

    def test_baseline_benchmark_j1_all_checks_lots_of_files(self, benchmark):
        """ Runs lots of files, with -j1, against all plug-ins
    
        ... that's the intent at least.
        """
        if benchmark.disabled:
            benchmark(print, "skipping, only benchmark large file counts")
            return  # _only_ run this test is profiling
        linter = PyLinter()
    
        # Register all checkers/extensions and enable them
>       register_plugins(
            linter, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
        )

tests/benchmark/test_baseline_benchmarks.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
pylint/utils/utils.py:254: in register_plugins
    module = modutils.load_module_from_file(
/gnu/store/70cbp26h7vr47cxa7ppx39agh90x5jyx-python-astroid-2.4.2-1.5f67396/lib/python3.9/site-packages/astroid/modutils.py:240: in load_module_from_file
    return load_module_from_modpath(modpath)
/gnu/store/70cbp26h7vr47cxa7ppx39agh90x5jyx-python-astroid-2.4.2-1.5f67396/lib/python3.9/site-packages/astroid/modutils.py:225: in load_module_from_modpath
    return load_module_from_name(".".join(parts))
/gnu/store/70cbp26h7vr47cxa7ppx39agh90x5jyx-python-astroid-2.4.2-1.5f67396/lib/python3.9/site-packages/astroid/modutils.py:210: in load_module_from_name
    return importlib.import_module(dotted_name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

name = '.github', package = None

    def import_module(name, package=None):
        """Import a module.
    
        The 'package' argument is required when performing a relative import. It
        specifies the package to use as the anchor point from which to resolve the
        relative import to an absolute import.
    
        """
        level = 0
        if name.startswith('.'):
            if not package:
                msg = ("the 'package' argument is required to perform a relative "
                       "import for {!r}")
>               raise TypeError(msg.format(name))
E               TypeError: the 'package' argument is required to perform a relative import for '.github'

/gnu/store/ws8knmvr2zij49kdj5cxvyxbf4n0qxg4-python-3.9.1/lib/python3.9/importlib/__init__.py:122: TypeError
=============================== warnings summary ===============================
tests/test_func.py::test_functionality[func_excess_escapes.py]
  <unknown>:7: DeprecationWarning: invalid escape sequence \[

tests/test_func.py::test_functionality[func_excess_escapes.py]
  <unknown>:8: DeprecationWarning: invalid escape sequence \/

tests/test_func.py::test_functionality[func_excess_escapes.py]
  <unknown>:9: DeprecationWarning: invalid escape sequence \`

tests/test_func.py::test_functionality[func_excess_escapes.py]
  <unknown>:15: DeprecationWarning: invalid escape sequence \o

tests/test_func.py::test_functionality[func_excess_escapes.py]
  <unknown>:17: DeprecationWarning: invalid escape sequence \8

tests/test_func.py::test_functionality[func_excess_escapes.py]
  <unknown>:27: DeprecationWarning: invalid escape sequence \P

tests/test_functional.py::test_functional[future_unicode_literals]
tests/test_functional.py::test_functional[anomalous_unicode_escape_py3]
  <unknown>:5: DeprecationWarning: invalid escape sequence \u

tests/test_functional.py::test_functional[anomalous_unicode_escape_py3]
  <unknown>:6: DeprecationWarning: invalid escape sequence \U

tests/test_functional.py::test_functional[anomalous_unicode_escape_py3]
  <unknown>:8: DeprecationWarning: invalid escape sequence \N

tests/benchmark/test_baseline_benchmarks.py::TestEstablishBaselineBenchmarks::test_baseline_benchmark_j1_all_checks_lots_of_files
  tests/benchmark/test_baseline_benchmarks.py:300: PytestBenchmarkWarning: Benchmark fixture was not used at all in this test!
    def test_baseline_benchmark_j1_all_checks_lots_of_files(self, benchmark):

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------------------------------------------------------------------------------------------------------------- benchmark 'baseline': 10 tests ---------------------------------------------------------------------------------------------------------------
Name (time in us)                                                 Min                       Max                      Mean                 StdDev                    Median                    IQR            Outliers         OPS            Rounds  Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_baseline_benchmark_j1                                   376.7479 (1.0)            421.7289 (1.0)            381.1066 (1.0)           3.8089 (1.0)            379.8679 (1.0)           2.2082 (1.0)       229;243  2,623.9381 (1.0)        1953           1
test_baseline_benchmark_check_parallel_j10                32,866.8505 (87.24)       39,733.1063 (94.21)       35,813.9497 (93.97)     1,429.1637 (375.22)      35,608.2907 (93.74)     1,253.8284 (567.81)        5;4     27.9221 (0.01)         28           1
test_baseline_benchmark_j10                               34,798.8941 (92.37)       44,594.8113 (105.74)      40,598.5840 (106.53)    2,411.0821 (633.01)      40,755.4237 (107.29)    3,708.4706 (>1000.0)       8;0     24.6314 (0.01)         24           1
test_baseline_benchmark_j1_all_checks_single_file         37,932.7293 (100.68)      40,804.0993 (96.75)       38,383.2561 (100.72)      584.9630 (153.58)      38,133.8987 (100.39)      593.3084 (268.69)        1;1     26.0530 (0.01)         26           1
test_baseline_lots_of_files_j1                           170,926.6081 (453.69)     173,505.9284 (411.42)     171,598.3711 (450.26)      966.7632 (253.82)     171,218.1401 (450.73)      598.7585 (271.16)        1;1      5.8276 (0.00)          6           1
test_baseline_lots_of_files_j1_empty_checker             174,033.8970 (461.94)     177,494.6414 (420.87)     174,873.9521 (458.86)    1,299.6937 (341.22)     174,465.6693 (459.28)      377.9083 (171.14)        1;1      5.7184 (0.00)          6           1
test_baseline_lots_of_files_j10                          198,010.7352 (525.58)     243,943.3336 (578.44)     213,631.3409 (560.56)   17,019.4079 (>1000.0)    208,615.4893 (549.18)   18,576.9368 (>1000.0)       1;0      4.6810 (0.00)          6           1
test_baseline_lots_of_files_j10_empty_checker            203,430.2969 (539.96)     221,491.7149 (525.20)     211,569.6998 (555.15)    6,991.1732 (>1000.0)    210,843.0611 (555.04)   11,910.2094 (>1000.0)       2;0      4.7266 (0.00)          6           1
test_baseline_benchmark_j10_single_working_checker       539,130.8330 (>1000.0)    541,101.7016 (>1000.0)    540,112.9000 (>1000.0)     792.4885 (208.06)     539,850.0599 (>1000.0)   1,218.3888 (551.77)        2;0      1.8515 (0.00)          5           1
test_baseline_benchmark_j1_single_working_checker      5,014,218.0622 (>1000.0)  5,014,878.2153 (>1000.0)  5,014,628.7374 (>1000.0)     271.6341 (71.32)    5,014,589.9523 (>1000.0)     384.3312 (174.05)        1;0      0.1994 (0.00)          5           1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
=========================== short test summary info ============================
FAILED tests/test_functional.py::test_functional[unused_typing_imports] - Fai...
FAILED tests/test_functional.py::test_functional[star_needs_assignment_target_py35]
FAILED tests/test_functional.py::test_functional[regression_property_no_member_2641]
FAILED tests/test_functional.py::test_functional[missing_kwoa_py3] - Failed: ...
FAILED tests/benchmark/test_baseline_benchmarks.py::TestEstablishBaselineBenchmarks::test_baseline_benchmark_j1_all_checks_lots_of_files
= 5 failed, 1156 passed, 45 skipped, 203 deselected, 11 warnings in 95.68s (0:01:35) =
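
To iterate on just these failures without running the full suite, a sketch that re-runs only the four failing functional tests (test IDs taken from the summary above; pytest.main is used so the example stays in Python, and it must be run from the source root):

# Re-run only the failing functional tests named in the summary above.
# Equivalent to: pytest tests/test_functional.py -k "<expression>"
import pytest

FAILING = [
    "unused_typing_imports",
    "star_needs_assignment_target_py35",
    "regression_property_no_member_2641",
    "missing_kwoa_py3",
]

raise SystemExit(pytest.main([
    "tests/test_functional.py",
    "-k", " or ".join(FAILING),
]))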

Expected behavior

All tests should pass.

pylint --version output

Built from the 2.6.0 tag.
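
A hedged way to confirm that version string from the checkout itself (the 2.6.x series keeps it in pylint.__pkginfo__; that location is an assumption about this release line, not verified against every tag):

# Print the version pylint reports for itself.
from pylint.__pkginfo__ import version

print(version)  # expected: 2.6.0 for a build from the 2.6.0 tag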

Pierre-Sassoulas (Member) commented
Closing as duplicate of #3959

Pierre-Sassoulas added the Duplicate 🐫 Duplicate of an already existing issue label Feb 6, 2021