
Create --querycount parameter #412


Closed
wants to merge 8 commits into from

Conversation

renanivo
Contributor

@renanivo renanivo commented Oct 29, 2016

It displays a list of the top N tests with the most queries.

I was inspired by the --durations parameter of pytest and tried to make this work as similarly as I could.

Note that I used the report sections to store the executed queries. Even though I'm not sure it is the best way to do it, it had a nice side effect: the executed queries are displayed when a test fails.

TODO:

  • capture all connections; if there are multiple, this should be reported separately (but for the single nodeid, similar to setup/call/teardown (see 4. below))
  • revisit tests to check that they really always use the test DB
  • counts appear to be wrong: I always get the same for "teardown" as for "call" (fixed in diff/patch)
  • combine counts per nodeid, but only list counts for setup, call, and teardown in parentheses with it.
  • currently you will get the captured queries in case of failures (and with -s, I assume?). There could also be a django_db_log_queries mark and fixture (context manager) that would do the same, but could then be used on a single test for debugging. This should be refactored to provide that easily.
  • rebase on master and refactor with _assert_num_queries

IDEAS:

  • default to 10 for --querycount, i.e. do not require an argument?
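For reference, the "top N tests with most queries" report boils down to sorting per-phase query counts. A minimal, self-contained sketch of that ordering — the (count, when, nodeid) tuples are made-up illustration data, not the plugin's actual internals:

```python
# Made-up per-phase query counts; the real plugin collects these
# from pytest report sections after the test session.
counts = [
    (1, "call", "tpkg/test_the_test.py::test_one_query"),
    (2, "call", "tpkg/test_the_test.py::test_two_queries"),
    (0, "call", "tpkg/test_the_test.py::test_zero_queries"),
    (1, "setup", "tpkg/test_the_test.py::test_one_query"),
]

# Sort by query count (descending), then by phase and nodeid for stable output.
counts.sort(key=lambda entry: (-entry[0], entry[1], entry[2]))

querycount = 3  # as with --querycount=3; 0 would mean "show all"
top = counts[:querycount] if querycount else counts
for count, when, nodeid in top:
    print("{: <4} {: <8} {}".format(count, when, nodeid))
```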


result = django_testdir.runpytest_subprocess('--querycount=5')
result.stdout.fnmatch_lines([
'*== top 5 tests with most queries ==*'
Contributor

Test that "test_zero_queries" is (not) in the output here!?

Contributor Author

I intended just to test the headers here. Do you think I should add tests for the report lines here, or is the updated version of the other test case enough?

Contributor

The update is fine.

)
assert 'test_two_queries' in lines[0]
assert 'test_one_query' in lines[1]
assert 'test_failed' in lines[2]
Contributor

Use 4 here?
Does test_zero_queries show up then?

Contributor Author

It just sorts the tests by the number of queries, so test_zero_queries will show up with --querycount=4. Do you think I should remove tests without queries from the report?

Contributor

Not sure. I was not sure if tests with 0 queries would be included.
So you could increase it here, to test this behavior.
Including them can be useful if, for example, you want to grep the report for tests without queries.

assert True
''')

result = django_testdir.runpytest_subprocess('--querycount=3')
Contributor

Use 4 here?
Does test_zero_queries show up then?

@blueyed
Contributor

blueyed commented Oct 29, 2016

Very nice feature, thanks!

@renanivo
Contributor Author

renanivo commented Nov 1, 2016

Is anything still missing?



@blueyed
Contributor

blueyed commented Nov 4, 2016

Is anything still missing?

No, but I do not want to merge things without another review, and @pelme seems to be busy with other things currently.

@renanivo
Contributor Author

renanivo commented Nov 8, 2016

No problem, take your time.

@peterlauri
Contributor

peterlauri commented Nov 24, 2016

👍 tried it out, and I like it...

Does this also include queries made in fixtures? For example, if setting up data in a fixture requires plenty of DB queries, do those queries count towards the final count of the test?

@renanivo
Contributor Author

@peterlauri I tested with a fixture and its query did not show up in the report. I'll try to figure out a way to include it.

@peterlauri
Contributor

peterlauri commented Nov 28, 2016 via email

@renanivo
Contributor Author

You're right, it's better not to include it.

@blueyed
Contributor

blueyed commented Nov 28, 2016

I agree that not including them by default is good.
But when you want to use this to track down slow tests / tests with lots of queries, it would be good to have them included (with an extra option?).
Maybe this could be hooked into the same mechanism that pytest's --setup-show uses?

@peterlauri
Contributor

We most probably have plenty of use cases. Personally, I think test execution time is not as important as finding problems in the code itself. Counting fixture queries can be quite misleading. Maybe have a section for the fixtures themselves? But that would just complicate things. Maybe KISS for now, and just let the tests be the scope of this feature.

@peterlauri
Contributor

s/Counting/including/

@renanivo
Contributor Author

renanivo commented Dec 2, 2016

I'll investigate how --setup-show works but, as @peterlauri said, it could be implemented in another PR.

@renanivo
Contributor Author

renanivo commented Dec 2, 2016

... if you don't mind

@renanivo
Contributor Author

I just added "(# queries executed: N)" beside the fixture name when --querycount is used with --setup-show.

@renanivo
Contributor Author

renanivo commented Jan 9, 2017

@blueyed, @peterlauri Hey folks, could you take a look at the updates I made to this PR?

@renanivo
Contributor Author

I just updated this branch. Let me know if you are still interested in this PR.

@peterlauri
Contributor

@renanivo the feature looks good, but I'm not a maintainer of this repository, so I cannot merge it.

@renanivo
Contributor Author

@peterlauri Thank you for the feedback, anyway.

@blueyed
Contributor

blueyed commented Feb 17, 2017

Still LGTM, assigning to @pelme.

Should be squash-merged.

@blueyed
Contributor

blueyed commented Jul 26, 2018

@renanivo
Have you looked into implementing 3.? (timing)
Or should we keep it out of this PR for now?

@renanivo
Contributor Author

Taking it out of this PR would be great. I don't think I'll be able to do that in the short-term.

@blueyed
Contributor

blueyed commented Jul 26, 2018

Cool, I've rebased it for now.

Contributor

@blueyed blueyed left a comment


Just gave it a try, but noticed that there are "Failed: Database access not allowed" errors with tests that do not use the "db" marker etc.

 docs/changelog.rst      |  8 ++++++++
 docs/usage.rst          |  7 ++++---
 pytest_django/plugin.py | 43 ++++++++++++++++++++++---------------------
 tests/test_report.py    |  7 +++++++
 4 files changed, 41 insertions(+), 24 deletions(-)

diff --git i/docs/changelog.rst w/docs/changelog.rst
index e2422e6..ca1d0cb 100644
--- i/docs/changelog.rst
+++ w/docs/changelog.rst
@@ -1,6 +1,14 @@
 Changelog
 =========
 
+unreleased
+------------------
+
+Features
+^^^^^^^^
+
+* Added `--querycount` parameter to report N tests with most queries (#412).
+
 3.4.2 (2018-08-20)
 ------------------
 
diff --git i/docs/usage.rst w/docs/usage.rst
index dd37b5c..744fb8b 100644
--- i/docs/usage.rst
+++ w/docs/usage.rst
@@ -32,12 +32,13 @@ Fail tests that render templates which make use of invalid template variables.
 
 ``--querycount`` - show top N tests with most queries
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Show a list of top N tests which executed most queries. Use `--querycount=0`
-to display a list of all tests ordered by the number of queries executed.
+Show a list of top N tests which executed most queries.
+Use `--querycount=0` to display a list of all tests ordered by the number of
+queries executed.  `--no-querycount` can be used to disable it.
 
 Using it in conjunction with `--setup-show` will display the number of
 queries executed by each fixture (when the number of queries executed by the
-fixture is greater than zero). Use `--noquerycount` to force the disable of it.
+fixture is greater than zero).
 
 Running tests in parallel with pytest-xdist
 -------------------------------------------
diff --git i/pytest_django/plugin.py w/pytest_django/plugin.py
index b91e000..2ef9669 100644
--- i/pytest_django/plugin.py
+++ w/pytest_django/plugin.py
@@ -97,7 +97,7 @@ def pytest_addoption(parser):
                      default=None, metavar='N',
                      help='Show top N tests with most queries '
                           '(N=0 for all).')
-    group._addoption('--noquerycount', '--no-querycount',
+    group._addoption('--no-querycount', '--noquerycount',
                      action='store_const', dest='querycount',
                      const=None, default=None,
                      help='Disable --querycount, when both are used.')
@@ -367,40 +367,37 @@ def pytest_fixture_setup(fixturedef, request):
     from django.test.utils import CaptureQueriesContext
     from django.db import connection
 
-    _blocking_manager.unblock()
-
-    try:
-        with CaptureQueriesContext(connection) as context:
+    with _blocking_manager.unblock():
+        try:
+            with CaptureQueriesContext(connection) as context:
+                yield
+        except Exception:
             yield
-    except Exception:
-        yield
-    else:
-        querycount = len(context.captured_queries)
+        else:
+            querycount = len(context.captured_queries)
 
-        if querycount:
-            capman = config.pluginmanager.getplugin('capturemanager')
-            capman.suspend_global_capture()
+            if querycount:
+                capman = config.pluginmanager.getplugin('capturemanager')
+                capman.suspend_global_capture()
 
-            tw = config.get_terminal_writer()
-            tw.write(' (# of queries executed: {})'.format(querycount))
+                tw = config.get_terminal_writer()
+                tw.write(' (# of queries executed: {})'.format(querycount))
 
-            capman.resume_global_capture()
-    finally:
-        _blocking_manager.restore()
+                capman.resume_global_capture()
 
 
 @pytest.hookimpl(hookwrapper=True)
 def pytest_runtest_call(item):
-    count_parameter = item.config.option.querycount
-    if count_parameter is None:
+    if item.config.option.querycount is None:
         yield
         return
 
     from django.test.utils import CaptureQueriesContext
     from django.db import connection
 
-    with CaptureQueriesContext(connection) as context:
-        yield
+    with _blocking_manager.unblock():
+        with CaptureQueriesContext(connection) as context:
+            yield
 
     item.add_report_section('call', 'queries', context.captured_queries)
 
@@ -783,6 +780,10 @@ def block(self):
     def restore(self):
         self._dj_db_wrapper.ensure_connection = self._history.pop()
 
+    def is_blocking(self):
+        return (self._dj_db_wrapper.ensure_connection
+                is not self._real_ensure_connection)
+
 
 _blocking_manager = _DatabaseBlocker()
 
diff --git i/tests/test_report.py w/tests/test_report.py
index 22a1fd5..88a8a52 100644
--- i/tests/test_report.py
+++ w/tests/test_report.py
@@ -11,6 +11,7 @@ def test_zero_queries():
         result.stdout.fnmatch_lines([
             '*== top 5 tests with most queries ==*'
         ])
+        assert result.ret == 0
 
     def test_header_not_set_without_parameter(self, django_testdir):
         django_testdir.create_test_module('''
@@ -20,6 +21,7 @@ def test_zero_queries():
 
         result = django_testdir.runpytest_subprocess()
         assert 'tests with most queries' not in result.stdout.str()
+        assert result.ret == 0
 
     def test_disabled_when_noquerycount_is_also_used(self, django_testdir):
         django_testdir.create_test_module('''
@@ -31,6 +33,7 @@ def test_zero_queries():
             '--querycount=5 --noquerycount'
         )
         assert 'tests with most queries' not in result.stdout.str()
+        assert result.ret == 0
 
     def test_query_optimization_tips_for_the_current_version_of_django(
         self,
@@ -55,6 +58,7 @@ def test_zero_queries():
         )
 
         assert url in result.stdout.str()
+        assert result.ret == 0
 
     def test_querycount_report_lines(self, django_testdir):
         django_testdir.create_test_module('''
@@ -95,6 +99,7 @@ def test_zero_queries():
         assert 'test_one_query' in lines[1]
         assert 'test_failed' in lines[2]
         assert 'test_zero_queries' in lines[3]
+        assert result.ret == 1
 
     def test_report_all_lines_on_querycount_zero(self, django_testdir):
         django_testdir.create_test_module('''
@@ -123,6 +128,7 @@ def test_two_queries():
         )
         assert 'test_two_queries' in lines[0]
         assert 'test_one_query' in lines[1]
+        assert result.ret == 0
 
     def test_should_report_fixture_queries(self, django_testdir):
         django_testdir.create_test_module('''
@@ -145,3 +151,4 @@ def test_without_queries(one_query):
         )
 
         assert '(# of queries executed: 1)' in result.stdout.str()
+        assert result.ret == 0

I came up with the diff above for now, but it seems like pytest_runtest_call is used before the test DB is configured, and therefore it tries to use '/should_not_be_accessed' as the SQLite database name.

I also think it should capture all connections, i.e. iterate over django.db.connections.
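A rough, self-contained sketch of what capturing every connection could look like — plain sqlite3 connections stand in for django.db.connections, and a small trace-based context manager stands in for CaptureQueriesContext, so the names here are illustrative, not the actual Django API:

```python
import sqlite3
from contextlib import ExitStack, contextmanager

@contextmanager
def capture_queries(conn, captured):
    # sqlite3's trace callback is invoked once per executed SQL statement,
    # playing the role CaptureQueriesContext plays for a Django connection.
    conn.set_trace_callback(captured.append)
    try:
        yield captured
    finally:
        conn.set_trace_callback(None)

# Stand-ins for django.db.connections (alias -> connection).
connections = {alias: sqlite3.connect(":memory:") for alias in ("default", "replica")}

with ExitStack() as stack:
    captured = {
        alias: stack.enter_context(capture_queries(conn, []))
        for alias, conn in connections.items()
    }
    connections["default"].execute("SELECT 1")
    connections["default"].execute("SELECT 2")
    connections["replica"].execute("SELECT 3")

# Per-connection counts, which the report could then list per alias.
counts = {alias: len(queries) for alias, queries in captured.items()}
print(counts)
```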

@blueyed
Contributor

blueyed commented Sep 8, 2018

The following works:

 docs/changelog.rst      |  8 ++++++++
 docs/usage.rst          |  7 ++++---
 pytest_django/plugin.py | 41 ++++++++++++++++++++++-------------------
 tests/test_report.py    |  9 ++++++++-
 4 files changed, 42 insertions(+), 23 deletions(-)

diff --git i/docs/changelog.rst w/docs/changelog.rst
index e2422e6..ca1d0cb 100644
--- i/docs/changelog.rst
+++ w/docs/changelog.rst
@@ -1,6 +1,14 @@
 Changelog
 =========
 
+unreleased
+------------------
+
+Features
+^^^^^^^^
+
+* Added `--querycount` parameter to report N tests with most queries (#412).
+
 3.4.2 (2018-08-20)
 ------------------
 
diff --git i/docs/usage.rst w/docs/usage.rst
index dd37b5c..744fb8b 100644
--- i/docs/usage.rst
+++ w/docs/usage.rst
@@ -32,12 +32,13 @@ Fail tests that render templates which make use of invalid template variables.
 
 ``--querycount`` - show top N tests with most queries
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Show a list of top N tests which executed most queries. Use `--querycount=0`
-to display a list of all tests ordered by the number of queries executed.
+Show a list of top N tests which executed most queries.
+Use `--querycount=0` to display a list of all tests ordered by the number of
+queries executed.  `--no-querycount` can be used to disable it.
 
 Using it in conjunction with `--setup-show` will display the number of
 queries executed by each fixture (when the number of queries executed by the
-fixture is greater than zero). Use `--noquerycount` to force the disable of it.
+fixture is greater than zero).
 
 Running tests in parallel with pytest-xdist
 -------------------------------------------
diff --git i/pytest_django/plugin.py w/pytest_django/plugin.py
index b91e000..6a0f0f2 100644
--- i/pytest_django/plugin.py
+++ w/pytest_django/plugin.py
@@ -97,7 +97,7 @@ def pytest_addoption(parser):
                      default=None, metavar='N',
                      help='Show top N tests with most queries '
                           '(N=0 for all).')
-    group._addoption('--noquerycount', '--no-querycount',
+    group._addoption('--no-querycount', '--noquerycount',
                      action='store_const', dest='querycount',
                      const=None, default=None,
                      help='Disable --querycount, when both are used.')
@@ -367,32 +367,35 @@ def pytest_fixture_setup(fixturedef, request):
     from django.test.utils import CaptureQueriesContext
     from django.db import connection
 
-    _blocking_manager.unblock()
-
-    try:
-        with CaptureQueriesContext(connection) as context:
+    with _blocking_manager.unblock():
+        try:
+            with CaptureQueriesContext(connection) as context:
+                yield
+        except Exception:
             yield
-    except Exception:
-        yield
-    else:
-        querycount = len(context.captured_queries)
+        else:
+            querycount = len(context.captured_queries)
 
-        if querycount:
-            capman = config.pluginmanager.getplugin('capturemanager')
-            capman.suspend_global_capture()
+            if querycount:
+                capman = config.pluginmanager.getplugin('capturemanager')
+                capman.suspend_global_capture()
 
-            tw = config.get_terminal_writer()
-            tw.write(' (# of queries executed: {})'.format(querycount))
+                tw = config.get_terminal_writer()
+                tw.write(' (# of queries executed: {})'.format(querycount))
 
-            capman.resume_global_capture()
-    finally:
-        _blocking_manager.restore()
+                capman.resume_global_capture()
 
 
 @pytest.hookimpl(hookwrapper=True)
 def pytest_runtest_call(item):
-    count_parameter = item.config.option.querycount
-    if count_parameter is None:
+    if item.config.option.querycount is None:
+        yield
+        return
+
+    if ('django_db_setup' not in item.fixturenames
+            # XXX: while "db" gets requested by the marker, it is not in
+            # item.fixturenames! (pytest issue)
+            and not item.get_closest_marker('django_db')):
         yield
         return
 
diff --git i/tests/test_report.py w/tests/test_report.py
index 22a1fd5..fdc7a81 100644
--- i/tests/test_report.py
+++ w/tests/test_report.py
@@ -11,6 +11,7 @@ def test_zero_queries():
         result.stdout.fnmatch_lines([
             '*== top 5 tests with most queries ==*'
         ])
+        assert result.ret == 0
 
     def test_header_not_set_without_parameter(self, django_testdir):
         django_testdir.create_test_module('''
@@ -20,6 +21,7 @@ def test_zero_queries():
 
         result = django_testdir.runpytest_subprocess()
         assert 'tests with most queries' not in result.stdout.str()
+        assert result.ret == 0
 
     def test_disabled_when_noquerycount_is_also_used(self, django_testdir):
         django_testdir.create_test_module('''
@@ -28,9 +30,10 @@ def test_zero_queries():
         ''')
 
         result = django_testdir.runpytest_subprocess(
-            '--querycount=5 --noquerycount'
+            '--querycount=5', '--noquerycount'
         )
         assert 'tests with most queries' not in result.stdout.str()
+        assert result.ret == 0
 
     def test_query_optimization_tips_for_the_current_version_of_django(
         self,
@@ -55,6 +58,7 @@ def test_zero_queries():
         )
 
         assert url in result.stdout.str()
+        assert result.ret == 0
 
     def test_querycount_report_lines(self, django_testdir):
         django_testdir.create_test_module('''
@@ -95,6 +99,7 @@ def test_zero_queries():
         assert 'test_one_query' in lines[1]
         assert 'test_failed' in lines[2]
         assert 'test_zero_queries' in lines[3]
+        assert result.ret == 1
 
     def test_report_all_lines_on_querycount_zero(self, django_testdir):
         django_testdir.create_test_module('''
@@ -123,6 +128,7 @@ def test_two_queries():
         )
         assert 'test_two_queries' in lines[0]
         assert 'test_one_query' in lines[1]
+        assert result.ret == 0
 
     def test_should_report_fixture_queries(self, django_testdir):
         django_testdir.create_test_module('''
@@ -145,3 +151,4 @@ def test_without_queries(one_query):
         )
 
         assert '(# of queries executed: 1)' in result.stdout.str()
+        assert result.ret == 0

@blueyed
Contributor

blueyed commented Sep 8, 2018

Filed pytest-dev/pytest#3959 for the XXX in the diff above.

@blueyed
Contributor

blueyed commented Sep 8, 2018

@renanivo
Please let me know if I should finish / push this, and if I can use your branch (master!) for this. Otherwise I would create a new PR.

@auvipy
Please re-review.

@blueyed
Contributor

blueyed commented Sep 8, 2018

On top:

This fixes the count for teardown being the same as for call, by keeping them separate.

commit 0de5df0 (HEAD -> querycount)
Author: Daniel Hahler <[email protected]>
Date:   Sat Sep 8 22:59:37 2018 +0200

    fixup! WIP: https://github.com/pytest-dev/pytest-django/pull/412#issuecomment-419659376

diff --git a/pytest_django/plugin.py b/pytest_django/plugin.py
index 6a0f0f2..aca7b8f 100644
--- a/pytest_django/plugin.py
+++ b/pytest_django/plugin.py
@@ -350,11 +350,15 @@ def _restore_class_methods(cls):
         cls.tearDownClass = tearDownClass
 
 
+@pytest.hookimpl(hookwrapper=True)
 def pytest_runtest_setup(item):
     if django_settings_is_configured() and is_django_unittest(item):
         cls = item.cls
         _disable_class_methods(cls)
 
+    with _handle_query_count(item, 'setup'):
+        yield
+
 
 @pytest.hookimpl(hookwrapper=True)
 def pytest_fixture_setup(fixturedef, request):
@@ -385,9 +389,8 @@ def pytest_fixture_setup(fixturedef, request):
 
                 capman.resume_global_capture()
 
-
-@pytest.hookimpl(hookwrapper=True)
-def pytest_runtest_call(item):
+@contextlib.contextmanager
+def _handle_query_count(item, when):
     if item.config.option.querycount is None:
         yield
         return
@@ -399,13 +402,30 @@ def pytest_runtest_call(item):
         yield
         return
 
+    if when == 'setup':
+        # Ensure DB is setup, and connection points at test DB.
+        item._request.getfixturevalue('django_db_setup')
+
     from django.test.utils import CaptureQueriesContext
     from django.db import connection
 
-    with CaptureQueriesContext(connection) as context:
+    with _blocking_manager.unblock():
+        with CaptureQueriesContext(connection) as context:
+            yield
+
+    item.add_report_section(when, 'queries', context.captured_queries)
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_call(item):
+    with _handle_query_count(item, 'call'):
         yield
 
-    item.add_report_section('call', 'queries', context.captured_queries)
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_teardown(item):
+    with _handle_query_count(item, 'teardown'):
+        yield
 
 
 def pytest_terminal_summary(terminalreporter):
@@ -424,23 +444,29 @@ def pytest_terminal_summary(terminalreporter):
 
     def get_query_count(report):
         sections = dict(report.sections)
-        return len(sections.get('Captured queries call', []))
+        when = report.when
+        return len(sections.get('Captured queries ' + when, []))
 
     reports = (
+        terminalreporter.stats.get('error', []) +
         terminalreporter.stats.get('failed', []) +
         terminalreporter.stats.get('passed', [])
     )
 
-    reports.sort(key=get_query_count)
-    reports.reverse()
+    counts = []
+    for report in reports:
+        for section in report.sections:
+            for when in ('setup', 'call', 'teardown'):
+                if section[0] == 'Captured queries ' + when:
+                    counts.append((len(section[1]), when, report))
+    counts.sort(key=lambda x: (-x[0], x[1], x[2].nodeid))
 
-    for report in reports[reports_slice]:
-        count = get_query_count(report)
+    for (count, when, report) in counts[reports_slice]:
         nodeid = report.nodeid.replace("::()::", "::")
 
         terminalreporter.write_line('{count: <4} {when: <8} {nodeid}'.format(
             count=count,
-            when=report.when,
+            when=when,
             nodeid=nodeid
         ))
 
diff --git a/tests/test_report.py b/tests/test_report.py
index fdc7a81..b88eb48 100644
--- a/tests/test_report.py
+++ b/tests/test_report.py
@@ -65,21 +65,31 @@ def test_querycount_report_lines(self, django_testdir):
             import pytest
             from django.db import connection
 
+            @pytest.fixture
+            def fix(db):
+                with connection.cursor() as cursor:
+                    cursor.execute('SELECT 1')
+                    yield
+                    cursor.execute('SELECT 4')
+                    cursor.execute('SELECT 5')
+                    cursor.execute('SELECT 6')
+
+            def test_123_queries(fix):
+                with connection.cursor() as cursor:
+                    cursor.execute('SELECT 2')
+                    cursor.execute('SELECT 3')
+
             @pytest.mark.django_db
             def test_one_query():
                 with connection.cursor() as cursor:
                     cursor.execute('SELECT 1')
 
-                assert True
-
             @pytest.mark.django_db
             def test_two_queries():
                 with connection.cursor() as cursor:
                     cursor.execute('SELECT 1')
                     cursor.execute('SELECT 1')
 
-                assert True
-
             @pytest.mark.django_db
             def test_failed_one_query():
                 with connection.cursor() as cursor:
@@ -88,18 +98,19 @@ def test_failed_one_query():
                 assert False
 
             def test_zero_queries():
-                assert True
+                pass
         ''')
 
-        result = django_testdir.runpytest_subprocess('--querycount=4')
-        lines = result.stdout.get_lines_after(
-            '*top 4 tests with most queries*'
-        )
-        assert 'test_two_queries' in lines[0]
-        assert 'test_one_query' in lines[1]
-        assert 'test_failed' in lines[2]
-        assert 'test_zero_queries' in lines[3]
+        result = django_testdir.runpytest_subprocess('--querycount=10', '-s')
         assert result.ret == 1
+        result.stdout.fnmatch_lines([
+            '*=== top 10 tests with most queries ===*',
+            '2 * call *test_123_queries*',
+            '2 * call *test_two_queries*',
+            '1 * call *test_failed_one_query*',
+            '1 * call *test_one_query*',
+            '1 * setup *test_123_queries*',
+        ])
 
     def test_report_all_lines_on_querycount_zero(self, django_testdir):
         django_testdir.create_test_module('''
@@ -111,24 +122,20 @@ def test_one_query():
                 with connection.cursor() as cursor:
                     cursor.execute('SELECT 1')
 
-                assert True
-
             @pytest.mark.django_db
             def test_two_queries():
                 with connection.cursor() as cursor:
                     cursor.execute('SELECT 1')
                     cursor.execute('SELECT 1')
-
-                assert True
         ''')
 
         result = django_testdir.runpytest_subprocess('--querycount=0')
-        lines = result.stdout.get_lines_after(
-            '*top tests with most queries*'
-        )
-        assert 'test_two_queries' in lines[0]
-        assert 'test_one_query' in lines[1]
         assert result.ret == 0
+        result.stdout.fnmatch_lines([
+            '===* top tests with most queries *===',
+            '2 * call * tpkg/test_the_test.py::test_two_queries',
+            '1 * call * tpkg/test_the_test.py::test_one_query',
+        ])
 
     def test_should_report_fixture_queries(self, django_testdir):
         django_testdir.create_test_module('''
@@ -149,6 +156,7 @@ def test_without_queries(one_query):
             '--setup-show',
             '--querycount=5'
         )
-
-        assert '(# of queries executed: 1)' in result.stdout.str()
         assert result.ret == 0
+        result.stdout.fnmatch_lines([
+            '* SETUP * F one_query (# of queries executed: 1)',
+        ])

@blueyed
Contributor

blueyed commented Mar 13, 2019

Is this dead?

@blueyed blueyed removed this from the 3.4 milestone Mar 13, 2019
@renanivo
Contributor Author

@blueyed To be honest, I tried to finish this pull request and apply the changes you proposed, but ran into a lot of problems. IMO, if you could apply them for me, that would be the best course of action.

@renanivo
Contributor Author

renanivo commented May 7, 2020

I'm closing this PR, as it was opened more than 3 years ago and has been stale for more than a year.

@renanivo renanivo closed this May 7, 2020
beyondgeeks pushed a commit to beyondgeeks/django-pytest that referenced this pull request Sep 6, 2022