Q: Categorizing and grouping dynamically-generated tests? #3587
GitMate.io thinks possibly related issues are #2424 (dynamically generated fixtures), #3100 (Cannot dynamically mark a test as xfail), #2550 (Parametrized tests are grouped across files/modules), #3070 (Generate tests parametrizing many cases), and #2519 (Dynamically generated test methods not distinguishable on failure).
pytest currently has no concept of tests that form chains. You can alter this in part via pytest_collection_modifyitems: if you have a reliable way to identify the use cases, you can use that hook to reorder the items after pytest has done its fixture optimization.
Thanks for the response. I ended up doing the following. First I needed to figure out exactly what was in the 'Function' objects:
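The snippet isn't shown above, but one way to do that kind of inspection, sketched here as a `conftest.py` hook and not necessarily the exact code that was used, is to dump each collected item's attributes:

```python
# conftest.py -- debugging aid: print every attribute of each collected test
# item so the attribute carrying the generated use-case id can be spotted.
def pytest_collection_modifyitems(config, items):
    for item in items:  # each item is a _pytest.python.Function instance
        print(item.nodeid)
        for name, value in sorted(vars(item).items()):
            print("    %s = %r" % (name, value))
```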
This spat out the contents of each 'Function' object for the tests being collected, and after identifying the correct attribute, I sorted on it:
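The sort itself isn't shown either; done inside the same hook, it would look roughly like this (using `getattr` defensively, since `_genid` is a pytest-internal attribute):

```python
# conftest.py -- reorder collected items so tests sharing a generated id
# (i.e. the same use case) run back to back.
def pytest_collection_modifyitems(config, items):
    items.sort(key=lambda item: getattr(item, "_genid", None) or "")
```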
The `_genid` attribute contained the dynamically generated use_case names I was passing into the fixture, so now the output/execution order is at least correct:
I'll keep looking at the docs, but now my next goal is to separate / organize the output of the tests by use_case as well. For example, in JavaScript mocha tests, you have the following syntax:
And the result of running that file comes out to be:
Is there an easy way to achieve this kind of output? Could you point me to the relevant docs? (there are a lot and I'm totally new to this framework). My ideal output for my tests would be something like below:
Thanks for your time, and excellent work on PyTest to the whole team.
Thanks @mvxt, I believe we can close this now.
So I have a testing scenario as follows:
I have use cases I am testing, and all of them use the same "steps" that need to be tested, with each subsequent step depending on the success of the previous. My requirements are as follows:
For example, if I have use cases A and B, and test methods 1, 2, and 3 for each, I expect things to happen in this order:
A1 -> A2 -> A3, then
B1 -> B2 -> B3
OR B#... then A#.
Order of use_case execution doesn't matter, but I cannot have test 1 run for both A and B before moving on to test 2: all of A's tests have to run, then all of B's. And the output needs to be organized this way as well (I want to see all tests for A together, followed by all tests for B, etc.).
So I've created a parametrized test fixture, and I've created a test class marked as incremental. It looks like this:
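Only the description of that code made it into the issue text; a sketch of the likely shape, with made-up names and trivial assertions standing in for the real steps (the `use_case` fixture is assumed to live in `conftest.py`, shown below):

```python
import pytest


@pytest.mark.incremental  # handled by hooks in conftest.py (see below)
class TestUseCaseSteps(object):
    def test_step_1(self, use_case):
        # stand-in for the real first step of the workflow
        assert use_case is not None

    def test_step_2(self, use_case):
        # depends on test_step_1 having passed
        assert use_case is not None

    def test_step_3(self, use_case):
        # depends on test_step_2 having passed
        assert use_case is not None
```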
In my `conftest.py` file, I've generated my needed use cases dynamically and I pass them in as follows:
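The conftest contents aren't shown here either; a plausible sketch combines a parametrized `use_case` fixture with the "incremental" hooks from the pytest documentation (`generate_use_cases` is a hypothetical stand-in for the dynamic generation):

```python
# conftest.py
import pytest


def generate_use_cases():
    # hypothetical stand-in for however the use-case names are built at runtime
    return ["use_case_A", "use_case_B"]


@pytest.fixture(params=generate_use_cases())
def use_case(request):
    return request.param


# "incremental" support, adapted from the pytest documentation: once a test in
# an incremental class fails, the remaining tests in that class are xfailed.
def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            item.parent._previousfailed = item


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
```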
However, when I execute it, the output looks like this:
To verify it wasn't just the pytest output, I modified the tests to print those lines to a shared file. The file looks like this:
I purposely generated a failure in test function 2, and function 3 then auto-fails for both use cases A and B, as expected.
But as it stands, I've only met two (2) of my above four (4) conditions.
How do I enforce that all the tests (functions) for one use case (class) run before another can run?
Machine Specs:
platform darwin -- Python 2.7.15, pytest-3.6.1, py-1.5.3, pluggy-0.6.0 -- /usr/local/opt/python@2/bin/python2.7