Exit code consolidation #663


Open
cr-omermazig opened this issue May 18, 2021 · 11 comments

@cr-omermazig

We don't know in advance how many workers we need for a test run (it depends on information we compute in pytest_generate_tests), so if we end up with too many workers, we skip the redundant ones (as suggested here: https://stackoverflow.com/questions/33400071/skip-parametrized-tests-generated-by-pytest-generate-tests-at-module-level), and that part works.

The problem now is the exit code. For some reason, if some workers return exitstatus 0 (OK) and others return exitstatus 5 (NO_TESTS_COLLECTED), the overall exit code of the pytest run (from the master) is 5, where I want it to be 0, because all of the collected tests passed.

Is that a bug or the desired behavior? And if it's the latter, any ideas on how I can accomplish the behavior I want?

@RonnyPfannschmidt
Member

the code in the stackoverflow answer is incorrect

the skip happens instantly, as pytest.skip raises an exception - it NEVER has a return value

this in turn destroys test collection and gets you exit code 5

as it's not clear what you do with your workers, applying skip in some of them is absolutely wrong

please describe the actual desired behaviour

@cr-omermazig
Author

Thank you for the quick response!

Per pytest run, I can't know in advance how many workers I need. We have a configuration that is generated dynamically and determines that. So I only find out whether I need a worker once I'm already inside it (specifically, I need information from session.items to decide whether something has to run on this worker, and that's only available from pytest_generate_tests, which of course runs in every worker).

Because of that, I want to somehow stop/disable/ignore the worker from pytest_generate_tests. The only way I found was to skip the tests, but that caused the problem described above.

In summary - the desired behavior is to be able to make a worker stop running from inside the pytest_generate_tests hook (or from another early hook where I already have an initialized session object).

@RonnyPfannschmidt
Member

The structure you envision is not supportable in xdist; you need to decide in the control process - for the workers it's too late

@cr-idanhaim

Hi @RonnyPfannschmidt
I'm Omer's colleague.
What is the latest pytest/xdist hook we can use to set the -n flag, in order to choose the number of workers dynamically?
I found this example: pytest-dev/pytest#3302 (comment), but it's too early for our configuration (as Omer mentioned above, we need session.items).
Also, can we control/change the exitstatus of a worker at the end of its life cycle? We thought we could change it in the pytest_sessionfinish hook, but it doesn't affect the exitstatus. If so, what is the appropriate pytest/xdist hook to do that?
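What we tried, roughly - a conftest.py sketch; the integer exit-code values here are an assumption based on pytest's documented codes (0 = OK, 5 = NO_TESTS_COLLECTED), and this only seems to take effect in a plain run, not inside an xdist worker:

```python
# conftest.py sketch: map NO_TESTS_COLLECTED back to OK
# when the session finishes, leaving every other status alone.
def pytest_sessionfinish(session, exitstatus):
    if exitstatus == 5:  # pytest's NO_TESTS_COLLECTED
        session.exitstatus = 0  # pytest's OK
```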

@RonnyPfannschmidt
Member

@cr-idanhaim your use-case is not clear to me; the problem is that the process that sets -n will never see the items

unless I have a better idea of what you actually do, I can't give any sensible solution

right now there are no hooks you can use; the thing you want to do simply doesn't work in xdist as it is now

@cr-idanhaim

Thanks @RonnyPfannschmidt for the quick response.
I'll elaborate on our use case:
We run tests against a number of VMs, and the number of VMs to test is determined in our code dynamically (after the pytest command has been run).
Because of that, we don't know the number of VMs in advance, so we set a large -n flag (for example, -n 10 when we actually have only 2 VMs to test). We also use the --dist=each flag (I don't know if it's important to mention that).
We would like to disable/ignore the redundant workers, so for now we use pytest.skip, but as you mentioned, that's not the correct way.
As you said, it's not possible in xdist as it is now, so I think you can close this issue.

@RonnyPfannschmidt
Member

can you elaborate on how you determine the number of VMs? I'm under the impression this could be done differently

also, if you run all tests against VMs - how do you determine the VMs?

@cr-idanhaim

We have a config.py file containing a class that holds a dictionary with lists of VMs, for example:

class CiConfig:
    def __init__(self):
        self.machines = dict()
        self.machines['endpoint'] = [VM(ip='1.1.1.1'), VM(ip='2.2.2.2')]

In the pytest_generate_tests hook, we retrieve the machines and parametrize a fixture based on those VMs.
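Roughly like this - a hypothetical, self-contained sketch; the VM dataclass and the fixture name `vm` are stand-ins for our real code:

```python
from dataclasses import dataclass

@dataclass
class VM:
    ip: str

class CiConfig:
    def __init__(self):
        self.machines = {'endpoint': [VM(ip='1.1.1.1'), VM(ip='2.2.2.2')]}

# conftest.py sketch: parametrize any test that requests a "vm"
# fixture with the machines from the dynamically generated config
def pytest_generate_tests(metafunc):
    if 'vm' in metafunc.fixturenames:
        metafunc.parametrize('vm', CiConfig().machines['endpoint'])
```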

@RonnyPfannschmidt
Member

You can just use that to populate something for invoking pytest itself

Then you can have a tryfirst configure hook and just set -n as required
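A minimal sketch of that idea, under two assumptions: xdist stores the -n value in config.option.numprocesses, and in a real conftest.py the hook would be registered with @pytest.hookimpl(tryfirst=True) so it runs before xdist reads the option:

```python
def vm_count():
    # stand-in for reading CiConfig: return how many VMs we have
    machines = {'endpoint': ['1.1.1.1', '2.2.2.2']}
    return len(machines['endpoint'])

# conftest.py sketch; in real code decorate this with
# @pytest.hookimpl(tryfirst=True) so it runs before xdist
# consumes the worker-count option
def pytest_configure(config):
    config.option.numprocesses = vm_count()
```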

@cr-idanhaim

Do you mean to wrap the pytest command with a script that retrieves the number of VMs? If so, do we need to use pytest.main in that case (https://docs.pytest.org/en/stable/usage.html#calling-pytest-from-python-code)?

@RonnyPfannschmidt
Member

Depending on the setup, it's potentially OK to just do it in pytest_configure
