
When using "-n auto", count the number of physical CPU cores instead of logical ones #553


Closed · utapyngo opened this issue Jul 8, 2020 · 7 comments · Fixed by #560

utapyngo (Contributor) commented Jul 8, 2020

I have a laptop with 6 physical CPU cores and hyper-threading. The code from `auto_detect_cpus` gives 12:

```python
from os import sched_getaffinity
len(sched_getaffinity(0))  # -> 12
```

The optimal number of parallel processes, though, is 6: tests run slower with 12 processes.
Did you consider using `psutil.cpu_count(logical=False)`?
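The difference between the two counts can be seen directly. A minimal sketch (the variable names `logical` and `physical` are illustrative; `os.sched_getaffinity` is Linux-only, and psutil is treated as an optional import):

```python
import os

# Logical CPUs available to this process -- counts hyper-threads,
# so 12 on a 6-core hyper-threaded laptop (Linux-only API).
logical = len(os.sched_getaffinity(0))

# Physical cores via psutil, if installed. Note that
# psutil.cpu_count(logical=False) may return None when the
# physical count cannot be determined.
try:
    import psutil
    physical = psutil.cpu_count(logical=False)
except ImportError:
    physical = None

print(f"logical={logical} physical={physical}")
```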

wei-hai commented Jul 21, 2020

+1

RonnyPfannschmidt (Member) commented

psutil has not been considered as a dependency yet

Zac-HD (Member) commented Jul 22, 2020

For what it's worth, this would be very useful for me too, and I wouldn't mind adding a dependency on psutil to support it.

nicoddemus (Member) commented

@RonnyPfannschmidt anything against depending on psutil? It seems like a pretty popular and stable package.

Even if we decide not to depend on it explicitly, we can use `cpu_count(logical=False)` only when psutil is available.
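The optional-dependency approach suggested above could look something like this. A minimal sketch, not the actual pytest-xdist implementation; the function name `auto_detect_cpus` follows the one mentioned earlier in the thread:

```python
def auto_detect_cpus():
    """Return the number of worker processes to spawn for -n auto."""
    # Prefer the physical core count when psutil is importable.
    try:
        import psutil
    except ImportError:
        psutil = None
    if psutil is not None:
        count = psutil.cpu_count(logical=False)
        if count:  # may be None when undetermined
            return count
    # Fall back to the logical count (the current behaviour).
    import os
    if hasattr(os, "sched_getaffinity"):  # Linux-only
        return len(os.sched_getaffinity(0))
    import multiprocessing
    return multiprocessing.cpu_count()
```

This keeps psutil strictly optional: users who want physical-core detection install it, everyone else keeps the existing logical-count behaviour.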

RonnyPfannschmidt (Member) commented

There is nothing against it

AdrienPensart commented Aug 18, 2020

Guys, you changed something important here. Now, how can we specify `$(nproc)` in pytest.ini or pyproject.toml?
If `-n auto` does not work as before, the tests are not adapted when run on different kinds of machines :/
If tests are IO-bound rather than CPU-bound, this drastically hurts testing time.

Maybe a better option like `-n [auto|ncpus|ncores|{n}]` would be nice?
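As a side note, a project default for the worker count can be pinned in configuration via pytest's `addopts` setting (an illustrative fragment, assuming pytest-xdist is installed):

```ini
# pytest.ini -- pass -n auto on every run by default
[pytest]
addopts = -n auto
```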

nicoddemus (Member) commented

Hi @AdrienPensart,

> If `-n auto` does not work as before, the tests are not adapted when run on different kinds of machines :/

`-n auto` should still work; can you open a new issue if it is not working for you?


6 participants