BENCH: skip benchmarks instead of hiding them when SCIPY_XSLOW=0 #12732
Conversation
The "xslow" benchmarks should not be hidden from asv when the environment variable is not set, because then any results obtained for them appear to be for non-existing benchmarks, and may be deleted and will not be included in reporting. Instead, skip the benchmarks with the usual mechanism.
Thanks for fixing this.
TL;DR: I have three problems running benchmarks on Windows. (They're not unique to this PR, so let me know if you'd like them posted somewhere else.)
I installed ASV with conda (…)
(same error as with …). I uninstalled ASV and reinstalled with …
I've never seen the FatalError; asv's CI does run with conda on Windows.
The errors with run.py and runtests.py are issues in the SciPy run.py/runtests.py scripts. For the OpenBLAS build issue, setting the OPENBLAS environment variable or putting a suitable site.cfg in %HOME%\.numpy-site.cfg are the alternatives.
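For illustration, a minimal sketch of what such a config might look like (the library names and paths below are placeholder assumptions, not a known-good setup):

```
# Hypothetical %HOME%\.numpy-site.cfg pointing the NumPy/SciPy build at
# OpenBLAS; adjust the paths to the local installation.
[openblas]
libraries = openblas
library_dirs = C:\opt\openblas\lib
include_dirs = C:\opt\openblas\include
```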
`asv check` is passing in CI at least; the logic is a little complicated, but the idea makes sense to me. It's not really surprising that issues crop up when people try to run `asv` themselves; it can take a little tweaking on a given machine.
There are no Travis CI failures; the Azure failures are fixed in master after #12730.
Oops, yes, I meant Azure.
```python
try:
    self.numtrials = int(os.environ['SCIPY_GLOBAL_BENCH_NUMTRIALS'])
except (KeyError, ValueError):
    self.numtrials = 100

self.dump_fn = os.path.join(os.path.dirname(__file__), '..', 'global-bench-results.json')
self.results = {}

def setup(self, name, ret_value, solver):
    if not self.enabled:
```
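As a usage note (a hedged sketch based on the diff above, not documented behavior): the trial count can be overridden through the environment, and anything that doesn't parse as an integer falls back to the default of 100. For example:

```python
import os
import subprocess

# Hypothetical quick local run: enable the slow suite but cut the number
# of trials per function. The variable names come from the diff above;
# 'asv run --quick' is asv's standard one-iteration run mode.
env = dict(os.environ, SCIPY_XSLOW='1', SCIPY_GLOBAL_BENCH_NUMTRIALS='10')
subprocess.run(['asv', 'run', '--quick'], env=env, check=True)
```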
There is

```python
if not is_xslow():
    _enabled_functions = ['AMGM']
```

above, suggesting that `AMGM` is supposed to run even when `SCIPY_XSLOW` is not 1, yet:

```python
self.enabled = is_xslow()
...
if not self.enabled:
    ...
    raise NotImplementedError()
```

so none of the global benchmarks run when `SCIPY_XSLOW` is not 1. I think I'm reading this correctly, as no benchmarks run locally when `SCIPY_XSLOW` is not 1, and there are no results for GlobalBench on Pauli's server (`AMGM` is listed, but there is no data).
Is this intentional, or should a subset of the benchmarks run when `SCIPY_XSLOW` is not 1?
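If a default subset is desired, one possibility is to skip per function rather than unconditionally. A minimal sketch, assuming an `is_xslow` helper like the one quoted above (the gating details are illustrative, not the PR's actual code):

```python
import os

def is_xslow():
    # Assumed helper: SCIPY_XSLOW=1 enables the full extra-slow suite.
    try:
        return int(os.environ.get('SCIPY_XSLOW', '0')) != 0
    except ValueError:
        return False

class GlobalBench:
    def setup(self, name, ret_value, solver):
        # Under SCIPY_XSLOW=1 everything runs; otherwise only a fast
        # subset such as AMGM, instead of skipping every benchmark.
        enabled_functions = None if is_xslow() else ['AMGM']
        if enabled_functions is not None and name not in enabled_functions:
            raise NotImplementedError("skipped unless SCIPY_XSLOW=1")
```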
Yes, it could be an empty list instead. Or, if the benchmark is fast enough, it could also make sense to enable it by default.
OK, I can pick some out so that it takes a certain amount of time. Approximately how long should it take to run the global optimization benchmarks when `SCIPY_XSLOW` is not 1?
The "xslow" benchmarks should not be hidden from asv when the
environment variable is not set, because then any results obtained for
them appear to be for non-existing benchmarks, and may be deleted and
will not be included in reporting. Instead, skip the benchmarks with the
usual mechanism.