Activity
vkarak commented on Dec 19, 2024

Bumping the priority of this as it's quite useful. For performance tests, it'd be useful to mark a specific performance variable as an expected failure. Also, we should be able to attach a reason for the expected failure (e.g., "known bug X"), similar to the various `skip()` methods.

vkarak commented on Mar 20, 2025
For this, I was thinking of the following API:
At the low level, there should be an `xfail()` test method, very similar to `skip()`, that will raise a specific exception denoting the expected failure with a custom message. This will be handled by the runtime similarly to the way the skip exception is handled: print an "XFAIL" note instead of "OK" and add the user message.

The `xfail()` method would work nicely for sanity checking, as it could be called from within the `@sanity_function`, but this is not the case for performance variables, which are checked against their references at a later point by the framework. For this reason, we should also have a builtin `@xfail` decorator, which could also be used to decorate a `@sanity_function`.
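For illustration, here is a minimal sketch of the low-level form inside a sanity function; `self.xfail()` is the proposed method (it does not exist yet), and the `'buggy_sys'` system name is made up for the example:

```python
import reframe as rfm
import reframe.utility.sanity as sn
from reframe.core.builtins import sanity_function


@rfm.simple_test
class HelloTest(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'echo'
    executable_opts = ['hello']

    @sanity_function
    def validate(self):
        # Proposed low-level call: behaves like skip(), but the test is
        # reported as XFAIL instead of FAIL ('buggy_sys' is made up).
        if self.current_system.name == 'buggy_sys':
            self.xfail('known bug X')

        return sn.assert_found(r'hello', self.stdout)
```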
This `@xfail` decorator should come in the following two forms:

- `@xfail("message", when={'var': 'value', ...})`
- `@xfail("message", when=lambda t: t.var == 'value')`
In the first form, we can support some pseudo-variables that we expect to be used often, such as `$sysenv`, which translates to the system/environment combination of the test, etc.

In the second form, the `when` argument is a callable that will be called by ReFrame with the test instance as its argument and should return `True` or `False` to determine whether the test is an expected failure or not.

Often, performance variables are defined explicitly by setting the `perf_variables` variable without using the `@performance_function` decorator. In this case, the following construct could be used once the `perf_variables` dictionary is populated:
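A hypothetical sketch of what such a construct might look like; the hook placement, the `'bandwidth'` perf variable, the `'buggy_sys'` system name and the import path of `xfail` are assumptions:

```python
import reframe as rfm
from reframe.core.builtins import run_after, xfail  # `xfail` import path is an assumption


class MyPerfTest(rfm.RunOnlyRegressionTest):
    # perf_variables is assumed to be populated elsewhere, e.g. by a base
    # library test that this test derives from.

    @run_after('setup')
    def mark_expected_failures(self):
        # Apply the proposed decorator directly to the performance function
        # stored under a specific perf variable ('bandwidth' and 'buggy_sys'
        # are made-up names).
        self.perf_variables['bandwidth'] = xfail(
            'known bug X',
            when=lambda t: t.current_system.name == 'buggy_sys'
        )(self.perf_variables['bandwidth'])
```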
This essentially applies the decorator explicitly to the performance function stored in a specific "perf variable". When using the `xfail` builtin like this, the `when` argument could be replaced with a tautology and the condition check moved outside:
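A hypothetical sketch of the same hook with the condition moved outside and `when` reduced to a tautology (same made-up names as above):

```python
@run_after('setup')
def mark_expected_failures(self):
    # Condition checked outside; the `when` argument becomes a tautology.
    if self.current_system.name == 'buggy_sys':
        self.perf_variables['bandwidth'] = xfail(
            'known bug X', when=lambda t: True
        )(self.perf_variables['bandwidth'])
```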
vkarak commented on Mar 21, 2025

I have some second thoughts on the above proposal. Decorating the `@performance_function` is not going to work for tests deriving from base library tests.

vkarak commented on Apr 1, 2025
Here is a second iteration. For performance failures, the best way is to encode them in the reference tuple, something like this:
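A hypothetical sketch of such a reference entry; the `xfail` wrapper, its signature and the partition/metric names are assumptions, not settled API:

```python
import reframe as rfm
from reframe.core.builtins import xfail  # hypothetical reference wrapper


class MyPerfTest(rfm.RunOnlyRegressionTest):
    # Wrapping a reference tuple marks a performance failure on that metric
    # as expected (XFAIL) instead of a plain failure.
    reference = {
        'daint:gpu': {
            'bandwidth': xfail('known bug X', (100.0, -0.05, 0.05, 'GB/s')),
            'latency': (5.0, -0.1, 0.1, 'us')   # regular reference, must pass
        }
    }
```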
As for sanity, we could either use a class-level decorator, optionally taking a callable to determine the condition:
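A hypothetical sketch of the class-level decorator, with and without the optional callable; decorator placement and names are assumptions:

```python
import reframe as rfm
from reframe.core.builtins import xfail  # hypothetical class-level decorator


# Unconditionally expected to fail
@rfm.simple_test
@xfail('known bug X')
class AlwaysXfailTest(rfm.RunOnlyRegressionTest):
    ...


# Expected to fail only when the callable returns True for the test instance
@rfm.simple_test
@xfail('known bug X', lambda t: t.current_system.name == 'buggy_sys')
class ConditionalXfailTest(rfm.RunOnlyRegressionTest):
    ...
```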
Or use a custom protocol function that the framework will call when a sanity failure occurs, to determine whether it is expected or not:
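A hypothetical sketch of the protocol-level hook; the method name, its signature and the return convention are assumptions:

```python
import reframe as rfm


class MyTest(rfm.RunOnlyRegressionTest):
    ...

    def expect_sanity_failure(self):
        # Hypothetical hook called by the framework when sanity fails:
        # returning a reason string marks the failure as expected (XFAIL);
        # returning None keeps it a regular failure.
        if self.current_system.name == 'buggy_sys':
            return 'known bug X'
```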
My idea is that the protocol-level method will be the actual implementation of what the decorator translates into, so both will be equivalent; `@xfail` is simply syntactic sugar.
As for how the expected failures will be handled, my original proposal is still valid.
`XFAIL` status does not update and does not show the test case counter #3525