Sometimes when running tests with semi-random data (e.g. from factory_boy), you know the test will fail if the generated data matches some condition. Rather than execute the test when you know it will fail, it would be nice if you could tell pytest to re-run that test, probably configurable up to N times. For example:
def test_something_with_different_people():
    person1, person2 = PersonFactory.build_batch(2)
    # if this is False, rerun the test up to N times
    pytest.assume(person1.name != person2.name)
    # some test that depends on the condition being true
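In the meantime, something close to the requested behaviour can be approximated with a plain retry decorator. This is only a minimal sketch: `retry_on_unmet_precondition`, `PreconditionNotMet`, and `assume_or_retry` are hypothetical names invented here (not part of pytest), and `PersonFactory` is the factory_boy factory from the example above.

```python
import functools

import pytest


class PreconditionNotMet(Exception):
    """Signals that the generated data did not satisfy the test's precondition."""


def assume_or_retry(condition):
    # Hypothetical stand-in for the proposed pytest.assume().
    if not condition:
        raise PreconditionNotMet()


def retry_on_unmet_precondition(retries=5):
    """Re-run the decorated test up to `retries` times while the precondition fails."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            for _ in range(retries):
                try:
                    return test_func(*args, **kwargs)
                except PreconditionNotMet:
                    continue
            # Give up instead of failing: the generator never produced usable data.
            pytest.skip("precondition not met after {} attempts".format(retries))
        return wrapper
    return decorator


@retry_on_unmet_precondition(retries=5)
def test_something_with_different_people():
    person1, person2 = PersonFactory.build_batch(2)
    assume_or_retry(person1.name != person2.name)
    # ... assertions that depend on the two names being different ...
```

Whether to skip or fail after the last attempt is a judgment call; skipping at least keeps an uncooperative data generator from looking like a genuine test failure.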
GitMate.io thinks possibly related issues are #2139 (Pytest own tests and hypothesis requirement), #2635 (test logging interaction no longer works due to hypothesis usage), #1946 (Test Generator similar to pythoscope?), #916 (Deep integration between Hypothesis and py.test is currently impossible), and #2946 (Failing pytester tests with features branch).
This is inspired by `assume()` from hypothesis.
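For reference, this is roughly how `assume()` is used in hypothesis itself: an example that fails the assumption is discarded and a new one is drawn, rather than the whole test being re-run by pytest. The strategies below are only an illustration.

```python
from hypothesis import assume, given, strategies as st


@given(st.text(min_size=1), st.text(min_size=1))
def test_something_with_different_people(name1, name2):
    # assume() tells hypothesis to throw away this example and draw
    # another one whenever the precondition does not hold.
    assume(name1 != name2)
    assert name1 != name2  # placeholder for the real assertions
```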