Refactored and fixed flaky resize tests #3907
Conversation
Compare 49b9ba0 to 5372b13
Minor comment below. Otherwise LGTM! Thanks Victor.
Co-authored-by: Philip Meier <[email protected]>
Thanks @vfdev-5! Some minor comments, but mostly LGTM.
@pytest.mark.parametrize('interpolation', [BILINEAR, BICUBIC, NEAREST])
def test_resize(device, dt, size, max_size, interpolation, tester):
    torch.manual_seed(12)
Do we have to set the seed? If yes, maybe add a comment with a potential FIXME?
I think we should create deterministic random input instead of a purely random one.
OK, we generally don't do that, but I agree with you that it's better practice when we know the tests aren't flaky.
Another option is to actually parametrize the seed, which we can do if the tests are very fast.
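The "parametrize the seed" option mentioned above could look like the following sketch; the test body and input are hypothetical and only illustrate the pattern:

```python
import pytest
import torch


# Each seed becomes its own test case, so flakiness tied to a particular
# random draw shows up as a specific failing parametrization.
@pytest.mark.parametrize("seed", [0, 1, 2])
def test_resize_with_seed(seed):
    torch.manual_seed(seed)
    img = torch.rand(3, 16, 16)  # placeholder input; a real test would resize it
    assert img.shape == (3, 16, 16)
```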
Otherwise I would put it module-wise. Seed as parameter is something new to me :)
I think this is already setting the seed module-wise, at least for the tests that will be run after this one.
In the long term we'll use something local like `rng = np.random.RandomState(0)` (surely there's a PyTorch equivalent?) and generate stuff from the local `rng` variable, but for now this is good enough.
> In the long term we'll use something local like `rng = np.random.RandomState(0)` (surely there's a PyTorch equivalent?) and generate stuff from the local `rng` variable, but for now this is good enough.
Yes, something like that: https://pytorch.org/docs/stable/generated/torch.Generator.html#torch.Generator.manual_seed
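A minimal sketch of that PyTorch equivalent: a local `torch.Generator` keeps the test's randomness reproducible without touching the global RNG state.

```python
import torch

# Local generator seeded explicitly; manual_seed returns the generator itself.
rng = torch.Generator().manual_seed(0)

# Draws from the local generator, leaving the global RNG untouched.
a = torch.rand(3, 3, generator=rng)

# A fresh generator with the same seed reproduces the same draw.
rng2 = torch.Generator().manual_seed(0)
b = torch.rand(3, 3, generator=rng2)

assert torch.equal(a, b)
```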
For all tests that do not perform a loop and thus only need one seed per test invocation, we could use an autouse fixture that is defined in the root conftest.py
import pytest
import torch

@pytest.fixture(scope="function", autouse=True)
def random_seed():
    torch.manual_seed(12)
LGTM! Thanks for your quick fixes.
Summary: Co-authored-by: Philip Meier <[email protected]> Reviewed By: vincentqb, cpuhrsch Differential Revision: D28679988 fbshipit-source-id: 0851bd362c25128f143216c063b13ba4ef6c88f1
Addresses #3906 (comment)
Description: