# [Easy and help welcome] Migrating all tests from `run_all_in_graph_and_eager_mode` to the new pytest fixture (#1328)
## Comments
- I suggest using […]. And as for the lack of graph tests, I suggest every op should add […]
- @fsx950223 you're right, we need more discussion about when and how to use `tf.function`. We may sometimes have complex signatures (when multiple dtypes are accepted, for example) and it's hard to express that with […]
- @fsx950223 I tried recreating this error, and the problem arose with the arguments passed to the function(s). Once […]
- Yeah, it's a problem. You could fix the bug by decorating the target function with […]
- Why would you disable autograph in […]
- Thanks @autoih for all the help!
- @gabrieldemarmiesse @autoih Much thanks to both of you for all of your work in accomplishing this! The new test suite is significantly easier to work with and more understandable from a graph vs. eager perspective.
Referenced pull requests:

- tensorflow#1405: Moved test out of `run_all_in_graph_and_eager_mode` in softshrink. See tensorflow#1328. Small fix.
- tensorflow#1430: Moved test out of `run_all_in_graph_and_eager_mode` in sparse_image_warp. See tensorflow#1328. Update `tensorflow_addons/image/sparse_image_warp_test.py`.
## Issue description

This issue is to track the progress of the migration. Help is welcome; we have a lot of work to do, but it's fairly easy. You can use the pull requests made before as examples.
The idea is that before, using the `@run_all_in_graph_and_eager_mode` decorator meant writing the tests in the TensorFlow 1.x style, with things like `tf.compat.v1.global_variables_initializer`, `self.evaluate`, and other scary functions like that.

Now we use `@pytest.mark.usefixtures("maybe_run_functions_eagerly")`, which runs the test function twice: once normally, and once with `tf.config.experimental_run_functions_eagerly(True)`. This means the tests are eager-first: there is no need to initialize variables, and you get results with `.numpy()`. To test the values, use the `numpy.testing` module instead of `tf.test.TestCase` methods like `self.assertAllClose` and such. A sketch of how such a fixture can be written is shown below.
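This is a minimal sketch of how such a two-mode fixture could be written; it is an illustration only, not the actual fixture from the tensorflow_addons test configuration:

```python
import pytest
import tensorflow as tf

# Parametrizing the fixture makes pytest collect each test twice:
# once with functions traced into graphs, once fully eager.
@pytest.fixture(params=[False, True], ids=["graph", "eager"])
def maybe_run_functions_eagerly(request):
    tf.config.experimental_run_functions_eagerly(request.param)
    yield
    # Restore the default so later tests are unaffected.
    tf.config.experimental_run_functions_eagerly(False)
```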
You can take the two pull requests already made as references (a short before/after sketch follows the list):
#1288
#1327
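For a concrete before/after comparison, here is a sketch using `softshrink` from tensorflow_addons (the op migrated in tensorflow#1405); the test values and tolerance are illustrative, not copied from those PRs:

```python
import numpy as np
import pytest
import tensorflow as tf
from tensorflow_addons.activations import softshrink

# Old style, under @run_all_in_graph_and_eager_mode in a tf.test.TestCase:
#
#     def test_softshrink(self):
#         result = softshrink(tf.constant([-2.0, 0.0, 2.0]))
#         self.assertAllClose(self.evaluate(result), [-1.5, 0.0, 1.5])

# New style: a plain pytest function, eager-first, numpy for assertions.
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_softshrink():
    result = softshrink(tf.constant([-2.0, 0.0, 2.0]))
    np.testing.assert_allclose(result.numpy(), [-1.5, 0.0, 1.5], rtol=1e-6)
```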
When doing a pull request, please do not migrate more than 2 tests at once.
The policy is that when we're testing a simple function (for example, one using a custom op), there is no need to use `tf.function` and no need to use `@pytest.mark.usefixtures("maybe_run_functions_eagerly")`, because we're not afraid of any Python side effects.

When working with complex functions (for loops, if/else with tensors...), we need to add `tf.function` and the `@pytest.mark.usefixtures("maybe_run_functions_eagerly")` fixture to make sure we don't rely on some Python-only behavior. To avoid having to write a complex `input_signature` in `tf.function`, we can isolate the sensitive part (the if/else/for loop with tensors) in a separate `tf.function` and not decorate the main one. A minimal sketch of this pattern follows.
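Here is a minimal sketch of that isolation pattern, using a made-up op `clip_positive` (the function names and values are hypothetical, for illustration only):

```python
import numpy as np
import pytest
import tensorflow as tf

# Only the tensor-dependent control flow is wrapped in tf.function,
# so the public function needs no complex input_signature.
@tf.function
def _replace_negatives_with_zero(x):
    # if/else on tensor values: the part that must work in graph mode.
    if tf.reduce_all(x >= 0):
        return x
    return tf.where(x < 0, tf.zeros_like(x), x)

def clip_positive(x, scale=1.0):
    # The main entry point stays undecorated: plain Python,
    # flexible arguments, no input_signature to maintain.
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    return scale * _replace_negatives_with_zero(x)

# The matching test runs both with and without
# tf.config.experimental_run_functions_eagerly(True), so any reliance
# on Python-only behavior inside the tf.function gets caught.
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_clip_positive():
    result = clip_positive([-1.0, 2.0], scale=2.0)
    np.testing.assert_allclose(result.numpy(), [0.0, 4.0])
```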