Alternative to run_all_in_graph_and_eager_mode #1288
Conversation
Thanks, this definitely simplifies the testing for new contributors! I think this should work, since the only support for graph mode in 2.x is `tf.function`. It's possible some people will want to use Addons through `tf.compat.v1`, but to me that's outside of our scope.

A couple of clarifying questions, and I also wish we had a better understanding of the Keras black magic. Checking the metrics module, it says it will run as a graph, but it's quite difficult for me to see where that is happening:
https://github.com/tensorflow/tensorflow/blob/v2.1.0/tensorflow/python/keras/metrics.py#L218

It's possible that this isn't being controlled with `tf.function`, in which case I think the tests should fail with or without the experimental config, so it's probably a non-issue.
tensorflow_addons/conftest.py (outdated)
```python
@pytest.fixture(scope="function", params=["eager_mode", "tf_function"])
def maybe_run_functions_eagerly(request):
    if request.param == "eager_mode":
        tf.config.experimental_run_functions_eagerly(True)
```
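For reference, here is a self-contained sketch of what the complete fixture could look like; the teardown that resets the flag is an assumption added here for illustration (the excerpt above only shows the setup):

```python
import pytest
import tensorflow as tf


@pytest.fixture(scope="function", params=["eager_mode", "tf_function"])
def maybe_run_functions_eagerly(request):
    # Every test that requests this fixture runs twice: once with
    # tf.function traced into a graph as usual, and once with all
    # tf.functions forced to execute eagerly.
    if request.param == "eager_mode":
        tf.config.experimental_run_functions_eagerly(True)
    yield
    # Assumed teardown: restore the default so later tests are unaffected.
    tf.config.experimental_run_functions_eagerly(False)
```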
So I agree this is preferable to a private API, but just want to note it is still experimental and could change on us:
tensorflow/community#218
Could we communicate to the TF team that this flag is really needed on our side, so that it gets out of experimental mode? It's a much-needed feature for debugging in general. Cf. #13 (comment)
LGTM thanks!
* Alternative to run_all_in_graph_and_eager_mode.
* Better names when printing stuff.
* Give another example.
* Get finalizer out.
* Moved the maybe_run_functions_eagerly to test_utils.
So, replacing this `run_all_in_graph_and_eager_mode` decorator is quite difficult, but it's worth it for two reasons:

1. We can choose precisely which functions run in graph mode (for layers it's the `call` method, for the metrics/losses it's `update_state`, ...). It allows us to enjoy eager mode outside those selected functions, and we don't have to do `self.session` or `self.evaluate(tf.compat.v1.initialize_global_variables(...))`; we can just call `.numpy()` to get results. This makes testing much easier and helps bring in new contributors who might be put off by tests that look like coding in TensorFlow 1.x (see the sketch just below this list).
2. `run_all_in_graph_and_eager_mode` doesn't actually run the code wrapped in `tf.function` in eager mode, so it's not actually helping us at all. I think it's just a TF 1.x relic, useful only to force graph mode when `tf.function` is not present. See [WIP] Should break in eager mode. #1289
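To make the first point concrete, here is a rough sketch of the difference in test style; `times_two` is a made-up stand-in for any Addons function wrapped in `tf.function`, not code from this PR:

```python
import numpy as np
import tensorflow as tf


@tf.function
def times_two(x):
    # Made-up op standing in for an Addons layer/metric/loss call.
    return x * 2.0


# Old style: subclass tf.test.TestCase and go through self.evaluate,
# which is what makes tests read like TensorFlow 1.x code.
class TimesTwoTest(tf.test.TestCase):
    def test_times_two(self):
        result = times_two(tf.constant([1.0, 2.0]))
        self.assertAllClose(self.evaluate(result), [2.0, 4.0])


# New style: a plain pytest function, eager by default, just call .numpy().
def test_times_two():
    result = times_two(tf.constant([1.0, 2.0]))
    np.testing.assert_allclose(result.numpy(), [2.0, 4.0])
```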
So how do we do that in a clean way? Well, that's more or less difficult.

From the discussion in #13, all public functions should have the `@tf.function` decorator. #807 shows that this is a problem when using Python variables, but in this case we should just use the `input_signature` to convert scalars to TensorFlow tensors automatically. See https://www.tensorflow.org/api_docs/python/tf/function . We can even add tests to ensure we don't draw the graph multiple times, using the method `get_concrete_function`.
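A sketch of what this could look like; the `scale` function, its shapes, and the test names are made up for illustration:

```python
import tensorflow as tf


# The input_signature converts plain Python scalars passed by users into
# float32 tensors, so a new Python value does not trigger a new trace.
@tf.function(
    input_signature=[
        tf.TensorSpec(shape=None, dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.float32),
    ]
)
def scale(x, factor):
    return x * factor


def test_scale_with_python_scalar():
    out = scale(tf.constant([1.0, 2.0]), 3.0)
    assert out.numpy().tolist() == [3.0, 6.0]


def test_scale_is_traced_once():
    # With an input_signature there is a single signature to trace, so the
    # concrete function can be retrieved without arguments and is cached.
    concrete = scale.get_concrete_function()
    assert concrete is scale.get_concrete_function()
```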
We already use `tf.function` many times in the codebase, and even when we don't use it, Keras enforces it with black magic when subclassing `Metric` or even `Layer`.
Let's then assume that all public functions have the `tf.function` decorator.

Here is my proposal. It uses pytest features, and it won't work when subclassing `tf.test.TestCase`. But I believe we don't need subclassing anyway for most of the tests present in tf.addons. Worst-case scenario, we can code it again for `tf.test.TestCase` with the absl parametrize decorator.
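As a usage sketch (assuming the fixture above is defined in `conftest.py` so pytest discovers it automatically; `times_two` is the same made-up op as in the earlier sketch), a test simply requests the fixture by name and pytest runs it once per parameter:

```python
import numpy as np
import tensorflow as tf


@tf.function
def times_two(x):
    return x * 2.0


def test_times_two(maybe_run_functions_eagerly):
    # pytest runs this test twice: once with tf.function building a graph,
    # once with the "eager_mode" parameter forcing eager execution.
    result = times_two(tf.constant([1.0, 2.0]))
    np.testing.assert_allclose(result.numpy(), [2.0, 4.0])
```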
`conftest.py` must be present when running the tests, which means that we need to make bazel aware of it. This is why I linked it to `tensorflow_addons/utils`, which is present all the time in the tests.

Some reference:
https://docs.pytest.org/en/latest/fixture.html