
refact with Generator #905


Merged: 12 commits into tensorflow:master on May 9, 2020

Conversation

@fsx950223 (Member) commented Jan 17, 2020

1. Removing "--benchmarks=all" now skips only the benchmark tests; the previous behavior skipped all test cases.
2. Refactor rrelu with the Generator introduced in Random number generation (see the sketch below).
3. Add graph-mode test support.
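
For context, a minimal usage sketch of the refactored API (the keyword name gs follows the diffs quoted later in this thread and may have changed in the merged code; the input values are illustrative):

import tensorflow as tf
from tensorflow_addons.activations import rrelu

x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
# Draw the random alphas from a dedicated Generator instead of the global random state.
gen = tf.random.Generator.from_seed(1234)
y = rrelu(x, lower=0.125, upper=0.3333333333333333, training=True, gs=gen)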

@googlebot

All (the pull request submitter and all commit authors) CLAs are signed, but one or more commits were authored or co-authored by someone other than the pull request submitter.

We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that by leaving a comment that contains only @googlebot I consent. in this pull request.

Note to project maintainer: There may be cases where the author cannot leave a comment, or the comment is not properly detected as consent. In those cases, you can manually confirm consent of the commit author(s), and set the cla label to yes (if enabled on your project).

ℹ️ Googlers: Go here for more info.

@gabrieldemarmiesse (Member)

@googlebot I consent

@googlebot

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@gabrieldemarmiesse (Member)

I've merged master into your branch to update it and fixed any formatting issues/conflicts it might have had. If you need to make more modifications, please run git pull beforehand.

@gabrieldemarmiesse (Member) left a comment

Thank you for the pull request. I'm in favor of such a change; being able to use a random number generator is nice, but we may have to wait until it is no longer experimental.

@@ -26,6 +31,7 @@ def rrelu(
upper: Number = 0.3333333333333333,
training: Optional[bool] = None,
seed: Optional[int] = None,
gs: Optional[tf.random.experimental.Generator] = None,

Member

Do we have a policy about the use of experimental features in TensorFlow? Under what conditions should we accept their usage?

For example, I'd be OK with using experimental features in tests, but not really in user-facing code. It's acceptable if the build breaks because of an experimental feature, but it's not acceptable if the code users rely on breaks.

@fsx950223 Maybe we can wait until the Generator is not experimental anymore?

Member Author

Yes, I will wait until the stable API is released.


Member

The stable API is available in TF 2.2, which is what master builds against. Please merge master into this PR and it should be good to update.
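
For reference, a small sketch against the stable namespace mentioned above (the class was previously exposed as tf.random.experimental.Generator, as in the earlier diff; the values are illustrative):

import tensorflow as tf

gen = tf.random.Generator.from_seed(1234)
alpha = gen.uniform([3], minval=0.125, maxval=0.3333333333333333)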

@gabrieldemarmiesse gabrieldemarmiesse self-assigned this Mar 6, 2020
@gabrieldemarmiesse gabrieldemarmiesse added the blocked (pending something else's completion) label Mar 6, 2020
@seanpmorgan seanpmorgan removed the blocked (pending something else's completion) label Apr 26, 2020

@gabrieldemarmiesse (Member) left a comment

Thank you for the pull request and thank you for waiting :)

Comment on lines 30 to 32
training: bool = None,
seed: Number = None,
gs: tf.random.Generator = None,

Member

Please use Optional here. Having None as the default value doesn't make the type optional automatically. I think it should but it doesn't :(
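
A short sketch of the requested annotations (a hypothetical stand-alone stub; the real rrelu has more parameters):

from typing import Optional

import tensorflow as tf


def rrelu_stub(
    training: Optional[bool] = None,
    seed: Optional[int] = None,
    gs: Optional[tf.random.Generator] = None,
) -> None:
    # A bare `= None` default does not make the annotation optional for type
    # checkers; Optional[...] has to be spelled out explicitly.
    pass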

from tensorflow_addons.utils.types import TensorLike, Number

try:
    tf.no_gradient("StatefulUniform")

Member

I believe this deserves a comment, as it's not obvious at all what the purpose of this line is.
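
One possible shape for such a comment, as a sketch (the rationale is inferred from this thread, and it assumes TF raises KeyError on a duplicate registration):

import tensorflow as tf

try:
    # tf.random.Generator.uniform is backed by the StatefulUniform op, which may have
    # no gradient registered; declare it non-differentiable so gradient computations
    # treat it as a constant instead of failing with "no gradient defined".
    tf.no_gradient("StatefulUniform")
except KeyError:
    # Assumption: a (no-)gradient was already registered for this op in newer TF versions.
    pass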

@fsx950223 (Member Author) commented May 2, 2020

Yes, I'm curious why the op doesn't register the gradient.
Any better solution?

Member

I'm not sure I understand. You mean the tests will fail if this line is not used?

Member Author

I tested it with the latest TF version; it seems the op now has a registered gradient, so I removed the tf.no_gradient call.

@gabrieldemarmiesse (Member) left a comment

Thanks for the changes, we're nearly there. We need to make sure we don't surprise the user.

lower: Number = 0.125,
upper: Number = 0.3333333333333333,
training: Optional[bool] = None,
seed: Optional[int] = None,
gs: tf.random.Generator = None,

Member

Suggested change
gs: tf.random.Generator = None,
gs: Optional[tf.random.Generator] = None,

lower = tf.cast(lower, x.dtype)
upper = tf.cast(upper, x.dtype)
if gs is None:
    gs = tf.random.get_global_generator()

Member

Here we affect the global context, unlike before, where the global random context wasn't affected. This is potentially a breaking change, especially if it's used with reset_from_seed. Also, from the documentation:

This function will create the global generator the first time it is called, and the generator will be placed at the default device at that time, so one needs to be careful when this function is first called. Using a generator placed on a less-ideal device will incur performance regression.

So I believe that when gs is None, rrelu should keep the old behavior: we call tf.random.uniform and set the seed.

If the user provides their own generator, then we use that (gs.uniform(...)).

If the user provides both a seed and a generator, we throw a ValueError, with a nice error message of course, because combining them makes no sense and the user is likely mistaken.

Member Author

If the user provides a seed and a generator, we throw a ValueError because it makes no sense, the user is likely mistaken, with a nice error message of course.

I removed gs or seed, but it causes unstable results when I test the code.

@fsx950223 (Member Author)

cc @gabrieldemarmiesse

@gabrieldemarmiesse (Member) left a comment

Thanks a lot for the changes, it's really nice. Just a few minor changes and we can merge.

@@ -44,19 +44,28 @@ def rrelu(
training: `bool`, indicating whether the `call`
is meant for training or inference.
seed: `int`, this sets the operation-level seed.
gs: A `Generator`. Default value is tf.random.get_global_generator().

Member

We should change this line; it's not accurate anymore.

Also, we should change the argument name to random_number_generator or rng, whichever you prefer. It's more common to use such a name.

Comment on lines 66 to 67
raise ValueError("Either seed or gs should be specific")
alpha = gs.uniform(tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype)

Member

Suggested change
raise ValueError("Either seed or gs should be specific")
alpha = gs.uniform(tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype)
raise ValueError(
    "Either seed or gs should be specified. Not both at the same time."
)
alpha = gs.uniform(tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype)

Comment on lines 64 to 65
else:
    if seed is not None:

Member

In terms of control flow, I believe something flatter would be easier to read. Something like:

if gs is not None and seed is not None:
    raise ValueError(...)
elif gs is not None:
    gs.uniform(...)
else:
    tf.random.uniform(..., seed=seed)
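
A self-contained sketch of that flatter dispatch, using the argument names from this thread (illustrative, not the merged implementation; _rrelu_alpha is a hypothetical helper):

import tensorflow as tf


def _rrelu_alpha(x, lower, upper, seed=None, gs=None):
    """Sample the per-element alpha for rrelu, preferring an explicit generator."""
    if gs is not None and seed is not None:
        raise ValueError(
            "Either seed or gs should be specified. Not both at the same time."
        )
    if gs is not None:
        # Dedicated generator: reproducible without touching the global random state.
        return gs.uniform(tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype)
    # Old behavior: rely on the op-level seed and the global random state.
    return tf.random.uniform(
        tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype, seed=seed
    )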

@@ -23,43 +23,37 @@
SEED = 111111


def rrelu_wrapper(lower, upper, training):

Member

This function seems unused.

Comment on lines 40 to 46
training_results = {
    np.float16: [-0.3826, -0.165, 0, 1, 2],
    np.float32: [-0.282151192, -0.199812651, 0, 1, 2],
    np.float64: [-0.25720977, -0.1221586, 0, 1, 2],
}
gs = tf.random.Generator.from_seed(SEED)
result = rrelu(x, lower, upper, training=training, seed=None, gs=gs)

Member

We shouldn't modify an existing test when testing a new option. It should be a new standalone test.

In the new test, rather than hardcoding the results, we could check that the generator works as expected, meaning that it doesn't affect the global random state but is still reproducible. An example would be (pseudocode):

def test1():
    """gs should ensure reproducibility without affecting the global random state."""
    tf.set_seed(0)
    a = tf.random.uniform()
    gs = tf.random.Generator.from_seed(8448)
    rrelu_result = rrelu(..., gs=gs)

    tf.set_seed(0)
    gs = tf.random.Generator.from_seed(8448)
    assert rrelu(..., gs=gs) == rrelu_result
    assert a == tf.random.uniform()

What do you think?
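
A runnable version of that idea might look like the following sketch (it assumes the parameter is still called gs and that rrelu is importable from tensorflow_addons.activations; the input tensor and shapes are illustrative):

import numpy as np
import tensorflow as tf

from tensorflow_addons.activations import rrelu


def test_rrelu_generator_reproducible_and_isolated():
    x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])

    tf.random.set_seed(0)
    global_draw = tf.random.uniform([3])
    gen = tf.random.Generator.from_seed(8448)
    first = rrelu(x, training=True, gs=gen)

    tf.random.set_seed(0)
    gen = tf.random.Generator.from_seed(8448)
    second = rrelu(x, training=True, gs=gen)

    # Same generator seed -> identical rrelu outputs.
    np.testing.assert_allclose(first.numpy(), second.numpy())
    # rrelu with a generator must not consume the globally seeded sequence.
    np.testing.assert_allclose(global_draw.numpy(), tf.random.uniform([3]).numpy())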

Member Author

I reverted the old test cases.

@gabrieldemarmiesse (Member) left a comment

Thanks for the pull request! That should help users who don't want to affect the global random state.

@gabrieldemarmiesse gabrieldemarmiesse merged commit e244936 into tensorflow:master May 9, 2020
ashutosh1919 pushed a commit to ashutosh1919/addons that referenced this pull request Jul 12, 2020
* refact with Generator

* merge conflicts

* fix test cases

* fix license

* add type

* add return dtype

* add optional and remove gradient

* change gs is None behavior

* fix tests

* add optional

* revert old test cases

Co-authored-by: gabrieldemarmiesse <[email protected]>
jrruijli pushed a commit to jrruijli/addons that referenced this pull request Dec 23, 2020