tl.alphas and tl.alphas_like added, following tf.ones/tf.zeros and tf.zeros_like/tf.ones_like #580
In a research project, I needed to create tensors with the same shape as another tensor but filled with a custom value (not just 0 or 1), whether Boolean, Float, Integer, and so on.
For the record, this is something I originally developed for the TF repository (but they were not interested).
The function prototypes are the following and have been implemented in tensorlayer/array_ops.py:
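A sketch of what these prototypes might look like, mirroring tf.ones and tf.ones_like. The argument names and defaults beyond `alpha_value` are assumptions, and NumPy stands in for TF so the sketch is self-contained:

```python
import numpy as np

def alphas(shape, alpha_value, dtype=np.float32, name=None):
    """Create a tensor of `shape` filled with `alpha_value` (cf. tf.ones/tf.zeros)."""
    # NumPy stand-in; the real implementation builds a TF tensor.
    return np.full(shape, alpha_value, dtype=dtype)

def alphas_like(tensor, alpha_value, name=None, optimize=True):
    """Create a tensor shaped like `tensor`, filled with `alpha_value`
    (cf. tf.zeros_like/tf.ones_like)."""
    return np.full_like(tensor, alpha_value)
```

The pairing follows the existing TF API: `alphas` takes an explicit shape, while `alphas_like` infers shape and dtype from an existing tensor.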
The code is not created from scratch. It is highly inspired by tf.ones and tf.zeros for tl.alphas, and by tf.zeros_like and tf.ones_like for tl.alphas_like.
The code uses the latest implementation and has been designed to work with eager mode.
The number of modifications is relatively small, so I am fairly confident in the robustness of the new implementation (it is largely based on the existing one).
The idea is to reproduce and merge these functions while allowing any custom fill value in the tensor:

- tl.alphas merges tf.ones and tf.zeros
- tl.alphas_like merges tf.ones_like and tf.zeros_like

How does the API work?
My new functions take a parameter alpha_value and fill the tensor with this value. This allows me to run a script such as the following:
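For illustration, a hypothetical usage sketch; the exact call signature is an assumption, and `np.full_like` stands in for the TL op so the snippet is self-contained:

```python
import numpy as np

# Any input tensor, e.g. a batch of float features.
x = np.random.rand(8, 32).astype(np.float32)

# The real call would be something like: y = tl.alphas_like(x, alpha_value=0.5)
# NumPy stand-in with the same semantics:
y = np.full_like(x, 0.5)

print(y.shape)  # (8, 32) -- same shape as x, every element equal to 0.5
```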
Performance Optimization
Of course, the same thing could be done using plain TF commands:
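Presumably the TF-only route is along the lines of `tf.ones_like(x) * alpha_value`: build a ones tensor, then multiply, paying an extra elementwise operation. A NumPy sketch of the two approaches (the specific shapes and values here are illustrative assumptions):

```python
import numpy as np

x = np.random.rand(1000, 1000).astype(np.float32)
alpha_value = 0.5

# TF-only style: ones_like followed by a scalar multiply (two passes over memory).
y_two_ops = np.ones_like(x) * alpha_value

# alphas_like style: fill directly with the value (a single pass).
y_direct = np.full_like(x, alpha_value)

assert np.array_equal(y_two_ops, y_direct)  # identical results, fewer ops
```

The extra multiplication pass is the plausible source of the timing gap measured below.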
Each method was executed 5 times and the execution times averaged:

- The method using TF code only (shown above): [47.5s, 47.5s, 48.1s, 47.6s, 47.8s] => Average time: 47.7 secs
- The alphas_like method I implemented: [25.0s, 25.0s, 25.0s, 24.9s, 25.0s] => Average time: 25.0 secs

My method is nearly twice as fast!
Code used to produce these numbers:
This gives me the following results:
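As a rough, hypothetical analogue of such a benchmark (the original measured the TF ops; this NumPy version only illustrates the methodology, and absolute timings will differ):

```python
import timeit
import numpy as np

x = np.random.rand(2000, 2000).astype(np.float32)

def tf_style():
    # Build a ones tensor, then multiply by the fill value (two passes).
    return np.ones_like(x) * 0.5

def alphas_style():
    # Fill directly with the value (one pass).
    return np.full_like(x, 0.5)

n = 100
t_two_ops = timeit.timeit(tf_style, number=n)
t_direct = timeit.timeit(alphas_style, number=n)
print(f"ones*value: {t_two_ops:.3f}s, direct fill: {t_direct:.3f}s")
```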