
add a Float16UniformFill #11123

Closed
wants to merge 1 commit into from

Conversation

@hyuen (Contributor) commented Aug 31, 2018

Summary:
This adds an operator that fills a tensor with samples from a uniform(min, max) distribution.
The implementation uses the fp32 generator and converts the results to fp16.

If performance becomes an issue we could resort to intrinsics.

Differential Revision: D9598142
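For illustration only, here is a minimal NumPy sketch of the approach the summary describes (draw with the fp32 generator, then convert to fp16). The function name and signature are hypothetical; this is not the Caffe2 operator code itself.

```python
import numpy as np

def float16_uniform_fill(shape, min_val, max_val, rng=None):
    """Sketch of the fp32-then-cast approach from the PR summary.

    Hypothetical helper, not the actual Caffe2 Float16UniformFill operator.
    """
    rng = rng or np.random.default_rng()
    # Draw uniform(min, max) samples with the fp32 generator...
    samples_fp32 = rng.uniform(min_val, max_val, size=shape).astype(np.float32)
    # ...then convert the result to fp16.
    return samples_fp32.astype(np.float16)

# Example: fill a 2x3 tensor with values in [-1, 1).
print(float16_uniform_fill((2, 3), -1.0, 1.0))
```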

@zou3519 added the caffe2 label Aug 31, 2018
@hyuen (Contributor Author) commented Sep 1, 2018

@pytorchbot retest this please

Summary:
Pull Request resolved: #11123

This adds an operator that fills a tensor with samples from a uniform(min, max) distribution.
The implementation uses the fp32 generator and converts the results to fp16.

If performance becomes an issue we could resort to intrinsics.

Reviewed By: jspark1105, chocjy

Differential Revision: D9598142

fbshipit-source-id: 08fbcdea424e923928fd1a543efdba8ea68a22b8
@hyuen (Contributor Author) commented Sep 4, 2018

@pytorchbot retest please

@hyuen (Contributor Author) commented Sep 4, 2018

@pytorchbot retest this please

petrex added a commit to petrex/pytorch that referenced this pull request Sep 5, 2018
resolve conflict in data parallel model
* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Summary:
Pull Request resolved: pytorch#11123

This adds an operator that fills a tensor with samples from a uniform(min, max) distribution.
The implementation uses the fp32 generator and converts the results to fp16.

If performance becomes an issue we could resort to intrinsics.

Reviewed By: jspark1105, chocjy

Differential Revision: D9598142

fbshipit-source-id: 5aeab99acf7c3596fa6c33611d9d2c484f7c1145
@ezyang added the merged label Jun 26, 2019