
Prepare THNN/THCUNN for first class scalars. #10023


Closed
wants to merge 1 commit

Conversation

gchanan
Contributor

@gchanan gchanan commented Jul 30, 2018

I previously did some transformations, e.g. _nDimension,_dim -> nDimensionLegacyAll, nDimension -> nDimensionLegacyNoScalars.
But this didn't touch dim(), which needs to be updated to support scalars. Instead of doing an (ugly) move, I audited the call sites and updated the cases that could be size 1.
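
For readers new to these accessors: the three views differ only in how they report 0-dim (scalar) and empty tensors. Below is a minimal, self-contained sketch of those semantics as described in this PR; the Tensor struct and free functions are illustrative stand-ins, not the real TH/ATen API.

```cpp
// Minimal sketch of the three dimension views discussed in this PR.
// The Tensor struct and helper names are illustrative stand-ins.
#include <cassert>
#include <cstdint>
#include <vector>

struct Tensor {
  std::vector<int64_t> sizes;  // empty vector models a 0-dim scalar
  bool is_empty() const {      // "empty": any dimension of size 0
    for (int64_t s : sizes) if (s == 0) return true;
    return false;
  }
  int64_t dim() const { return static_cast<int64_t>(sizes.size()); }
};

// LegacyNoScalars view: a scalar is reported as 1-d with size 1.
int64_t nDimensionLegacyNoScalars(const Tensor& t) {
  return t.dim() == 0 ? 1 : t.dim();
}
int64_t sizeLegacyNoScalars(const Tensor& t, int64_t d) {
  return t.dim() == 0 ? 1 : t.sizes.at(d);
}

// LegacyAll view: additionally reports empty tensors as 0-d.
int64_t nDimensionLegacyAll(const Tensor& t) {
  if (t.is_empty()) return 0;
  return t.dim() == 0 ? 1 : t.dim();
}

int main() {
  Tensor scalar;  // no sizes: a 0-dim scalar
  assert(scalar.dim() == 0);                       // first-class scalar semantics
  assert(nDimensionLegacyNoScalars(scalar) == 1);  // what old THNN code saw
  assert(sizeLegacyNoScalars(scalar, 0) == 1);
  assert(nDimensionLegacyAll(scalar) == 1);
  return 0;
}
```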
Contributor

@facebook-github-bot facebook-github-bot left a comment


gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@gchanan
Contributor Author

gchanan commented Jul 30, 2018

@pytorchbot retest this please.

```diff
@@ -4,17 +4,17 @@

 static bool THNN_(checkInput)(THCTensor* t)
 {
-  return !t->is_empty() && THTensor_nDimensionLegacyAll(t) == 2 && t->size(1) == 3;
+  return !t->is_empty() && t->dim() == 2 && t->size(1) == 3;
 }

 static bool THNN_(checkSize1D)(THCTensor* t, int64_t size0)
 {
-  return !t->is_empty() && THTensor_nDimensionLegacyAll(t) == 1 && t->size(0) == size0;
+  return !t->is_empty() && THTensor_nDimensionLegacyNoScalars(t) == 1 && THTensor_sizeLegacyNoScalars(t, 0) == size0;
```

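The two hunks diverge for a reason the diff itself hints at: checkInput requires exactly 2 dimensions, which a 0-dim scalar can never satisfy, so plain dim() is safe there; checkSize1D is precisely one of the "could be size 1" call sites, since a scalar must keep passing wherever a size-1 1-D tensor passed before. A hypothetical mirror of the updated predicate, reusing the stand-in Tensor struct and legacy helpers from the sketch above:

```cpp
// Hypothetical mirror of the updated THNN_(checkSize1D); reuses the Tensor
// struct and legacy helpers from the earlier sketch, not the real TH API.
bool checkSize1D(const Tensor& t, int64_t size0) {
  // Under the LegacyNoScalars view a 0-dim scalar reports dim 1 and size 1,
  // so it still passes when size0 == 1; t.dim() == 1 alone would reject it.
  return !t.is_empty() && nDimensionLegacyNoScalars(t) == 1
      && sizeLegacyNoScalars(t, 0) == size0;
}
```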
@gchanan
Contributor Author

gchanan commented Jul 31, 2018

@pytorchbot retest this please.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 31, 2018
Summary:
I previously did some transformations, e.g. _nDimension,_dim -> nDimensionLegacyAll, nDimension -> nDimensionLegacyNoScalars.
But this didn't touch dim(), which needs to be updated to support scalars. Instead of doing an (ugly) move, I audited the call sites and updated the cases that could be size 1.
Pull Request resolved: pytorch/pytorch#10023

Differential Revision: D9068996

Pulled By: gchanan

fbshipit-source-id: c63820767dd1496e908a5a96c34968482193f2c5
gchanan added a commit to gchanan/pytorch that referenced this pull request Jul 31, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
@ezyang ezyang added the merged label Jun 26, 2019