
Conversation

Gamrix (Contributor) commented May 16, 2022

facebook-github-bot (Contributor) commented May 16, 2022

✅ No Failures (0 Pending)

As of commit cc2d25e (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.

Gamrix added a commit that referenced this pull request May 16, 2022
Gamrix requested a review from Krovatkin May 16, 2022 17:31
Gamrix (Contributor, Author) commented May 16, 2022

cc @miladm

Krovatkin (Contributor) left a comment

Could you please add a test case verifying that this approach works? Namely (a rough sketch follows the list):

  1. we can set a flag
  2. trace nonzero
  3. it gives us the correct upper bound
  4. when we move a lazy nonzero to cpu() it gives us the correct result.
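
A rough sketch of what such a test could look like (Python; the lazy device and torch._lazy usage follow the lazy tensor core tutorials, while the flag-setting step is left as a placeholder because the exact setter this PR exposes isn't quoted here):

import torch
import torch._lazy
import torch._lazy.ts_backend

torch._lazy.ts_backend.init()  # TorchScript-based lazy backend (availability assumed)

def test_nonzero_upper_bound():
    # 1. Enable the symbolic-shape flag. Placeholder only; substitute whatever
    #    setter this PR actually adds (hypothetical name, commented out):
    # torch._lazy.set_symbolic_shape_mode(True)

    # 2. Trace nonzero on a lazy tensor.
    x = torch.tensor([[0.0, 1.0], [2.0, 0.0]], device="lazy")
    out = torch.nonzero(x)

    # 3. The traced result is 2-D with one column per input dimension, and its
    #    row count is bounded above by x.numel() (here 4).
    assert out.dim() == 2

    # 4. Moving the lazy result to cpu() must materialize the exact eager answer.
    expected = torch.nonzero(x.cpu())
    assert torch.equal(out.cpu(), expected)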

}

std::vector<Shape> compute_shape_nonzero(const at::Tensor& self) {
  return compute_shape_nonzero(self, false);
Krovatkin (Contributor)

oh god, as_tuple is to emulate the PyTorch API, right?

Gamrix (Contributor, Author)

Yeah
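
For readers skimming the thread: the boolean mirrors torch.nonzero's two eager-mode calling conventions (plain PyTorch, no lazy tensors involved):

import torch

x = torch.tensor([[0, 3], [4, 0]])

# as_tuple=False (the default): a single 2-D LongTensor of shape
# [num_nonzero, ndim], one row per nonzero element.
print(torch.nonzero(x))
# tensor([[0, 1],
#         [1, 0]])

# as_tuple=True: a tuple of ndim 1-D index tensors instead.
print(torch.nonzero(x, as_tuple=True))
# (tensor([0, 1]), tensor([1, 0]))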

if (op == at::aten::nonzero) {
  // When symbolic shape mode is not enabled, the nonzero shape function
  // returns an incorrect result.
  return !symbolicShapeEnabled();
miladm (Collaborator) commented May 16, 2022

  • To better understand this condition: the intended behavior is to not fall back to CPU when symbolic shapes are enabled. Correct?

  • If yes to the above, are you planning to add other dynamic ops like unique and masked_select to the list? (I realize these ops may be out of scope for this PR.)

Gamrix (Contributor, Author)

That is correct, and yes, we will add at least unique to the list. I will need to look at masked_select some more.
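
For context (not something this PR changes), those ops share the property that makes nonzero awkward: their output sizes depend on tensor values rather than only on input shapes. A quick eager-mode illustration:

import torch

x = torch.tensor([1, 1, 2, 3, 3, 3])

# Output length depends on how many distinct values are present.
print(torch.unique(x).shape)                # torch.Size([3])

# Output length depends on how many mask entries are True.
print(torch.masked_select(x, x > 1).shape)  # torch.Size([4])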

for (auto dim_size : t.sizes()) {
  max_elements *= dim_size;
}
return {Shape(at::kLong, {max_elements, (int64_t)t.sizes().size()})};
miladm (Collaborator)

Please help me understand why we don't do something like this:

return {Shape(at::kLong, std::vector<int64_t>(t.sizes().begin(), t.sizes().end()))};

Gamrix (Contributor, Author) commented May 16, 2022

Those are not the same. torch.nonzero always returns a 2-D tensor; think of it as a list of indices of the nonzero values.

(int64_t)t.sizes().size() is an integer that holds the number of dimensions of the original tensor.
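
A small eager-mode example of the shapes involved (ordinary PyTorch; it just illustrates the numbers the upper-bound shape above is built from):

import torch

t = torch.tensor([[0.0, 1.0, 2.0],
                  [3.0, 0.0, 4.0]])

out = torch.nonzero(t)
print(out.shape)  # torch.Size([4, 2]): 4 nonzero elements, 2 dims in t

# The row count is data-dependent, but it can never exceed t.numel(), so
# [t.numel(), t.dim()] = [6, 2] is a valid static upper bound, which is what
# the quoted C++ computes via max_elements and t.sizes().size().
max_elements = t.numel()
print([max_elements, t.dim()])  # [6, 2]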

Comment on lines +111 to +112
for (auto& result_shape : result_shapes) {
  result_shape = result_shape.with_symbolic_dims(c10::nullopt);
Gamrix (Contributor, Author)

Fixed a bug where we would crash when there isn't a shape function.

Gamrix requested a review from Krovatkin May 17, 2022 00:28
Gamrix (Contributor, Author) commented May 17, 2022

@Krovatkin Added a test and fixed a bug.

Gamrix added a commit that referenced this pull request May 17, 2022
Krovatkin (Contributor) left a comment

:shipit:

Gamrix added a commit that referenced this pull request May 17, 2022
Gamrix (Contributor, Author) commented May 18, 2022

@pytorchmergebot merge this please

github-actions commented

Hey @Gamrix.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request May 20, 2022
Summary:
Pull Request resolved: #77572

Approved by: https://github.com/Krovatkin

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/73480bcbe09bf06e60431b945c1e049667de68dd

Reviewed By: seemethere

Differential Revision: D36494241

Pulled By: Gamrix

fbshipit-source-id: e761ca47ee6cd963f0bd3e7f6f7b3aa90044784d
facebook-github-bot deleted the gh/gamrix/61/head branch May 22, 2022 14:17