[batch-rule] householder_product #322
Conversation
@@ -160,6 +178,7 @@ TORCH_LIBRARY_IMPL(aten, FT_BATCHED_KEY, m) {
  VMAP_SUPPORT("mv", mv_batch_rule);
  VMAP_SUPPORT("mm", mm_batch_rule);
  m.impl("linear", linear_decomp);
  m.impl("orgqr", orgqr_decomp);
Without this we get:
RuntimeError: aten::orgqr hit the vmap fallback which is currently disabled
But orgqr is a composite operator 🤔.
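For reference, a minimal sketch of what a decomposition like orgqr_decomp can boil down to (an assumption, not necessarily the exact code in this diff): aten::orgqr is documented as an alias of torch.linalg.householder_product, so the decomposition can simply forward to the linalg op, whose batch rule this PR adds.

#include <ATen/ATen.h>

// Sketch only: forward orgqr to its linalg alias so the vmap fallback is never hit;
// the batched behavior then comes from the householder_product batch rule below.
at::Tensor orgqr_decomp(const at::Tensor& self, const at::Tensor& tau) {
  return at::linalg_householder_product(self, tau);
}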
Maybe you can add it to BatchRulesDecompositions.cpp.
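If it went into BatchRulesDecompositions.cpp instead, the registration would presumably reuse the OP_DECOMPOSE macro used elsewhere in that file. A sketch, assuming that macro handles orgqr's schema (the exact dispatch key and surrounding TORCH_LIBRARY_IMPL block are omitted):

// Inside that file's TORCH_LIBRARY_IMPL block (sketch, not the PR's code):
OP_DECOMPOSE(orgqr);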
@@ -151,6 +151,20 @@ Tensor linear_decomp(
  return result;
}

std::tuple<Tensor, c10::optional<int64_t>>
Can we use EXISTING_BDIM or EXISTING_BDIM_ALL_BOXED for this?
Unfortunately neither of them works.
EXISTING_BDIM seems to assume that only self has a bdim.
With EXISTING_BDIM_ALL_BOXED we get:
torch.linalg.householder_product: input.shape[-1] must be greater than or equal to tau.shape[-1]
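Hence the hand-written rule in this diff. A rough sketch of the approach, assuming the get_bdim_size2 / moveBatchDimToFront / ensure_has_bdim helpers from BatchRulesHelper.h and the usual using-declarations in the BatchRules*.cpp files (an illustration of the idea, not necessarily the PR's exact code):

std::tuple<Tensor, c10::optional<int64_t>> householder_product_batch_rule(
    const Tensor& input, c10::optional<int64_t> input_bdim,
    const Tensor& tau, c10::optional<int64_t> tau_bdim) {
  // Both input and tau may carry a vmap dim, which is why EXISTING_BDIM
  // (which only handles a bdim on self) is not enough here.
  auto batch_size = get_bdim_size2(input, input_bdim, tau, tau_bdim);
  auto input_ = moveBatchDimToFront(input, input_bdim);
  auto tau_ = moveBatchDimToFront(tau, tau_bdim);
  input_ = ensure_has_bdim(input_, input_bdim.has_value(), batch_size);
  tau_ = ensure_has_bdim(tau_, tau_bdim.has_value(), batch_size);
  // linalg_householder_product already accepts leading batch dims, so with the
  // vmap dim aligned at dim 0 on both tensors it can be called directly.
  return std::make_tuple(at::linalg_householder_product(input_, tau_), 0);
}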
@Chillee @kshitij12345 can we merge this?
@zou3519 I'm fine with merging it.
Original code written by @kshitij12345 in #322, this PR is just a rebase onto main
Merged in #972
Original code written by @kshitij12345 in pytorch/functorch#322, this PR is just a rebase onto main
Reference: #240