Add support for legacy tensor constructors in JIT #74785
Conversation
💊 CI failures summary and remediations, as of commit 7569cc3 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. It does not appear to be due to upstream breakages: step "Set up job" (action: 🔁 rerun).
Fix for https://github.com/facebookresearch/torchdynamo/issues/93. Because the constructors follow a non-standard input schema (variadic integers), they are handled specially in ir_emitter.
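For context, a minimal eager-mode sketch of why the schema is non-standard: the same constructor accepts either a variadic list of integer sizes or a single data sequence, so it cannot be matched against an ordinary fixed schema.

```python
import torch

# Variadic integer sizes: an uninitialized 2x3 int64 tensor.
a = torch.LongTensor(2, 3)

# A single sequence argument: a tensor built from the data instead.
b = torch.LongTensor([1, 2, 3])

print(a.shape, a.dtype)  # torch.Size([2, 3]) torch.int64
print(b)                 # tensor([1, 2, 3])
```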
Discussed offline
for (const auto& name : tensor_names) {
  if (obj.ptr() == py::module::import("torch").attr(name.first).ptr()) {
    return LegacyTensorConstructor::create(
        prim::LegacyTypedConstructor, name.second, at::kCPU);
This is my own lack of knowledge showing, but why do we assume that the Tensor will be on the CPU?
I should add a comment or just remove the argument. The legacy constructors are always instantiated on CPU; torch.LongTensor (CPU) is separate from torch.cuda.LongTensor (CUDA). I added the device argument without actually adding support for torch.cuda.LongTensor, so I can just remove it for now.
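To illustrate the eager-mode behavior described above (a minimal sketch; the CUDA branch is guarded since it requires a GPU):

```python
import torch

# Legacy typed constructors are CPU-backed; the CUDA variants live
# under torch.cuda.* and are not covered by this PR.
t = torch.LongTensor(3)
print(t.device)  # cpu

if torch.cuda.is_available():
    t_cuda = torch.cuda.LongTensor(3)
    print(t_cuda.device)  # cuda:0
```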
@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Pull Request resolved: #74785. Fix for https://github.com/facebookresearch/torchdynamo/issues/93. Because the constructors follow a non-standard input schema (variadic integers), they are handled specially in ir_emitter.
Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D35362762
Pulled By: eellison
fbshipit-source-id: 960badf08ba2ab0818af5fd331aff3542051250f
Hey @eellison.
Stack from ghstack:
Fix for https://github.com/facebookresearch/torchdynamo/issues/93
Because the constructors follow a non-standard input schema (variadic integers), they are handled specially in ir_emitter.
Differential Revision: D35362762
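A minimal sketch of what this change enables, assuming the emitter now recognizes the legacy constructors under torch.jit.script (the exact repro in the linked torchdynamo issue may differ):

```python
import torch

@torch.jit.script
def make_longs(n: int):
    # Before this PR, scripting a call to a legacy constructor such as
    # torch.LongTensor raised a compilation error; it is now special-cased
    # in ir_emitter.
    return torch.LongTensor(n)

print(make_longs(4).dtype)  # torch.int64
```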