Migrate away from CustomFuseGraph #3403
Conversation
torch_glow/src/FusingOptimizer.cpp
Outdated
// 1) Both are in-place ops
// 2) Consumer is in-place, producer !hasInputWriters
// 3) Producer is in-place, consumer !hasOutputWriters
REQ(aliasDb.moveAfterTopologicallyValid(consumer, producer));
It looks like this actually moves the consumer after the producer. Is there a reason we want to do that rather than just checking whether the move is possible with couldMoveAfterTopologically?
Guess there is not
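For reference, a minimal sketch of the non-mutating check suggested above, assuming the JIT AliasDb API and a REQ-style bail-out macro like the one used in the fuser pass; the helper name is hypothetical and this is not the code that landed:

// Sketch only: ask the AliasDb whether the consumer could legally be moved
// after the producer, without actually reordering the graph the way
// moveAfterTopologicallyValid does.
#include <torch/csrc/jit/passes/alias_analysis.h> // ir/alias_analysis.h in newer PyTorch

static bool fusionIsTopologicallyLegal(torch::jit::Node *consumer,
                                       torch::jit::Node *producer,
                                       torch::jit::AliasDb &aliasDb) {
  return aliasDb.couldMoveAfterTopologically(consumer, producer);
}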
migrating to a separate file now...
LGTM
@zrphercule has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
torch_glow/tests/utils.py
Outdated
accept_all_ops = False
if (expected_fused_ops == None):
    expected_fused_ops = []
    accept_all_ops = True
I don't think this is a good idea. The point of expected_fused_ops is to check that the things that should be getting fused are getting fused. Why do we need to have this wildcard?
It is because we want to use jitVsGlow not only here in the unit tests but also elsewhere, for example for testing xray (I actually have a local script testing the xray model using jitVsGlow). We can't easily list all the ops in a big model like xray.
My thinking is that this check is an extra: if we want to check it, good, we have it checked; otherwise we just don't check it.
Ok, well, I guess my two thoughts are:
- I'd like to make sure all the operator tests use this, so having operator checking on by default (as opposed to this, where it's basically off by default) would be preferred (maybe we can pass an extra bool flag to disable it?).
- Shouldn't we be checking the ops in the xray model anyway? We want to make sure we're running what we expect to be running.
I agree with you on both points. So our decision is:
- A separate by_default=False param to control whether all ops should be accepted.
- Once we have an official bigger-model unit test in our code base, it should also have a list of expected ops (and of course future operator unit tests should have this as well).
Any comments?
That sounds good to me
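To make the agreed design concrete, here is a minimal sketch of what the check in torch_glow/tests/utils.py could look like; check_fused_ops, the accept_all_ops flag, and the glow::FusionGroup kind string are assumptions for illustration, not the exact code in this PR:

# Sketch only: names below are assumed, not the code that landed.
GLOW_FUSION_GROUP = "glow::FusionGroup"  # assumed kind of the fused subgraph node

def check_fused_ops(graph, expected_fused_ops, accept_all_ops=False):
    """Assert that every op in expected_fused_ops ended up inside a Glow
    fusion group. accept_all_ops=True skips the check, e.g. for big models
    (like xray) where listing every op is impractical."""
    fused_ops = set()
    for node in graph.nodes():
        if node.kind() == GLOW_FUSION_GROUP:
            # The fused subgraph is stored as a graph attribute on the node.
            for sub_node in node.g("Subgraph").nodes():
                fused_ops.add(sub_node.kind())
    if accept_all_ops:
        return
    missing = [op for op in (expected_fused_ops or []) if op not in fused_ops]
    assert not missing, "Expected fused ops were not fused: {}".format(missing)

With this shape, operator tests stay strict by default (e.g. check_fused_ops(graph, ["aten::add"])), and a whole-model test like xray can pass accept_all_ops=True until an explicit op list is added.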
torch_glow/src/GlowFuser.h
Outdated
@@ -18,10 +18,17 @@
#define GLOW_TORCH_GLOW_SRC_FUSINGOPTIMIZER_H
Please change this too, since the file was moved.
Looking great @zrphercule! Just a couple of comments.
LGTM!
Summary: This is basically the Glow version of pytorch/tvm#72. It will no longer use PyTorch's CustomFuseGraph. Comments indicating the copied code will be added, and the lint fixed, once this is finished. Please don't give a detailed review until WIP is removed, but feel free to leave any big-scope opinions.

Pull Request resolved: pytorch#3403

Differential Revision: D16775646

fbshipit-source-id: 90873346feff60876602473b303a7883a1370b26
@zrphercule merged this pull request in 3787ca3.
This is basically the Glow version of pytorch/tvm#72.
It will no longer use PyTorch's CustomFuseGraph.
Comments indicating the copied code will be added, and the lint fixed, once this is finished.
Please don't give a detailed review until WIP is removed, but feel free to leave any big-scope opinions.