Migrate away from CustomFuseGraph #3403

Closed
zrphercule wants to merge 1 commit

Conversation

@zrphercule (Contributor) commented Aug 8, 2019

This is basically the Glow version of pytorch/tvm#72.
It will no longer use PyTorch's customFuseNode.

Will add comments indicating the copied code and fix the lint once finished.
Please don't give a detailed review until WIP is removed, but feel free to leave any big-picture opinions.

@zrphercule requested review from jackm321 and yinghai and removed the request for jackm321 on Aug 8, 2019 at 22:03
// 1) Both are in-place ops
// 2) Consumer is in-place, producer !hasInputWriters
// 3) Producer is in-place, consumer !hasOutputWriters
REQ(aliasDb.moveAfterTopologicallyValid(consumer, producer));
Contributor
It looks like this actually moves the consumer after the producer. Is there a reason we want to do the move here rather than just checking whether it's possible with couldMoveAfterTopologically?

Contributor Author
Guess there is not
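
For reference, a minimal sketch (not the PR's code) contrasting the two AliasDb calls discussed in this thread; the signatures and include paths are assumed from the 2019-era PyTorch JIT source layout named in the comments above.

```cpp
#include <torch/csrc/jit/ir.h>                    // Node, Graph (2019-era path, assumed)
#include <torch/csrc/jit/passes/alias_analysis.h> // AliasDb (2019-era path, assumed)

using torch::jit::AliasDb;
using torch::jit::Node;

// couldMoveAfterTopologically only answers "would the move be alias-safe?"
// and leaves the graph untouched, while moveAfterTopologicallyValid also
// performs the move when it is legal. Per the discussion above, the
// check-only variant would be enough for deciding fusibility here.
bool fusionIsAliasSafe(AliasDb& aliasDb, Node* consumer, Node* producer) {
  return aliasDb.couldMoveAfterTopologically(consumer, producer);
}
```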

@zrphercule changed the title from "[WIP] Migrate away from CustomFuseGraph" to "Migrate away from CustomFuseGraph" on Aug 12, 2019
@zrphercule (Contributor Author)
Migrating to a separate file now...

@yinghai (Contributor) left a comment
LGTM

@facebook-github-bot left a comment
@zrphercule has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

accept_all_ops = False
if expected_fused_ops is None:
    expected_fused_ops = []
    accept_all_ops = True
Contributor
I don't think this is a good idea. The point of expected_fused_ops is to check that the things that should be getting fused are getting fused. Why do we need to have this wildcard?

Contributor Author
It is because we want to use jitVsGlow not only here in the unit tests, but also in things like testing xray (I actually have a local script that tests the xray model using jitVsGlow). We can't easily list every op in a big model like xray.
My thinking is that this functionality is an extra: if we want to check it, great, we have it to check; otherwise we just don't check it.

Contributor
OK, well, I guess my two thoughts are:

  1. I'd like to make sure all the operator tests use this, so having operator checking on by default (as opposed to this, where it's basically off by default) would be preferred. Maybe we can pass an extra bool flag to disable it?
  2. Shouldn't we be checking the ops in the xray model anyway? We want to make sure we're running what we expect to be running.

@zrphercule (Contributor Author) commented Aug 13, 2019
I agree with you on both points. So our decision is:

  1. A separate by_default=False param to control whether all ops should be accepted.
  2. Once we have official bigger-model unit tests in our code base, we should also have a list of expected ops (of course, future operator unit tests should have this as well).

Any comments?

Contributor
That sounds good to me
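
To make the agreed-upon design concrete, here is a minimal sketch (not the PR's actual implementation) of how the fused-op check in jitVsGlow could behave. The expected_fused_ops argument comes from the discussion above; the helper name check_fused_ops, the accept_any_fused_op flag, and the graph-scanning details are illustrative assumptions.

```python
def check_fused_ops(found_ops, expected_fused_ops=None, accept_any_fused_op=False):
    """Verify that the ops we expect to be fused into the Glow subgraph were fused.

    found_ops: set of op kinds found inside the fusion group(s) of the traced graph.
    expected_fused_ops: op kinds the test expects to see fused.
    accept_any_fused_op: for large models (e.g. xray) where listing every op is
        impractical, skip the per-op check and only require that something was fused.
    """
    if accept_any_fused_op or expected_fused_ops is None:
        # Wildcard mode: only assert that fusion happened at all.
        assert found_ops, "expected at least one op to be fused"
        return
    missing = set(expected_fused_ops) - set(found_ops)
    assert not missing, "ops expected to be fused but were not: {}".format(sorted(missing))
```

A small operator test would then pass an explicit list, e.g. check_fused_ops(found_ops, ["aten::add"]), while a big-model test could set accept_any_fused_op=True until an expected-op list is curated.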

@@ -18,10 +18,17 @@
#define GLOW_TORCH_GLOW_SRC_FUSINGOPTIMIZER_H
Contributor
please change this too since the file was moved

@jackm321 (Contributor)
Looking great @zrphercule! Just a couple of comments

@jackm321 (Contributor) left a comment
LGTM!

Summary:
This is basically the Glow version of pytorch/tvm#72.
It will no longer use PyTorch's customFuseNode.

Will add comments indicating the copied code and fix the lint once finished.
Please don't give a detailed review until WIP is removed, but feel free to leave any big-picture opinions.
Pull Request resolved: pytorch#3403

Differential Revision: D16775646

fbshipit-source-id: 90873346feff60876602473b303a7883a1370b26
@facebook-github-bot
@zrphercule merged this pull request in 3787ca3.
