
Conversation


@greentfrapp greentfrapp commented Dec 8, 2020

  • InputOptimization no longer requires target_modules (inferred from loss_function)
  • Consolidate ImageTensor and CudaImageTensor
  • Update RedirectedReLU to only allow "wrong" gradients when grad tensor is zero
  • Make losses composable
  • Move optim tutorials to separate folder under tutorials/optim
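The RedirectedReLU change above can be sketched as a custom autograd function. This is a hedged illustration of the described behavior only; the class name and details here are illustrative, not the actual Captum implementation:

```python
import torch


class RedirectedReLU(torch.autograd.Function):
    """Illustrative sketch: forward is a normal ReLU, but backward only
    lets the "wrong" (unmasked) gradient through when the masked ReLU
    gradient is entirely zero, so dead units can still recover."""

    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        relu_grad = grad_output * (input > 0).to(grad_output.dtype)
        if relu_grad.abs().sum() == 0:
            # Every gradient is blocked by the ReLU mask: redirect the
            # raw gradient through so optimization does not stall.
            return grad_output
        return relu_grad
```

With an all-negative input the masked gradient is zero everywhere, so the raw gradient is redirected through; if any entry is positive, the normal ReLU gradient is used.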

- InputOptimization no longer requires target_modules (inferred from loss_function)
- Consolidate ImageTensor and CudaImageTensor
- Deprecate torch.irfft and torch.rfft
- Update RedirectedReLU to only allow "wrong" gradients when grad tensor is zero
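For the torch.irfft/torch.rfft deprecation noted in the list above, the replacement API lives in the torch.fft module (added in PyTorch 1.7; the old functions were later removed). A minimal round-trip sketch:

```python
import torch

x = torch.randn(4, 8)

# Old, deprecated API (for reference only):
#   freq = torch.rfft(x, signal_ndim=2)
#   out = torch.irfft(freq, signal_ndim=2, signal_sizes=x.shape)

# New torch.fft module, which returns complex tensors directly:
freq = torch.fft.rfftn(x, dim=(-2, -1))
out = torch.fft.irfftn(freq, s=x.shape, dim=(-2, -1))

assert torch.allclose(x, out, atol=1e-5)
```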
@facebook-github-bot
Contributor

Hi @greentfrapp!

Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@greentfrapp
Author

@NarineK this has some changes that overlap with @ProGamerGov's other PRs. You should merge his PRs first and I'll pull the latest version before we proceed to merge this.

@greentfrapp greentfrapp changed the title from "optim-wip: Fix objectives.py, images.py, RedirectedReLU etc." to "optim-wip: Fix objectives.py, images.py, RedirectedReLU etc. [do not merge]" Dec 9, 2020
@greentfrapp greentfrapp changed the title from "optim-wip: Fix objectives.py, images.py, RedirectedReLU etc. [do not merge]" to "optim-wip: Fix objectives.py, images.py, RedirectedReLU etc." Dec 9, 2020
@NarineK
Contributor

NarineK commented Dec 14, 2020

Thank you @greentfrapp for working on this PR. I can review and we can merge #543 first if there are overlapping changes.

@greentfrapp
Author

Yup @NarineK, I think you can merge #543 first as well, and I'll update this branch to fix any conflicts after that.

@NarineK
Contributor

NarineK commented Dec 25, 2020

@greentfrapp, sorry for the delay. We merged #543.

@greentfrapp
Author

@NarineK no worries! Let me update this branch and resolve the conflicts and we can look into merging this too! A belated Merry Christmas and Happy New Year btw!!!

@NarineK
Contributor

NarineK commented Dec 27, 2020

> @NarineK no worries! Let me update this branch and resolve the conflicts and we can look into merging this too! A belated Merry Christmas and Happy New Year btw!!!

Thank you, @greentfrapp! Happy New Year and Merry Christmas to you and @ProGamerGov too.

@ProGamerGov
Contributor

Thank you @NarineK! Merry Christmas and Happy New Year to you and @greentfrapp as well!

@ProGamerGov
Contributor

ProGamerGov commented Apr 16, 2021

@NarineK Are we going to keep the helper functions and classes located in tests/optim/helpers, seeing as they basically just replicate the PyTorch code in NumPy? Or do we want to have different tests? I think you mentioned earlier that you didn't want straight translations of PyTorch functions & classes in NumPy, as they could suffer from the same mistakes as the main implementation.

  • In tests/optim/helpers/numpy_common.py the ImageTestDataset class looks useful and should be kept, but image_cov_np and cov_matrix_to_klt_np are basically just NumPy copies of the corresponding Captum PyTorch functions.

  • tests/optim/helpers/numpy_image.py has a NumPy version of FFTImage that seems like it could be useful, but like the other translated code, it runs the risk of having the same logic errors as the PyTorch version.

  • All the classes and functions in tests/optim/helpers/numpy_transforms.py look like straight translations of the PyTorch-based captum.optim transforms.

  • The single weights_to_heatmap_2d function inside tests/optim/helpers/numpy_common.py is also a straight copy of the PyTorch version of the function.

@greentfrapp Any thoughts?
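To make the concern concrete, here is a hypothetical line-for-line NumPy mirror of a channel-covariance helper (the name image_cov_np matches the file listing above, but this body is illustrative, not the actual Captum code). Because the translation preserves the logic verbatim, any bug in the PyTorch original would be faithfully reproduced in the "reference":

```python
import numpy as np


def image_cov_np(image):
    """Channel covariance of a (C, H, W) image, mirroring a hypothetical
    PyTorch helper line for line (only torch calls swapped for np)."""
    flat = image.reshape(image.shape[0], -1)        # flatten each channel
    flat = flat - flat.mean(axis=1, keepdims=True)  # center per channel
    return flat @ flat.T / (flat.shape[1] - 1)      # (C, C) covariance


img = np.arange(24.0).reshape(2, 3, 4)
# Agrees with NumPy's own covariance here, but it would also "agree"
# with any logic error shared with a mirrored PyTorch implementation:
assert np.allclose(image_cov_np(img), np.cov(img.reshape(2, -1)))
```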

@NarineK
Contributor

NarineK commented Apr 17, 2021

> @NarineK Are we going to keep the helper functions and classes located in tests/optim/helpers, seeing as they basically just replicate the PyTorch code in NumPy? Or do we want to have different tests? I think you mentioned earlier that you didn't want straight translations of PyTorch functions & classes in NumPy, as they could suffer from the same mistakes as the main implementation.
>
>   • In tests/optim/helpers/numpy_common.py the ImageTestDataset class looks useful and should be kept, but image_cov_np and cov_matrix_to_klt_np are basically just NumPy copies of the corresponding Captum PyTorch functions.
>   • tests/optim/helpers/numpy_image.py has a NumPy version of FFTImage that seems like it could be useful, but like the other translated code, it runs the risk of having the same logic errors as the PyTorch version.
>   • All the classes and functions in tests/optim/helpers/numpy_transforms.py look like straight translations of the PyTorch-based captum.optim transforms.
>   • The single weights_to_heatmap_2d function inside tests/optim/helpers/numpy_common.py is also a straight copy of the PyTorch version of the function.
>
> @greentfrapp Any thoughts?


@ProGamerGov, yes, that's right! NumPy implementations can suffer from the same bugs that the PyTorch implementations do. I think test cases that validate the correctness of those functions against hand-calculated expected values would be best. Calculating values by hand can be time-consuming, though, so we can do it where feasible rather than for every detail. I see that you already have other tests apart from the NumPy tests for those classes.

It's fine to leave those NumPy tests there for now since you already wrote them. Do we have other tests for weights_to_heatmap_2d besides the ones in numpy_common.py?
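The hand-calculated style suggested above might look like this: assert against inputs small enough to verify by eye, rather than against a NumPy re-implementation (the function here is illustrative, not an actual Captum helper):

```python
import numpy as np


def channel_mean(image):
    """Per-channel mean of a (C, H, W) image (illustrative helper)."""
    return image.reshape(image.shape[0], -1).mean(axis=1)


image = np.array([[[1.0, 2.0], [3.0, 4.0]],     # channel 0: mean is 2.5
                  [[0.0, 0.0], [10.0, 10.0]]])  # channel 1: mean is 5.0
np.testing.assert_allclose(channel_mean(image), [2.5, 5.0])
```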

@ProGamerGov
Contributor

ProGamerGov commented Apr 17, 2021

@NarineK The NumPy versions of the functions and classes were extremely easy to make, as I basically just replaced instances of torch with np, so I didn't put a whole lot of effort into creating them (with the exception of FFTImage, since NumPy handles .real and complex arrays differently). So I don't have any issue with removing the ones that aren't very useful!

The NumPy FFTImage class lets us verify some harder-to-test behavior of the FFTImage class, so I think it's useful enough to keep. But we could add more tests for specific values where applicable.

Both weights_to_heatmap_2d tests use the NumPy version of the code to validate the results. They also both test with a tensor of size (5, 4) (to cover all possible colors), so we could simply print the result and paste it into a test for comparison.
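The "print once, paste into the test" approach described above can be sketched with an illustrative scaling function (a hypothetical stand-in, not the real weights_to_heatmap_2d):

```python
import numpy as np


def scale_weights(w):
    """Illustrative stand-in: normalize weights by their max magnitude."""
    return w / np.abs(w).max()


w = np.array([[2.0, -4.0], [1.0, 0.0]])
# Expected values pasted from a one-time print of scale_weights(w),
# rather than recomputed with a mirrored NumPy implementation:
expected = np.array([[0.5, -1.0], [0.25, 0.0]])
np.testing.assert_allclose(scale_weights(w), expected)
```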

@NarineK
Contributor

NarineK commented Apr 18, 2021

> @NarineK The NumPy versions of the functions and classes were extremely easy to make, as I basically just replaced instances of torch with np, so I didn't put a whole lot of effort into creating them (with the exception of FFTImage, since NumPy handles .real and complex arrays differently). So I don't have any issue with removing the ones that aren't very useful!
>
> The NumPy FFTImage class lets us verify some harder-to-test behavior of the FFTImage class, so I think it's useful enough to keep. But we could add more tests for specific values where applicable.
>
> Both weights_to_heatmap_2d tests use the NumPy version of the code to validate the results. They also both test with a tensor of size (5, 4) (to cover all possible colors), so we could simply print the result and paste it into a test for comparison.

Sounds good! In that case we can remove those redundant NumPy implementations, @ProGamerGov.
This one looks like it tests with only two asserts. Do we need it?
https://github.com/pytorch/captum/blob/optim-wip/tests/optim/helpers/numpy_common.py#L6

@ProGamerGov
Contributor

@NarineK No, I don't think we need the weights_to_heatmap_2d function in numpy_common.py, as the behavior can easily be tested with exact values.

ProGamerGov referenced this pull request in ProGamerGov/captum Apr 19, 2021
* Changed `optimviz` to `opt`.
* Clarification that we are using ReLU layers and not the previous conv output.
* Working download links for class activation atlas samples!
* Add umap-learn tutorial requirement to `setup.py`.
@ProGamerGov
Contributor

@greentfrapp I think the PR is ready to be merged once the final review comments I left have been resolved? Or is there work still in progress that hasn't been pushed or has been moved to future PRs?

Once this PR gets merged, it should be possible to start concurrently developing and merging new PRs to optim-wip branch again!

@greentfrapp
Author

@NarineK I think this is more or less ready for merging unless I've missed something.

Although the test_py36_pip_torch_1_3 test seems to be timing out and I'm not sure why.

@NarineK
Contributor

NarineK commented Apr 26, 2021

> @NarineK I think this is more or less ready for merging unless I've missed something.
>
> Although the test_py36_pip_torch_1_3 test seems to be timing out and I'm not sure why.

Thank you very much for working on this PR, @greentfrapp! I'll merge it. It looks like test_py36_pip_torch_1_3 is passing now.
@ProGamerGov thank you very much for the review comments as well.

@NarineK NarineK merged commit 6e3eb60 into meta-pytorch:optim-wip Apr 26, 2021
@ProGamerGov
Contributor

@NarineK It looks like this PR broke the Inception5h model download link, and now the tests & notebooks are failing. I made a quick fix PR for it: #656
