optim-wip: Fix objectives.py, images.py, RedirectedReLU etc. #552
Conversation
- `InputOptimization` no longer requires `target_modules` (it is now inferred from `loss_function`)
- Consolidate `ImageTensor` and `CudaImageTensor`
- Deprecate `torch.irfft` and `torch.rfft`
- Update `RedirectedReLU` to only allow "wrong" gradients when the grad tensor is zero (see the sketch below)
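For context on the `RedirectedReLU` change, here is a minimal sketch of the idea as a custom autograd function; the class name and the exact form of the zero check (whole tensor versus per element) are assumptions and may differ from the actual captum.optim implementation:

```python
import torch


class RedirectedReLU(torch.autograd.Function):
    """Sketch only: a ReLU whose backward pass lets "wrong" gradients
    through when the true ReLU gradient is zero, so input optimization
    does not stall on dead units."""

    @staticmethod
    def forward(ctx, input: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        (input,) = ctx.saved_tensors
        # Standard ReLU gradient: zero wherever the forward input was negative.
        relu_grad = grad_output.clone()
        relu_grad[input < 0] = 0
        if not (relu_grad != 0).any():
            # The true gradient is entirely zero, so optimization would stall;
            # allow the "wrong" (unmasked) gradient to flow through instead.
            return grad_output.clone()
        return relu_grad
```

The `torch.rfft` / `torch.irfft` deprecation mentioned above is typically handled by moving to the `torch.fft` module (e.g. `torch.fft.rfftn` and `torch.fft.irfftn`); the exact replacement used in this PR is not shown here.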
Hi @greentfrapp! Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file. In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
@NarineK this has some changes that overlap with @ProGamerGov's other PRs. You should merge his PRs first and I'll pull the latest version before we proceed to merge this.
Thank you @greentfrapp for working on this PR. I can review and we can merge #543 first if there are overlapping changes.
@greentfrapp, sorry for the delay. We merged #543.
@NarineK no worries! Let me update this branch and resolve the conflicts, and then we can look into merging this too! A belated Merry Christmas and Happy New Year btw!!!
Thank you, @greentfrapp! Happy New Year and Merry Christmas to you and @ProGamerGov too.
Thank you @NarineK! Merry Christmas and Happy New Year to you and @greentfrapp as well!
@NarineK Are we going to keep the helper functions and classes located in the NumPy-based test helpers? @greentfrapp Any thoughts?
@ProGamerGov, yes, that's right! NumPy implementations can suffer from the same bugs that PyTorch implementations do. I think test cases that validate the correctness of those functions against hand-calculated expected values would be best. I assume that calculating by hand could be time consuming, so we can do that as much as possible; it would be too time consuming to test every detail. I see that you already have other tests apart from the NumPy tests for those classes. It's fine to leave those NumPy tests there for now since you already wrote them. Do we have other tests for …?
@NarineK The NumPy versions of the functions and classes were extremely easy to make, as I basically just replaced the torch operations with their NumPy equivalents. The … Both of the two …
Sounds good! In that case we can remove those redundant implementations in numpy, @ProGamerGov. |
@NarineK No, I don't think that we need the …
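As a rough illustration of the hand-calculated testing approach discussed above, here is a sketch of a test that checks a helper against values worked out by hand; `center_crop` is a hypothetical stand-in, not the actual captum.optim helper under discussion:

```python
import torch


def center_crop(x: torch.Tensor, size: int) -> torch.Tensor:
    # Hypothetical helper used only to illustrate the testing style.
    h, w = x.shape[-2:]
    top, left = (h - size) // 2, (w - size) // 2
    return x[..., top : top + size, left : left + size]


def test_center_crop_hand_calculated() -> None:
    # A 1x1x4x4 input whose 2x2 center crop is easy to verify by hand.
    x = torch.arange(16.0).reshape(1, 1, 4, 4)
    expected = torch.tensor([[[[5.0, 6.0], [9.0, 10.0]]]])
    assert torch.equal(center_crop(x, 2), expected)
```

Testing against fixed, hand-computed tensors avoids re-implementing the same logic in NumPy, which is what makes the NumPy copies redundant as a correctness check.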
* Changed `optimviz` to `opt`.
* Clarification that we are using ReLU layers and not the previous conv output.
* Working download links for class activation atlas samples!
* Add umap-learn tutorial requirement to `setup.py`.
@greentfrapp I think the PR is ready to be merged once those final review comments I left have been resolved? Or is there stuff that's still being worked on that hasn't been pushed to future PRs? Once this PR gets merged, it should be possible to start concurrently developing and merging new PRs to `optim-wip`.
@NarineK I think this is more or less ready for merging unless I've missed something. Although the …
Thank you very much for working on this PR, @greentfrapp! I'll merge it. It looks like …