Preliminary PR for optim transforms and model #500
Conversation
Thank you for working on this PR @ProGamerGov! I left a couple of minor comments.
In terms of PIL, we might need to put it in a try-except block and ask users to install it in case it is not installed. I added an example.
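For reference, a minimal sketch of the kind of guarded import being suggested; the exact error message wording is illustrative, not taken from the PR:

```python
try:
    from PIL import Image  # Pillow is an optional dependency here
except ImportError:
    raise ImportError(
        "The Pillow (PIL) library is required for this functionality. "
        "Please install it, e.g. with `pip install Pillow`, and try again."
    )
```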
@NarineK If there's nothing else that needs to be changed currently, then I think this pull request should be ready to merge?
Thank you for making the changes @ProGamerGov! Yes, we can merge this PR. Last time the system didn't allow SK to merge; I think they only allow the owners or employees to merge. If it doesn't allow you to merge it, I can press the button.
@NarineK Yeah, it looks like only those with write access can merge PRs, so you'll have to merge it for me.
* Didn't notice this up until now as it's not currently used.
Merged, thank you @ProGamerGov!
**New Features:**

* The `vis.py` script now differentiates between direction visualization and DeepDream with the new `-layer_vis` parameter. The new parameter has two options, either `deepdream` or `direction`. The default is `deepdream`, and `direction` will result in the old behavior before this update. This parameter only works when no `-channel` value is specified.

**Improvements:**

* Improved random scaling based on the affine grid matrices that I learned about for: meta-pytorch/captum#500
* Improvements to tensor normalization.
* Center neuron extraction in the `vis.py` script now works for layer targets without specifying channels, though I'm not sure how useful this change will be.
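Random scaling via affine grid matrices can be done along the following lines; this is a minimal PyTorch sketch with a made-up function name and scale values, not the code from either repository:

```python
import torch
import torch.nn.functional as F

def random_scale(x: torch.Tensor, scales=(0.9, 0.95, 1.0, 1.05, 1.1)) -> torch.Tensor:
    """Randomly rescale an NCHW image batch using an affine sampling grid."""
    scale = scales[int(torch.randint(len(scales), (1,)).item())]
    # 2x3 affine matrix that zooms by `scale` around the image center.
    # affine_grid maps output coordinates to input coordinates, hence 1/scale.
    theta = torch.tensor(
        [[1.0 / scale, 0.0, 0.0],
         [0.0, 1.0 / scale, 0.0]],
        device=x.device, dtype=x.dtype,
    ).repeat(x.size(0), 1, 1)
    grid = F.affine_grid(theta, list(x.size()), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)
```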
This preliminary PR should make it easier for @greentfrapp to start testing with the Inception V1 model. It also removes some unnecessary imports and implements changes from our previous discussions.
Changes made:
Removed Kornia imports.
Removed ImageIO import.
Replaced some NumPy code with PyTorch code.
Removed Matplotlib code from image saving function.
Uploaded a new model to replace the old one. The new model has RGB to BGR conversion built into its transform inputs option, so we'll probably have to figure out normalization and RGB to BGR conversions outside of the model in the future.
Added tutorial notebook for the InceptionV1 / GoogleNet / Inception5h model, based on the Torchvision notebook.
Added DeepDream objective.
Added L1 objective. It probably doesn't work correctly, as the current objective system always takes the negative mean of the loss function. The most common use case would be using the L1 objective as a penalty.
Added L2 objective.
Added Total Variation objective (a rough sketch of these penalty objectives appears after this list).
Added improved Diversity objective. There's now no need for `F.normalize` or flattening the input (see the second sketch after this list).
Added ability to set the learning rate.
The sigmoid function in NaturalImage can now be replaced with other functions for testing (in my own experiments, I sometimes found `torch.clamp` produced better results).
Fixed a bug with Python 3.6 so that the code works properly on Google Colab.
Fixed a bug with PyTorch 1.7.0. The fix works for earlier versions of PyTorch as well!
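As referenced in the items above, here is a hedged sketch of what these objectives commonly look like for an NCHW tensor; these are standard formulations, not necessarily the exact code merged in this PR:

```python
import torch

def deepdream(activations: torch.Tensor) -> torch.Tensor:
    # The usual DeepDream term: mean of the squared activations.
    return (activations ** 2).mean()

def l1_penalty(x: torch.Tensor, constant: float = 0.0) -> torch.Tensor:
    # Mean absolute deviation from a constant (0 by default).
    return torch.abs(x - constant).mean()

def l2_penalty(x: torch.Tensor, epsilon: float = 1e-6) -> torch.Tensor:
    # Root of the mean squared value, with epsilon for numerical stability.
    return torch.sqrt((x ** 2).mean() + epsilon)

def total_variation(x: torch.Tensor) -> torch.Tensor:
    # Mean absolute difference between neighboring pixels along H and W.
    dh = torch.abs(x[..., 1:, :] - x[..., :-1, :]).mean()
    dw = torch.abs(x[..., :, 1:] - x[..., :, :-1]).mean()
    return dh + dw
```

Because the objective system described above always takes the negative mean of the loss, penalty terms like these would usually be combined with a negative weight.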
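And one plausible way to write a diversity term without explicit `F.normalize` or flattening, leaning on `torch.nn.functional.cosine_similarity` (which normalizes internally); this is an assumption-laden sketch, not the PR's implementation:

```python
import torch
import torch.nn.functional as F

def diversity(activations: torch.Tensor) -> torch.Tensor:
    """Average pairwise similarity between batch items (NCHW activations).

    Returning the mean similarity lets an optimization loop that negates
    objectives push the batch items apart, encouraging diverse images.
    """
    batch = activations.size(0)
    loss = activations.new_zeros(())
    for i in range(batch):
        for j in range(i + 1, batch):
            # cosine_similarity over the channel dimension handles the
            # normalization that previously required F.normalize.
            loss = loss + F.cosine_similarity(
                activations[i].unsqueeze(0), activations[j].unsqueeze(0), dim=1
            ).mean()
    return loss / max(batch, 1)
```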