
Conversation

@ProGamerGov (Contributor) commented Nov 30, 2020

Just a quick update that fixes a few bugs, adds SharedImage (lowres_tensor from Lucid / Lucent), two new transforms for normalization, and weight visualization tools.

The two new transforms are extremely simple and could be done in other ways, such as with torchvision's Lambda transform, but having them readily accessible should make things easier for Captum's users. In the future we may be able to expand their functionality. Both are based on discussions from the project doc about how to support models with different normalization requirements.
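For context, a minimal sketch of what the two transforms do, written here as plain torch modules (the class names match the transforms, but the default multiplier and exact API are illustrative, not Captum's):

```python
import torch
import torch.nn as nn

class ScaleInputRange(nn.Module):
    """Sketch: rescale inputs for models that don't expect the [0, 1] range."""
    def __init__(self, multiplier: float = 255.0) -> None:
        super().__init__()
        self.multiplier = multiplier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.multiplier  # e.g. map [0, 1] inputs to [0, 255]

class RGBToBGR(nn.Module):
    """Sketch: reverse the channel dimension of NCHW / CHW input."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.flip(x, dims=[-3])

x = torch.rand(2, 3, 8, 8)
y = RGBToBGR()(ScaleInputRange()(x))
```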

  • Added the SharedImage parameterization, based on Lucid / Lucent's lowres_tensor. I've included a citation for the paper / article it comes from. I've tested SharedImage pretty extensively and it works correctly.

  • Added the ScaleInputRange transform for use with models whose input range is something other than [0, 1].

  • Added the RGBToBGR transform for converting RGB inputs to BGR.

  • Added tests for SharedImage, ScaleInputRange, and RGBToBGR.

  • Made some fixes to alpha channel support and added additional asserts.

  • Fixed the NumPy FFTImage so that it outputs almost exactly the same values as the PyTorch version.

  • Added preliminary tools for circuit research. The code is based on a research article that is set to be released in the next few days.

  • Added the n_channels_to_rgb function, based on collapse_channels from Lucid / Lucent.

  • Added a tool for weight channel dimensionality reduction.

  • Added a tutorial for weight visualization. More complex tutorial steps will be implemented in a future PR after optim-wip: Fix objectives.py, images.py, RedirectedReLU etc. (#552) is merged, because #552 is waiting on this PR.

  • Added a tutorial for creating custom transforms, loss functions, and input parameterizations.

  • Fixed some mistakes in the InceptionV1 model class.

  • The use of the tqdm library is now optional.
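As a rough illustration of what n_channels_to_rgb is for (the real function uses Lucid's hue-based channel mapping; the random color projection below is just a stand-in):

```python
import torch

def n_channels_to_rgb_sketch(acts: torch.Tensor) -> torch.Tensor:
    """Collapse an (N, C, H, W) tensor with C > 3 channels down to 3 RGB channels.

    Illustrative only: each channel gets a fixed random color and the colored
    channels are summed, rather than Lucid's hue-warped mapping.
    """
    n, c, h, w = acts.shape
    colors = torch.rand(c, 3).softmax(dim=-1)  # one RGB color per channel
    return torch.einsum("nchw,cd->ndhw", acts, colors)

rgb = n_channels_to_rgb_sketch(torch.rand(1, 16, 8, 8))
```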

The usage of SharedImage does seem a bit awkward right now, but it's written in such a way that we can easily change it in the future.

```python
image = optimviz.images.NaturalImage(...)
shapes = ...  # a shape tuple, or a tuple of shape tuples, each of length 2-4: (N, C, H, W), (C, H, W), or (H, W)
image.parameterization = optimviz.images.SharedImage(
    shapes=shapes, parameterization=image.parameterization
)
```
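For context, the lowres_tensor idea behind SharedImage can be sketched roughly like this (the function name, sizes, and details are illustrative, not Captum's exact implementation):

```python
import torch
import torch.nn.functional as F

# Sketch: the output image is the sum of several learnable tensors stored at
# lower resolutions, each upsampled to the target size before being combined.
def shared_image_forward(params, size=(224, 224)):
    return sum(
        F.interpolate(p, size=size, mode="bilinear", align_corners=False)
        for p in params
    )

# Hypothetical shared parameterization: three resolutions of the same image.
params = [torch.randn(1, 3, s, s, requires_grad=True) for s in (28, 56, 112)]
img = shared_image_forward(params)
```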

@ProGamerGov changed the title from "Optim wip - Alpha fix" to "Optim wip - Alpha fix & SharedImage" on Nov 30, 2020
@ProGamerGov changed the title from "Optim wip - Alpha fix & SharedImage" to "Optim wip - Alpha channel support fixes & SharedImage" on Nov 30, 2020
@ProGamerGov (Contributor, Author) commented Nov 30, 2020

@NarineK This is a relatively small update, and it's ready for review / merging as I'm not adding or changing anything else for this PR.

@ProGamerGov (Contributor, Author) commented Dec 11, 2020

@NarineK I know I said that I wasn't adding anything else to this PR, but I finished setting up / adding the channel reducer and expanded weights code sooner than expected! I don't plan to add anything else now that I've added those.

@NarineK (Contributor) commented Dec 22, 2020

> @NarineK I've updated the tutorial and function descriptions based on your comments, but I haven't added a note about the torchvision version yet. I also moved tensor_heatmap from the weight visualization notebook to circuits.py and wrote tests for it, because I think that it'll be more useful that way.

Sounds good! Thank you, @ProGamerGov!
I was looking into the structure of the _utils package and thought that we could move image-related things into a separate package.

I was looking into the codebase, and it looks like there is another images.py in _utils which contains only one function. I was wondering if it might make sense to create an image folder under _utils and have _utils/image/dataset.py, _utils/image/common.py, and _utils/image/reducer.py. (It looks like reducer contains the nchannels_to_rgb code, which is image-specific; we can move it into image and generalize it later.)
The content of _utils/images.py can be moved to _utils/image/common.py.

The tensor_heatmap function is also image-specific; I'd move it to _utils/image/common.py.

We might also consider moving circuits.py under _utils/image/ because it has image flavor too.
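As I read it, the layout being proposed here would look roughly like this (paths are tentative, and the circuits.py placement is still under discussion):

```
_utils/
├── image/
│   ├── common.py    # contents of the old _utils/images.py, plus tensor_heatmap
│   ├── dataset.py
│   └── reducer.py   # ChannelReducer, nchannels_to_rgb
└── circuits.py      # possibly moved under image/ as well
```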

@ProGamerGov (Contributor, Author) commented Dec 22, 2020

@NarineK ChannelReducer should work with any type of tensor input, I think, so we can leave it outside of _utils/image, but I'll move circuits.py into _utils/image until we are able to test it with non-image models.
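For context, a self-contained sketch of what channel dimensionality reduction does here, using a tiny NMF written with multiplicative updates (the real ChannelReducer wraps sklearn-style decompositions; the function name and details below are illustrative):

```python
import numpy as np

def reduce_channels(acts: np.ndarray, n_groups: int = 3, iters: int = 100) -> np.ndarray:
    """Factor a nonnegative (C, H, W) tensor into n_groups spatial maps via NMF."""
    c, h, w = acts.shape
    V = acts.reshape(c, h * w)                 # channels x spatial positions
    rng = np.random.default_rng(0)
    W = rng.random((c, n_groups))              # channel -> group loadings
    H = rng.random((n_groups, h * w))          # group spatial maps
    eps = 1e-9
    for _ in range(iters):                     # standard multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return H.reshape(n_groups, h, w)           # one spatial map per group

out = reduce_channels(np.abs(np.random.default_rng(1).normal(size=(8, 4, 4))))
```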

@NarineK (Contributor) left a comment:

Thank you for working on this PR, @ProGamerGov !
Posting comments about Channel Reducer ...



```python
def nchannels_to_rgb(x: torch.Tensor, warp: bool = True) -> torch.Tensor:
    """
```
@NarineK (Contributor) commented:
Could this also be categorized as a transformation? Could we potentially represent it as an image transformation class, similar to the other transforms?

@ProGamerGov (Contributor, Author) replied:

@NarineK Yeah, but it'll be a lot easier to use as a function in some cases. So maybe I can create a wrapper class for transform.py? Should I leave the function in utils/common.py, or should I also move it to transform.py?

@NarineK (Contributor) replied:

I see - if we have it as a transform, how cumbersome will it be to use? If we need a wrapper, we can keep the wrapper in _utils/image/common.py and the transformation piece in transform.py (if it is possible to represent it as a transformation)?

@ProGamerGov (Contributor, Author) commented Dec 23, 2020:

The Lucid equivalent of nchannels_to_rgb is built into their show() function, but it's sometimes called separately, and it's often used for displaying weights. If we make it a class, users will have to use two lines of code instead of one to replicate that behavior. I also don't think Lucid ever uses it as a transform for optimizing model inputs, but I don't see any issue with making that possible in Captum.

@NarineK (Contributor) replied:

Sounds good, we can have a one-line wrapper function in common.py to perform that transformation.

@ProGamerGov (Contributor, Author) commented Dec 23, 2020:

I know PyTorch has some of its layer classes as wrappers for their functional equivalents, so I think it might make more sense to have the class call the function rather than the other way around. For collect_activations, the function had to call the class so that it could interface properly with the functions in output_hook.py.
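The pattern being described (the class wrapping the function, mirroring how torch.nn.ReLU wraps torch.nn.functional.relu) could look like this; the NChannelsToRGB class and the placeholder function body are hypothetical:

```python
import torch
import torch.nn as nn

def nchannels_to_rgb(x: torch.Tensor, warp: bool = True) -> torch.Tensor:
    # Placeholder body for illustration; the real function collapses
    # C > 3 channels down to RGB rather than just slicing.
    return x[..., :3, :, :]

class NChannelsToRGB(nn.Module):
    """Hypothetical transform class that simply forwards to the function."""
    def __init__(self, warp: bool = True) -> None:
        super().__init__()
        self.warp = warp

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nchannels_to_rgb(x, warp=self.warp)

out = NChannelsToRGB()(torch.rand(1, 6, 4, 4))
```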

@ProGamerGov (Contributor, Author) commented:

@NarineK So, I actually think it would be better to keep circuits.py outside of _utils/image for the moment. Other than that, I've made all the suggested changes to circuits.py and reducer.py.

@NarineK (Contributor) left a review:

Thank you for addressing all comments, @ProGamerGov!
I left a couple more nit comments. After we finish those, we can merge.

Before we merge to master later next year, we'd need to add more description in the tutorials, as well as more code documentation and testing. But we can do that in later PRs.

@ProGamerGov (Contributor, Author) commented Dec 23, 2020

@NarineK Alright, I've made the final fixes, so we can merge now!!

And I'll create a separate PR in the future for adding documentation for our first alpha version of optim-wip!

@NarineK (Contributor) commented Dec 23, 2020

> @NarineK Alright, I've made the final fixes, so we can merge now!!
>
> And I'll create a separate PR in the future for adding documentation for our first alpha version of optim-wip!

Awesome! Thank you very much, @ProGamerGov ! Merging this PR!

