Inversion of prototype transforms #6062

Open
pmeier opened this issue May 22, 2022 · 3 comments

@pmeier
Collaborator

pmeier commented May 22, 2022

When debugging vision models, it is often useful to be able to map predicted bounding boxes, segmentation masks, or keypoints back onto the original image. To do this conveniently, each transformation should know how to invert itself. A discussion about this can be found in this thread. While useful, it was deemed a lower priority than adding general support for non-image input types in the prototype transforms. However, from the preliminary discussions, inverting transformations seems not to conflict with the proposal and thus can be added later.

Apart from the thread linked above there were some discussions without written notes. They are listed here so they don’t get lost:

  • While some transformations can be statically inverted, transformations with random elements can only be inverted for a specific sampled parameter set. In the thread linked above, this parameter set would need to be returned by the forward transformation and used for the inverse. Instead of passing the parameter set around, the transformation could also save it from the last call and use that for inversion.
  • Some transformations are only pseudo-invertible. For example, while cropping is the true inverse of padding, the same is not true the other way around. By cropping first, information is eliminated that cannot be revived by padding. Thus, padding is just the pseudo-inverse of cropping. The inversion functionality should have a strict flag that, if set, disallows pseudo-inverses.
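The two ideas above — saving the last sampled parameter set for inversion, and a strict flag that rejects pseudo-inverses — can be sketched with a toy transform. This is a hypothetical illustration of the pattern, not the torchvision API; all names (`RandomShift`, `invert`, `_last_params`) are made up for the example:

```python
import random

class RandomShift:
    """Toy random transform that remembers the parameters of its last
    forward call so that invert() can undo exactly that call."""

    def __init__(self, max_shift=10):
        self.max_shift = max_shift
        self._last_params = None  # parameter set sampled by the last call

    def __call__(self, points):
        # Sample a random shift and save it for later inversion.
        dx = random.randint(-self.max_shift, self.max_shift)
        dy = random.randint(-self.max_shift, self.max_shift)
        self._last_params = (dx, dy)
        return [(x + dx, y + dy) for x, y in points]

    def invert(self, points, strict=True):
        # Shifting is truly invertible, so `strict` never rejects it here;
        # a crop, which destroys information, would raise when strict=True.
        if self._last_params is None:
            raise RuntimeError("invert() called before any forward call")
        dx, dy = self._last_params
        return [(x - dx, y - dy) for x, y in points]

t = RandomShift()
pts = [(3, 4), (10, 20)]
restored = t.invert(t(pts))
assert restored == pts
```

The same pattern extends to the alternative design from the linked thread, where the forward call returns the sampled parameters and the caller passes them back to `invert` explicitly.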

cc @vfdev-5 @datumbox @bjuncek @pmeier

@vfdev-5
Collaborator

vfdev-5 commented May 22, 2022

Following the linked thread and Yuxin's comment, transform inversion makes a lot of sense for test-time augmentations (TTA) where we want to reduce prediction variance by combining multiple predictions produced by a single model on transformed input data:

output0 = model(input)
output1 = transform1.invert(model(transform1(input, params1)), params1)
output2 = transform2.invert(model(transform2(input, params2)), params2)
...
final_output = aggregate(output0, output1, output2, ...)

If we want TTA to be most effective, we may not want to use non-invertible transforms (like crop), as we won't be able to map predictions back to the original space. IMO, we can initially provide the inversion feature for invertible ops only.
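The TTA pseudocode above can be made concrete with horizontal flipping, which is self-inverse and therefore exactly restores predictions to the original space. This is a hedged, self-contained sketch with a stand-in "model" and hypothetical helper names (`hflip_box`, `aggregate`), not real torchvision code:

```python
# Image width used by the horizontal flip; boxes are (x1, y1, x2, y2).
W = 100

def hflip_box(box):
    # Map a box through a horizontal flip of an image of width W.
    # Applying this twice returns the original box (self-inverse).
    x1, y1, x2, y2 = box
    return (W - x2, y1, W - x1, y2)

def model(boxes):
    # Stand-in "perfect detector": predicts exactly the boxes it is shown,
    # so all augmented predictions should agree after inversion.
    return list(boxes)

def aggregate(*outputs):
    # Average box coordinates element-wise across augmented predictions.
    n = len(outputs)
    return [tuple(sum(c) / n for c in zip(*boxes)) for boxes in zip(*outputs)]

gt = [(10, 20, 30, 40)]
output0 = model(gt)
# Transform the input, predict, then invert the prediction:
output1 = [hflip_box(b) for b in model([hflip_box(b) for b in gt])]
final_output = aggregate(output0, output1)
assert final_output == gt
```

A crop would not fit this scheme: a box predicted near the crop boundary may be truncated or lost entirely, and no inverse transform can recover it, which is exactly the pseudo-inverse problem raised above.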

@JLrumberger
Contributor

I have invertible transformations up and running for the following transformations: mirror, translate, zoom, scale, rotate, shear and elastic transformations and I'd be happy to contribute if you want :)
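For the geometric transforms listed above, inversion of keypoints reduces to applying the inverse affine map. A minimal sketch for rotation, assuming keypoints as (x, y) tuples (the function names are illustrative, not the contributor's actual implementation):

```python
import math

def rotate_points(points, angle_deg, center=(0.0, 0.0)):
    # Rotate keypoints by angle_deg (counter-clockwise) around `center`.
    a = math.radians(angle_deg)
    cx, cy = center
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [
        (cx + (x - cx) * cos_a - (y - cy) * sin_a,
         cy + (x - cx) * sin_a + (y - cy) * cos_a)
        for x, y in points
    ]

def invert_rotate_points(points, angle_deg, center=(0.0, 0.0)):
    # Rotation is exactly invertible: apply the opposite angle.
    return rotate_points(points, -angle_deg, center)

pts = [(1.0, 0.0), (0.0, 2.0)]
restored = invert_rotate_points(rotate_points(pts, 30.0), 30.0)
assert all(
    math.isclose(a, b, abs_tol=1e-9)
    for p, q in zip(pts, restored)
    for a, b in zip(p, q)
)
```

Mirror, translate, zoom, scale, and shear invert the same way (negate or reciprocate the parameters); elastic transformations typically need the sampled displacement field to be stored and inverted numerically.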

@datumbox
Contributor

datumbox commented Jun 24, 2022

@JLrumberger thanks, this is very interesting! We definitely want to consider this after finalizing the main API of the transforms. We want to avoid making it more complex right now, but if you are happy to wait, we can kick this off once the prototype is complete. What do you think? In the meantime, other contributions from you are very welcome!
