Various doc enhancements #7326


Merged: NicolasHug merged 8 commits into pytorch:main on Feb 24, 2023

Conversation

NicolasHug (Member):

No description provided.

@@ -105,7 +105,9 @@ def __repr__(self) -> str:


class ToTensor:
- """Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor. This transform does not support torchscript.
+ """Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor and scale the values accordingly.
NicolasHug (Member Author):

Note that in a bunch of these I had to remove the full stops from those first sentences, because otherwise Sphinx would stop after the first "." when rendering the "preview", which would truncate relevant info.

Collaborator:

We always mention "PIL Image" without ticks and numpy ndarray as "ndarray". Would it be worth keeping things consistent everywhere?

NicolasHug (Member Author):

Maybe, but honestly I'd rather make that kind of change later. The use of ticks doesn't come from this PR.

@@ -9,7 +9,7 @@


class ConvertBoundingBoxFormat(Transform):
- """[BETA] Convert bounding box coordinates to the given ``format``, e.g. from "CXCYWH" to "XYXY".
+ """[BETA] Convert bounding box coordinates to the given ``format``, eg from "CXCYWH" to "XYXY".
NicolasHug (Member Author):

For the "e.g." here, I had to remove the full stop.
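
For readers unfamiliar with the box formats, here is a quick sketch of the CXCYWH to XYXY conversion the docstring refers to, shown with the stable torchvision.ops.box_convert helper rather than the beta transform (an illustration, not code from this PR):

import torch
from torchvision.ops import box_convert

# One box in CXCYWH (center-x, center-y, width, height) format.
boxes_cxcywh = torch.tensor([[50.0, 50.0, 20.0, 10.0]])

# Convert to XYXY (x-min, y-min, x-max, y-max).
boxes_xyxy = box_convert(boxes_cxcywh, in_fmt="cxcywh", out_fmt="xyxy")
print(boxes_xyxy)  # tensor([[40., 45., 60., 55.]])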


.. betastatus:: ToDtype transform

Args:
- dtype (dtype or dict of Datapoint -> dtype): The dtype to convert to. A dict can be passed to specify
+ dtype (``torch.dtype`` or dict of ``Datapoint`` -> ``torch.dtype``): The dtype to convert to.
NicolasHug (Member Author):

Wrapping in double backticks seems to fix the sad formatting you noticed @pmeier. We should fix those in the rest of the docs.
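
For context, a minimal sketch of the dict form this Args entry documents, assuming the beta transforms.v2 and datapoints API as it existed around this PR (names and behavior changed in later releases), so treat it as illustrative only:

import torch
from torchvision import datapoints
from torchvision.transforms import v2

# The dict maps Datapoint subclasses to target dtypes, as the docstring describes.
# In the beta API of the time, ToDtype casts the dtype; it does not rescale values.
to_dtype = v2.ToDtype({datapoints.Image: torch.float32, datapoints.BoundingBox: torch.float64})

img = datapoints.Image(torch.randint(0, 256, (3, 8, 8), dtype=torch.uint8))
print(to_dtype(img).dtype)  # torch.float32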

pmeier (Collaborator) left a comment:

As an overarching comment: Should we invert the logic and warn users on the transforms that do scale the values? This is outlier behavior and AFAIK only exhibited by ToTensor, which is why we deprecated it. Granted, ToImagePIL also does, but this is out of necessity, since PIL does not support RGB float images.
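
For readers following the thread, a small illustration of the scale vs no-scale distinction under discussion, using the stable v1 transforms (a sketch for context, not code touched by this PR):

import numpy as np
import PIL.Image
from torchvision import transforms

# A 4x4 all-white RGB image with uint8 pixel values of 255.
img = PIL.Image.fromarray(np.full((4, 4, 3), 255, dtype=np.uint8))

# ToTensor converts *and* rescales: uint8 [0, 255] -> float32 [0.0, 1.0].
scaled = transforms.ToTensor()(img)
print(scaled.dtype, scaled.max())      # torch.float32 tensor(1.)

# PILToTensor only converts: the values stay uint8 in [0, 255].
unscaled = transforms.PILToTensor()(img)
print(unscaled.dtype, unscaled.max())  # torch.uint8 tensor(255, dtype=torch.uint8)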

@@ -34,6 +34,7 @@
sys.path.append(os.path.abspath("."))

torchvision.disable_beta_transforms_warning()
+ import torchvision.datapoints  # Don't remove, otherwise the docs for datapoints aren't linked properly
Collaborator:

Could you explain what is happening?

NicolasHug (Member Author):

honestly I have no idea.

@@ -2,6 +2,12 @@ Datapoints
==========

.. currentmodule:: torchvision.datapoints

+ Datapoints are tensor subclasses which the v2 transforms use under the hood to
Collaborator:

Suggested change:
- Datapoints are tensor subclasses which the v2 transforms use under the hood to
+ Datapoints are tensor subclasses which :mod:`~torchvision.transforms.v2` uses under the hood to

NicolasHug (Member Author):

I'm not sure this is what we want; this just renders as "v2" and isn't linked to anything:

[screenshot of the rendered docs]
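
As background, a minimal sketch of what "tensor subclasses" means here, assuming the beta torchvision.datapoints namespace of the time (renamed in later releases):

import torch
from torchvision import datapoints  # beta namespace at the time of this PR

# A datapoint wraps a plain tensor and is itself a torch.Tensor subclass,
# which is how the v2 transforms pick the right kernel for each input type.
img = datapoints.Image(torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8))
print(isinstance(img, torch.Tensor))  # True
print(img.shape)                      # torch.Size([3, 32, 32])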

- raise ValueError(
-     f"Length of p doesn't match the number of transforms: " f"{len(p)} != {len(transforms)}"
- )
+ raise ValueError(f"Length of p doesn't match the number of transforms: " f"{len(p)} != {len(transforms)}")
Collaborator:

This is coming from a lint failure on main.


.. betastatus:: ToDtype transform

Args:
- dtype (dtype or dict of Datapoint -> dtype): The dtype to convert to. A dict can be passed to specify
+ dtype (``torch.dtype`` or dict of ``Datapoint`` -> ``torch.dtype``): The dtype to convert to.
+     A dict can be passed to specify
Collaborator:

This line looks oddly short. Shouldn't it be merged with the one below?

NicolasHug (Member Author):

OK but it doesn't matter at all for the rendering, so let's please not worry too much about such details at this time

@@ -27,7 +27,8 @@ def _transform(self, inpt: PIL.Image.Image, params: Dict[str, Any]) -> torch.Ten


class ToImageTensor(Transform):
- """[BETA] Convert a tensor or an ndarray or PIL Image to :class:`~torchvision.datapoints.Image`.
+ """[BETA] Convert a tensor, ndarray, or PIL Image to :class:`~torchvision.datapoints.Image`
+ ; this does not scale values.
Collaborator:

Consistency with all other docstrings.

Suggested change:
- ; this does not scale values.
+ - this does not scale values.

NicolasHug (Member Author):

I went for ";" because I suspect Sphinx would treat the "-" as a list entry, since it's the first character of the line.

NicolasHug (Member Author):

comments were addressed @pmeier @vfdev-5

> Should we invert the logic and warn users on the transforms that do scale the values? This is outlier behavior and AFAIK only exhibited by ToTensor, which is why we deprecated it. Granted, ToImagePIL also does, but this is out of necessity, since PIL does not support RGB float images.

Sorry, I don't understand. We don't warn about anything right now; I just documented the behaviour of all transforms w.r.t. scaling vs non-scaling. I think we should document that for all of them, not just those that scale.

vfdev-5 (Collaborator) left a comment:

LGTM, thanks @NicolasHug!

NicolasHug merged commit 877ffd9 into pytorch:main on Feb 24, 2023
github-actions (bot):

Hey @NicolasHug!

You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py

NicolasHug added a commit to NicolasHug/vision that referenced this pull request Feb 24, 2023
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: vfdev <[email protected]>
facebook-github-bot pushed a commit that referenced this pull request Mar 30, 2023
Summary:

Reviewed By: vmoens

Differential Revision: D44416641

fbshipit-source-id: 71db4e469d74407ef0319056a3b76bf4b502cb44

Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: vfdev <[email protected]>