diff --git a/docs/source/transforms.rst b/docs/source/transforms.rst
index fbda5932735..ed7436fa0d1 100644
--- a/docs/source/transforms.rst
+++ b/docs/source/transforms.rst
@@ -16,7 +16,7 @@ random transformations applied on the batch of Tensor Images identically transfo
 
 
 Scriptable transforms
-^^^^^^^^^^^^^^^^^^^^^
+---------------------
 
 In order to script the transformations, please use ``torch.nn.Sequential`` instead of :class:`Compose`.
 
@@ -33,6 +33,8 @@ Make sure to use only scriptable transformations, i.e. that work with ``torch.Te
 
 For any custom transformations to be used with ``torch.jit.script``, they should be derived from ``torch.nn.Module``.
 
+Compositions of transforms
+--------------------------
 
 .. autoclass:: Compose
 
diff --git a/torchvision/transforms/transforms.py b/torchvision/transforms/transforms.py
index 2a585f98c3f..3bdb108a3b5 100644
--- a/torchvision/transforms/transforms.py
+++ b/torchvision/transforms/transforms.py
@@ -161,11 +161,11 @@ class ToPILImage:
     Args:
         mode (`PIL.Image mode`_): color space and pixel depth of input data (optional).
             If ``mode`` is ``None`` (default) there are some assumptions made about the input data:
-        - If the input has 4 channels, the ``mode`` is assumed to be ``RGBA``.
-        - If the input has 3 channels, the ``mode`` is assumed to be ``RGB``.
-        - If the input has 2 channels, the ``mode`` is assumed to be ``LA``.
-        - If the input has 1 channel, the ``mode`` is determined by the data type (i.e ``int``, ``float``,
-          ``short``).
+            - If the input has 4 channels, the ``mode`` is assumed to be ``RGBA``.
+            - If the input has 3 channels, the ``mode`` is assumed to be ``RGB``.
+            - If the input has 2 channels, the ``mode`` is assumed to be ``LA``.
+            - If the input has 1 channel, the ``mode`` is determined by the data type (i.e ``int``, ``float``,
+              ``short``).
 
     .. _PIL.Image mode: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#concept-modes
     """
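
For context on the "Scriptable transforms" guidance above, here is a minimal sketch of the pattern the doc text describes: scriptable (Tensor-based) transforms wrapped in ``torch.nn.Sequential`` rather than ``Compose``, plus a custom transform derived from ``torch.nn.Module``. The choice of ``CenterCrop``/``Normalize``, the ``ClampTransform`` class, and the input sizes are illustrative assumptions, not part of this patch.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T


class ClampTransform(nn.Module):
    # Hypothetical custom transform; deriving from nn.Module keeps it
    # compatible with torch.jit.script, as the docs require.
    def __init__(self, min_val: float = 0.0, max_val: float = 1.0):
        super().__init__()
        self.min_val = min_val
        self.max_val = max_val

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.clamp(self.min_val, self.max_val)


# Compose() is not scriptable; nn.Sequential is.
transforms = nn.Sequential(
    T.CenterCrop(10),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ClampTransform(),
)
scripted_transforms = torch.jit.script(transforms)

# Applies identically to each image in a batch of Tensor Images (B x C x H x W).
batch = torch.rand(4, 3, 32, 32)
out = scripted_transforms(batch)
```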
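The ``ToPILImage`` hunk only re-indents the bullet list so Sphinx renders the ``mode`` assumptions correctly; the behaviour itself is unchanged. As a small usage sketch of those assumptions (the shapes and dtypes below are arbitrary examples):

```python
import torch
from torchvision.transforms import ToPILImage

to_pil = ToPILImage()  # mode=None, so the mode is inferred from the input

# 3-channel float tensor (C x H x W) -> mode assumed to be "RGB"
rgb = to_pil(torch.rand(3, 64, 64))
print(rgb.mode)  # "RGB"

# 1-channel tensor -> mode determined by the data type ("L" for uint8 here)
gray = to_pil(torch.randint(0, 256, (1, 64, 64), dtype=torch.uint8))
print(gray.mode)  # "L"
```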