Commit c27c869

Merge branch 'main' of github.com:pytorch/vision into ufmt
2 parents 8477cb8 + 903ea4a

104 files changed: +2864 additions, −519 deletions

.circleci/config.yml

Lines changed: 9 additions & 2 deletions
Generated file; diff not rendered by default.

.circleci/config.yml.in

Lines changed: 9 additions & 2 deletions
@@ -200,6 +200,7 @@ jobs:
       pip install --user --progress-bar off mypy
       pip install --user --progress-bar off types-requests
       pip install --user --progress-bar off --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
+      pip install --user --progress-bar off git+https://github.com/pytorch/data.git
       pip install --user --progress-bar off --no-build-isolation --editable .
       mypy --config-file mypy.ini

@@ -739,7 +740,7 @@ jobs:
     executor:
       name: windows-gpu
     environment:
-      CUDA_VERSION: "10.2"
+      CUDA_VERSION: "11.1"
       PYTHON_VERSION: << parameters.python_version >>
     steps:
       - checkout

@@ -763,6 +764,12 @@ jobs:
           paths:
             - conda
             - env
+      - run:
+          name: Install CUDA
+          command: packaging/windows/internal/cuda_install.bat
+      - run:
+          name: Update CUDA driver
+          command: packaging/windows/internal/driver_update.bat
       - run:
           name: Install torchvision
           command: .circleci/unittest/windows/scripts/install.sh

@@ -835,7 +842,7 @@ jobs:
     <<: *binary_common
     machine:
       image: ubuntu-1604-cuda-10.2:202012-01
-      resource_class: gpu.small
+      resource_class: gpu.nvidia.small
     environment:
       PYTHON_VERSION: << parameters.python_version >>
       PYTORCH_VERSION: << parameters.pytorch_version >>

.circleci/unittest/linux/scripts/install.sh

Lines changed: 4 additions & 0 deletions
@@ -26,6 +26,10 @@ fi
 printf "Installing PyTorch with %s\n" "${cudatoolkit}"
 conda install -y -c "pytorch-${UPLOAD_CHANNEL}" "pytorch-${UPLOAD_CHANNEL}"::pytorch "${cudatoolkit}" pytest

+printf "Installing torchdata from source"
+pip install git+https://github.com/pytorch/data.git
+
+
 if [ $PYTHON_VERSION == "3.6" ]; then
     printf "Installing minimal PILLOW version\n"
     # Install the minimal PILLOW version. Otherwise, let setup.py install the latest

.circleci/unittest/windows/scripts/install.sh

Lines changed: 19 additions & 2 deletions
@@ -5,7 +5,7 @@ unset PYTORCH_VERSION
 # so no need to set PYTORCH_VERSION.
 # In fact, keeping PYTORCH_VERSION forces us to hardcode PyTorch version in config.

-set -e
+set -ex

 this_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

@@ -26,13 +26,30 @@ else
 fi

 printf "Installing PyTorch with %s\n" "${cudatoolkit}"
-conda install -y -c "pytorch-${UPLOAD_CHANNEL}" "pytorch-${UPLOAD_CHANNEL}"::pytorch "${cudatoolkit}" pytest
+# conda-forge channel is required for cudatoolkit 11.1 on Windows, see https://github.com/pytorch/vision/issues/4458
+conda install -y -c "pytorch-${UPLOAD_CHANNEL}" -c conda-forge "pytorch-${UPLOAD_CHANNEL}"::pytorch "${cudatoolkit}" pytest
+
+printf "Installing torchdata from source"
+pip install git+https://github.com/pytorch/data.git
+

 if [ $PYTHON_VERSION == "3.6" ]; then
     printf "Installing minimal PILLOW version\n"
     # Install the minimal PILLOW version. Otherwise, let setup.py install the latest
     pip install pillow>=5.3.0
 fi

+torch_cuda=$(python -c "import torch; print(torch.cuda.is_available())")
+echo torch.cuda.is_available is $torch_cuda
+
+if [ ! -z "${CUDA_VERSION:-}" ] ; then
+    if [ "$torch_cuda" == "False" ]; then
+        echo "torch with cuda installed but torch.cuda.is_available() is False"
+        exit 1
+    fi
+fi
+
+source "$this_dir/set_cuda_envs.sh"
+
 printf "* Installing torchvision\n"
 "$this_dir/vc_env_helper.bat" python setup.py develop

.circleci/unittest/windows/scripts/run_test.sh

Lines changed: 3 additions & 0 deletions
@@ -5,6 +5,9 @@ set -e
 eval "$(./conda/Scripts/conda.exe 'shell.bash' 'hook')"
 conda activate ./env

+this_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+source "$this_dir/set_cuda_envs.sh"
+
 export PYTORCH_TEST_WITH_SLOW='1'
 python -m torch.utils.collect_env
 pytest --cov=torchvision --junitxml=test-results/junit.xml -v --durations 20 test --ignore=test/test_datasets_download.py
.circleci/unittest/windows/scripts/set_cuda_envs.sh

Lines changed: 39 additions & 26 deletions
@@ -1,35 +1,48 @@
 #!/usr/bin/env bash
+set -ex

-if [ "${CU_VERSION:-}" == "cpu" ] ; then
-    exit 0
-fi
+echo CU_VERSION is "${CU_VERSION}"
+echo CUDA_VERSION is "${CUDA_VERSION}"

-if [[ ${#CU_VERSION} -eq 5 ]]; then
-    CUDA_VERSION="${CU_VERSION:2:2}.${CU_VERSION:4:1}"
+# Currenly, CU_VERSION and CUDA_VERSION are not consistent.
+# to understand this code, see https://github.com/pytorch/vision/issues/4443
+version="cpu"
+if [[ ! -z "${CUDA_VERSION}" ]] ; then
+    version="$CUDA_VERSION"
+else
+    if [[ ${#CU_VERSION} -eq 5 ]]; then
+        version="${CU_VERSION:2:2}.${CU_VERSION:4:1}"
+    fi
 fi

-# It's a log to see if CU_VERSION exists, if not, we use environment CUDA_VERSION directly
-# in unittest_windows_gpu, there's no CU_VERSION, but CUDA_VERSION.
-echo "Using CUDA $CUDA_VERSION, CU_VERSION is $CU_VERSION now"
+# Don't use if [[ "$version" == "cpu" ]]; then exit 0 fi.
+# It would exit the shell. One result is cpu tests would not run if the shell exit.
+# Unless there's an error, Don't exit.
+if [[ "$version" != "cpu" ]]; then
+    # set cuda envs
+    export PATH="/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v${version}/bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v${version}/libnvvp:$PATH"
+    export CUDA_PATH_V${version/./_}="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v${version}"
+    export CUDA_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v${version}"

-version=$CUDA_VERSION
+    if [ ! -d "$CUDA_PATH" ]; then
+        echo "$CUDA_PATH" does not exist
+        exit 1
+    fi

-# set cuda envs
-export PATH="/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v${version}/bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v${version}/libnvvp:$PATH"
-export CUDA_PATH_V${version/./_}="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v${version}"
-export CUDA_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v${version}"
+    if [ ! -f "${CUDA_PATH}\include\nvjpeg.h" ]; then
+        echo "nvjpeg does not exist"
+        exit 1
+    fi

-if [ ! -d "$CUDA_PATH" ]
-then
-    echo "$CUDA_PATH" does not exist
-    exit 1
-fi
+    # check cuda driver version
+    for path in '/c/Program Files/NVIDIA Corporation/NVSMI/nvidia-smi.exe' /c/Windows/System32/nvidia-smi.exe; do
+        if [[ -x "$path" ]]; then
+            "$path" || echo "true";
+            break
+        fi
+    done

-# check cuda driver version
-for path in '/c/Program Files/NVIDIA Corporation/NVSMI/nvidia-smi.exe' /c/Windows/System32/nvidia-smi.exe; do
-    if [[ -x "$path" ]]; then
-        "$path" || echo "true";
-        break
-    fi
-done
-which nvcc
+    which nvcc
+    nvcc --version
+    env | grep CUDA
+fi

.github/ISSUE_TEMPLATE/bug-report.yml

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ body:
       description: |
         Please provide a clear and concise description of what the bug is.

-        If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for he snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports, etc. For example:
+        If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for the snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports, etc. For example:

        ```python
        # All necessary imports at the beginning

.github/workflows/pr-labels.yml

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ name: pr-labels
 on:
   push:
     branches:
-      - master
+      - main

 jobs:
   is-properly-labeled:

README.rst

Lines changed: 14 additions & 0 deletions
@@ -106,6 +106,20 @@ otherwise, add the include and library paths in the environment variables ``TORC
 .. _libjpeg: http://ijg.org/
 .. _libjpeg-turbo: https://libjpeg-turbo.org/

+Video Backend
+=============
+Torchvision currently supports the following video backends:
+
+* [pyav](https://github.com/PyAV-Org/PyAV) (default) - Pythonic binding for ffmpeg libraries.
+
+* video_reader - This needs ffmpeg to be installed and torchvision to be built from source. There shouldn't be any conflicting version of ffmpeg installed. Currently, this is only supported on Linux.
+
+.. code:: bash
+
+  conda install -c conda-forge ffmpeg
+  python setup.py install
+
+
 Using the models on C++
 =======================
 TorchVision provides an example project for how to use the models on C++ using JIT Script.
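The README hunk above documents the new Video Backend section. As a quick, hedged sketch of how a user might switch to the video_reader backend once torchvision has been built from source with ffmpeg available (the file path below is a placeholder, not part of this commit):

    # Sketch: select the video_reader backend and decode a clip.
    # Requires torchvision built from source with ffmpeg; "video.mp4" is a placeholder path.
    import torchvision
    from torchvision.io import read_video

    torchvision.set_video_backend("video_reader")  # the default backend is "pyav"
    frames, audio, info = read_video("video.mp4", pts_unit="sec")
    print(frames.shape, info)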

docs/requirements.txt

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
-sphinx==3.5.4
-sphinx-gallery>=0.9.0
-sphinx-copybutton>=0.3.1
 matplotlib
 numpy
+sphinx-copybutton>=0.3.1
+sphinx-gallery>=0.9.0
+sphinx==3.5.4
 -e git+git://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme

docs/source/feature_extraction.rst

Lines changed: 39 additions & 6 deletions
@@ -19,8 +19,8 @@ It works by following roughly these steps:

 1. Symbolically tracing the model to get a graphical representation of
    how it transforms the input, step by step.
-2. Setting the user-selected graph nodes as ouputs.
-3. Removing all redundant nodes (anything downstream of the ouput nodes).
+2. Setting the user-selected graph nodes as outputs.
+3. Removing all redundant nodes (anything downstream of the output nodes).
 4. Generating python code from the resulting graph and bundling that into a
    PyTorch module together with the graph itself.

@@ -30,6 +30,39 @@ The `torch.fx documentation <https://pytorch.org/docs/stable/fx.html>`_
 provides a more general and detailed explanation of the above procedure and
 the inner workings of the symbolic tracing.

+.. _about-node-names:
+
+**About Node Names**
+
+In order to specify which nodes should be output nodes for extracted
+features, one should be familiar with the node naming convention used here
+(which differs slightly from that used in ``torch.fx``). A node name is
+specified as a ``.`` separated path walking the module hierarchy from top level
+module down to leaf operation or leaf module. For instance ``"layer4.2.relu"``
+in ResNet-50 represents the output of the ReLU of the 2nd block of the 4th
+layer of the ``ResNet`` module. Here are some finer points to keep in mind:
+
+- When specifying node names for :func:`create_feature_extractor`, you may
+  provide a truncated version of a node name as a shortcut. To see how this
+  works, try creating a ResNet-50 model and printing the node names with
+  ``train_nodes, _ = get_graph_node_names(model) print(train_nodes)`` and
+  observe that the last node pertaining to ``layer4`` is
+  ``"layer4.2.relu_2"``. One may specify ``"layer4.2.relu_2"`` as the return
+  node, or just ``"layer4"`` as this, by convention, refers to the last node
+  (in order of execution) of ``layer4``.
+- If a certain module or operation is repeated more than once, node names get
+  an additional ``_{int}`` postfix to disambiguate. For instance, maybe the
+  addition (``+``) operation is used three times in the same ``forward``
+  method. Then there would be ``"path.to.module.add"``,
+  ``"path.to.module.add_1"``, ``"path.to.module.add_2"``. The counter is
+  maintained within the scope of the direct parent. So in ResNet-50 there is
+  a ``"layer4.1.add"`` and a ``"layer4.2.add"``. Because the addition
+  operations reside in different blocks, there is no need for a postfix to
+  disambiguate.
+
+
+**An Example**
+
 Here is an example of how we might extract features for MaskRCNN:

 .. code-block:: python

@@ -80,10 +113,10 @@ Here is an example of how we might extract features for MaskRCNN:
     # Now you can build the feature extractor. This returns a module whose forward
     # method returns a dictionary like:
     # {
-    #     'layer1': ouput of layer 1,
-    #     'layer2': ouput of layer 2,
-    #     'layer3': ouput of layer 3,
-    #     'layer4': ouput of layer 4,
+    #     'layer1': output of layer 1,
+    #     'layer2': output of layer 2,
+    #     'layer3': output of layer 3,
+    #     'layer4': output of layer 4,
     # }
     create_feature_extractor(m, return_nodes=return_nodes)
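The feature_extraction.rst hunk above explains the node-naming convention used by create_feature_extractor. A minimal, hedged sketch of that workflow on ResNet-50 (the return-node choices are illustrative only, using the truncated-name shortcut described in the added docs):

    # Sketch: list graph node names for ResNet-50, then build a feature extractor.
    # "layer4" uses the truncated-name shortcut for "layer4.2.relu_2" described above.
    import torch
    from torchvision.models import resnet50
    from torchvision.models.feature_extraction import (
        create_feature_extractor,
        get_graph_node_names,
    )

    model = resnet50()
    train_nodes, eval_nodes = get_graph_node_names(model)
    print(train_nodes[-5:])  # inspect the last few node names

    extractor = create_feature_extractor(model, return_nodes={"layer4": "feat4", "avgpool": "pooled"})
    out = extractor(torch.rand(1, 3, 224, 224))
    print({k: v.shape for k, v in out.items()})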
