Merge from upstream #189

Merged: 71 commits (Sep 7, 2018)

Commits (all changes):
- 9ca63c5  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (#11205) (ezyang, Sep 5, 2018)
- 56bdd87  Get rid of some uses of type() (#11215) (ezyang, Sep 5, 2018)
- d1b920b  keep net type info when generating model complete net (#11032) (harouwu, Sep 5, 2018)
- d4060d2  Implement torch.tensordot (#10025) (t-vi, Sep 5, 2018)
- b7cd4b6  add a Float16UniformFill (#11123) (Sep 5, 2018)
- b7038f7  Treat numerical differences as warnings instead of errors when tracin… (apaszke, Sep 5, 2018)
- 6d6655e  Port PackedSequences functions to C++ (#11224) (apaszke, Sep 5, 2018)
- 8476972  Merge remote-tracking branch 'upstream/master' into ifu (iotamudelta, Sep 5, 2018)
- d612855  nomnigraph - fix memory error in NN subgraph matchOp (#11127) (duc0, Sep 5, 2018)
- aeb6094  Unify opt flag for cmake codegen (#11227) (cpuhrsch, Sep 5, 2018)
- 1808e36  Add complex hooks for out of tree complex implementation. (#11216) (ezyang, Sep 5, 2018)
- 9fc22cb  Add import export step to end to end tests (Sep 5, 2018)
- 434e943  Fix to distribution.__repr__ with lazy attributes (#11263) (neerajprad, Sep 5, 2018)
- 8bd80a6  Fixed log message (#10874) (Sep 5, 2018)
- 267e1ec  Accept more numpy scalars as doubles (#9659) (t-vi, Sep 5, 2018)
- e6d6aed  Check doxygen output in travis (#11124) (goldsborough, Sep 5, 2018)
- 5521250  Improve error message to include return types too (#11245) (apaszke, Sep 5, 2018)
- f866574  Fix the batchnorm onnx exporting when affine=False (houseroad, Sep 5, 2018)
- 5e2067c  Fix some more warnings (#11257) (ssnl, Sep 5, 2018)
- 4fe3356  Move collapse dims into a single place (#11272) (cpuhrsch, Sep 5, 2018)
- 8da081f  Add cost inference to ConvGradient and WeightedSum operators (#10744) (Sep 5, 2018)
- 9a0effb  Update send/recv tests to reflect intended use (#11275) (pietern, Sep 5, 2018)
- 68c2e01  Handling for py2/py3 division differences (#11016) (zou3519, Sep 5, 2018)
- 3e85685  add persistent rnns with conservative criteria (#11248) (Sep 5, 2018)
- ac9f0a6  refactor preproc, support dense in TumHistory layer (Wakeupbuddy, Sep 5, 2018)
- 9f4bcdf  caffe2::DeviceType -> at::DeviceType (#11254) (jerryzh168, Sep 5, 2018)
- c9e6635  Port all PyTorch and Caffe2 jobs to CircleCI (#11264) (Sep 5, 2018)
- c431872  Small fixes to cppdocs for sync script (#11300) (goldsborough, Sep 5, 2018)
- 57728f7  nomnigraph - simplify core graph API and test (#11256) (duc0, Sep 5, 2018)
- c0efe6f  Forward declarations of needed curand functions (#10911) (pjh5, Sep 5, 2018)
- a9d8b02  Remove THFinalizer (cpuhrsch, Sep 5, 2018)
- ad11621  typo fix Tranpose2D -> Transpose2D (#11281) (jspark1105, Sep 6, 2018)
- 425ea6b  fix doc for functional.dropout* (#10417) (weiyangfb, Sep 6, 2018)
- fa147ab  Add convertToCaffe2Proto to python API (bwasti, Sep 6, 2018)
- 83a1ab2  Sparse tensor printing; add NotImplemented autograd fn (#10181) (ssnl, Sep 6, 2018)
- dccd0f2  Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (#11050) (goldsborough, Sep 6, 2018)
- fb836db  Fix conv gradient conversion (#11312) (orionr, Sep 6, 2018)
- 126ac4b  Back out "[pt1][tensor] Add strides to caffe2::Tensor" (jerryzh168, Sep 6, 2018)
- 220c9e5  Distributed Data Parallel CPU module for C10D (#11168) (teng-li, Sep 6, 2018)
- bb7d183  Add dead code elimination pass (#10101) (bwasti, Sep 6, 2018)
- 656e81d  Fix scalar tensor assert in fusion compiler (#10952) (zou3519, Sep 6, 2018)
- f6568b0  Change includes from ATen/Storage.h to ATen/core/Storage.h (#11217) (ezyang, Sep 6, 2018)
- 68930c4  Move minimal wrapdim functionality to core, remove THTensor include i… (gchanan, Sep 6, 2018)
- dda8402  Cleanup dependency of distributed flags (#11221) (orionr, Sep 6, 2018)
- a853a74  defer resolution of mkl to a cmake wrapper library (#11298) (anderspapitto, Sep 6, 2018)
- 4ae9573  Ignore FuseGraph Call on Windows (#11015) (Sep 6, 2018)
- 936bba7  cudnn 7 upgrade with spatialBN fix (#11291) (xw285cornell, Sep 6, 2018)
- 0ef2b31  fix empty net type (#11286) (harouwu, Sep 6, 2018)
- 1ad61a1  Rename cuda tests to have 'cuda' in their names (#11332) (zou3519, Sep 6, 2018)
- f98bd53  Small fix to the UniformIntFill tensor shape and type inference. (Sep 6, 2018)
- ed8849b  Add include path to Doxygen preprocessing and add some documentation … (goldsborough, Sep 6, 2018)
- 0f1ec07  nomnigraph - nit - rename unit test files (#11315) (duc0, Sep 6, 2018)
- fef52cc  Add resolver for 'torch' module (#10847) (Sep 6, 2018)
- ec19512  Adding setTimeout option in Store (#11265) (teng-li, Sep 6, 2018)
- 5712fe3  Fix out-of-boundary conversion issue (#11338) (Sep 6, 2018)
- 03ca735  Add unit test for Parallel Spatial Batch Normalization (#11098) (Sep 6, 2018)
- 8577856  Merge remote-tracking branch 'upstream/master' into ifu (iotamudelta, Sep 6, 2018)
- b58da7e  Merge branch 'master' into ifu (iotamudelta, Sep 6, 2018)
- 34c0043  Force third_party Eigen from setup.py (#11334) (orionr, Sep 6, 2018)
- 68613cf  Windows DLL build with Caffe2 code (#11266) (Yangqing, Sep 6, 2018)
- 1d406c0  fix comment on Cost params_bytes (#11190) (jspark1105, Sep 6, 2018)
- 49231ab  Reimplement storage slicing. (#11314) (ezyang, Sep 6, 2018)
- 148f7cc  nomnigraph - nit - fix generated code to be consistent with style (#1… (duc0, Sep 6, 2018)
- 4d67879  enable advanced indexing with tensors (#10862) (zou3519, Sep 6, 2018)
- c39216f  Automatic update of fbcode/onnx to bff0b8835870c7df7762ef43498d000d2d… (houseroad, Sep 7, 2018)
- 1a01c75  support gradClipping per blob in mtml (#10776) (Sep 7, 2018)
- 7726b36  Full-fledged group testings and fixes for c10d frontend APIs (#11318) (teng-li, Sep 7, 2018)
- ec5404a  Add cuda version of SpatialBNOp also optimize SpatialBN on CPU (#10888) (xiaomengy, Sep 7, 2018)
- 9de2085  Use custom hcc/HIP, purge hcSPARSE (#11198) (iotamudelta, Sep 7, 2018)
- 0f419ab  Roll nomnigraph build into caffe2 (#11303) (orionr, Sep 7, 2018)
- 2e2d049  Merge remote-tracking branch 'upstream/master' into ifu (iotamudelta, Sep 7, 2018)
928 changes: 925 additions & 3 deletions .circleci/config.yml

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions .clang-tidy
@@ -4,6 +4,7 @@ Checks: '
*
,clang-analyzer-*
,modernize-*
,-cert-dcl21-cpp
,-cert-err58-cpp
,-cert-err60-cpp
,-clang-diagnostic-*
@@ -12,10 +13,12 @@ Checks: '
,-cppcoreguidelines-pro-bounds-constant-array-index
,-cppcoreguidelines-pro-type-member-init
,-cppcoreguidelines-pro-type-static-cast-downcast
,-cppcoreguidelines-pro-type-union-access
,-cppcoreguidelines-pro-type-vararg
,-cppcoreguidelines-special-member-functions
,-fuchsia-*
,-google-build-using-namespace
,-google-default-arguments
,-google-explicit-constructor
,-google-readability-braces-around-statements
,-google-readability-namespace-comments
@@ -24,6 +27,7 @@ Checks: '
,-google-runtime-references
,-hicpp-braces-around-statements
,-hicpp-explicit-conversions
,-hicpp-member-init
,-hicpp-no-array-decay
,-hicpp-signed-bitwise
,-hicpp-special-member-functions
16 changes: 9 additions & 7 deletions .jenkins/caffe2/build.sh
@@ -64,6 +64,15 @@ if [ -z "${SCCACHE}" ] && which ccache > /dev/null; then
export PATH="$CACHE_WRAPPER_DIR:$PATH"
fi

# sccache will fail for CUDA builds if all cores are used for compiling
if [ -z "$MAX_JOBS" ]; then
if [[ "${BUILD_ENVIRONMENT}" == *-cuda* ]] && [ -n "${SCCACHE}" ]; then
MAX_JOBS=`expr $(nproc) - 1`
else
MAX_JOBS=$(nproc)
fi
fi

report_compile_cache_stats() {
if [[ -n "${SCCACHE}" ]]; then
"$SCCACHE" --show-stats
@@ -184,13 +193,6 @@ if [[ -x "$(command -v cmake3)" ]]; then
else
CMAKE_BINARY=cmake
fi
# sccache will fail for CUDA builds if all cores are used for compiling
if [[ "${BUILD_ENVIRONMENT}" == *-cuda* ]] && [ -n "${SCCACHE}" ]; then
MAX_JOBS=`expr $(nproc) - 1`
else
MAX_JOBS=$(nproc)
fi


###############################################################################
# Configure and make
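The block this change moves to the top of the script computes MAX_JOBS only when the caller has not already set it, and leaves one core free for sccache on CUDA builds. A minimal sketch of the same guard pattern — the `compute_max_jobs` helper and its arguments are illustrative, not part of the actual script:

```shell
#!/bin/sh
# Derive a build-job count, but never override a caller-provided MAX_JOBS.
# Leaving one core free works around sccache failures on CUDA builds.
compute_max_jobs() {
  preset=$1    # caller-provided MAX_JOBS, possibly empty
  cores=$2     # what nproc would report
  flavor=$3    # "cuda" or anything else
  if [ -n "$preset" ]; then
    echo "$preset"               # respect the caller's choice
  elif [ "$flavor" = "cuda" ]; then
    echo $((cores - 1))          # leave one core for sccache
  else
    echo "$cores"
  fi
}

compute_max_jobs ""  8 cuda    # prints 7
compute_max_jobs ""  8 plain   # prints 8
compute_max_jobs "4" 8 cuda    # prints 4
```

Hoisting the guard above the cmake-detection logic also lets a CI job pin MAX_JOBS from the environment before any of the per-build heuristics run.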
10 changes: 6 additions & 4 deletions .jenkins/pytorch/build.sh
@@ -8,13 +8,13 @@
if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install libnccl-dev=2.2.13-1+cuda9.0 libnccl2=2.2.13-1+cuda9.0
sudo apt-get install -y --allow-downgrades --allow-change-held-packages libnccl-dev=2.2.13-1+cuda9.0 libnccl2=2.2.13-1+cuda9.0
fi

if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda8-* ]] || [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-cudnn7-py2* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install openmpi-bin libopenmpi-dev
sudo apt-get install -y --allow-downgrades --allow-change-held-packages openmpi-bin libopenmpi-dev
sudo apt-get install -y --no-install-recommends openssh-client openssh-server
sudo mkdir -p /var/run/sshd
fi
@@ -72,8 +72,10 @@ fi

# sccache will fail for CUDA builds if all cores are used for compiling
# gcc 7 with sccache seems to have intermittent OOM issue if all cores are used
if ([[ "$BUILD_ENVIRONMENT" == *cuda* ]] || [[ "$BUILD_ENVIRONMENT" == *gcc7* ]]) && which sccache > /dev/null; then
export MAX_JOBS=`expr $(nproc) - 1`
if [ -z "$MAX_JOBS" ]; then
if ([[ "$BUILD_ENVIRONMENT" == *cuda* ]] || [[ "$BUILD_ENVIRONMENT" == *gcc7* ]]) && which sccache > /dev/null; then
export MAX_JOBS=`expr $(nproc) - 1`
fi
fi

# Target only our CI GPU machine's CUDA arch to speed up the build
18 changes: 12 additions & 6 deletions .jenkins/pytorch/macos-build.sh
@@ -29,11 +29,15 @@ if [[ "${JOB_BASE_NAME}" == *cuda9.2* ]]; then
export CUDA_HOME=/Developer/NVIDIA/CUDA-${CUDA_VERSION}
export NO_CUDA=0

# Eigen gives "explicit specialization of class must precede its first use" error
# when compiling with Xcode 9.1 toolchain, so we have to use Xcode 8.2 toolchain instead.
export DEVELOPER_DIR=/Library/Developer/CommandLineTools
if [ -z "${IN_CIRCLECI}" ]; then
# Eigen gives "explicit specialization of class must precede its first use" error
# when compiling with Xcode 9.1 toolchain, so we have to use Xcode 8.2 toolchain instead.
export DEVELOPER_DIR=/Library/Developer/CommandLineTools
fi
else
export DEVELOPER_DIR=/Applications/Xcode9.app/Contents/Developer
if [ -z "${IN_CIRCLECI}" ]; then
export DEVELOPER_DIR=/Applications/Xcode9.app/Contents/Developer
fi
fi

export MACOSX_DEPLOYMENT_TARGET=10.9
@@ -62,5 +66,7 @@ export IMAGE_COMMIT_TAG=${BUILD_ENVIRONMENT}-${IMAGE_COMMIT_ID}
python setup.py install

# Upload torch binaries when the build job is finished
7z a ${IMAGE_COMMIT_TAG}.7z ${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages/torch*
aws s3 cp ${IMAGE_COMMIT_TAG}.7z s3://ossci-macos-build/pytorch/${IMAGE_COMMIT_TAG}.7z --acl public-read
if [ -z "${IN_CIRCLECI}" ]; then
7z a ${IMAGE_COMMIT_TAG}.7z ${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages/torch*
aws s3 cp ${IMAGE_COMMIT_TAG}.7z s3://ossci-macos-build/pytorch/${IMAGE_COMMIT_TAG}.7z --acl public-read
fi
26 changes: 16 additions & 10 deletions .jenkins/pytorch/macos-test.sh
@@ -16,18 +16,22 @@ fi
export PATH="${PYTORCH_ENV_DIR}/miniconda3/bin:$PATH"
source ${PYTORCH_ENV_DIR}/miniconda3/bin/activate
conda install -y mkl mkl-include numpy pyyaml setuptools cmake cffi ninja
rm -rf ${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages/torch*
if [ -z "${IN_CIRCLECI}" ]; then
rm -rf ${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages/torch*
fi

git submodule update --init --recursive
export CMAKE_PREFIX_PATH=${PYTORCH_ENV_DIR}/miniconda3/

# Test PyTorch
if [[ "${JOB_BASE_NAME}" == *cuda9.2* ]]; then
# Eigen gives "explicit specialization of class must precede its first use" error
# when compiling with Xcode 9.1 toolchain, so we have to use Xcode 8.2 toolchain instead.
export DEVELOPER_DIR=/Library/Developer/CommandLineTools
else
export DEVELOPER_DIR=/Applications/Xcode9.app/Contents/Developer
if [ -z "${IN_CIRCLECI}" ]; then
if [[ "${JOB_BASE_NAME}" == *cuda9.2* ]]; then
# Eigen gives "explicit specialization of class must precede its first use" error
# when compiling with Xcode 9.1 toolchain, so we have to use Xcode 8.2 toolchain instead.
export DEVELOPER_DIR=/Library/Developer/CommandLineTools
else
export DEVELOPER_DIR=/Applications/Xcode9.app/Contents/Developer
fi
fi
export MACOSX_DEPLOYMENT_TARGET=10.9
export CXX=clang++
@@ -38,9 +42,11 @@ export MAX_JOBS=2
export IMAGE_COMMIT_TAG=${BUILD_ENVIRONMENT}-${IMAGE_COMMIT_ID}

# Download torch binaries in the test jobs
rm -rf ${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages/torch*
aws s3 cp s3://ossci-macos-build/pytorch/${IMAGE_COMMIT_TAG}.7z ${IMAGE_COMMIT_TAG}.7z
7z x ${IMAGE_COMMIT_TAG}.7z -o"${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages"
if [ -z "${IN_CIRCLECI}" ]; then
rm -rf ${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages/torch*
aws s3 cp s3://ossci-macos-build/pytorch/${IMAGE_COMMIT_TAG}.7z ${IMAGE_COMMIT_TAG}.7z
7z x ${IMAGE_COMMIT_TAG}.7z -o"${PYTORCH_ENV_DIR}/miniconda3/lib/python3.6/site-packages"
fi

test_python_all() {
echo "Ninja version: $(ninja --version)"
17 changes: 17 additions & 0 deletions .jenkins/pytorch/multigpu-test.sh
@@ -8,4 +8,21 @@ COMPACT_JOB_NAME="${BUILD_ENVIRONMENT}-multigpu-test"
source "$(dirname "${BASH_SOURCE[0]}")/common.sh"

echo "Testing pytorch (distributed only)"

if [ -n "${IN_CIRCLECI}" ]; then
if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install -y --allow-downgrades --allow-change-held-packages libnccl-dev=2.2.13-1+cuda9.0 libnccl2=2.2.13-1+cuda9.0
fi

if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda8-* ]] || [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-cudnn7-py2* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install -y --allow-downgrades --allow-change-held-packages openmpi-bin libopenmpi-dev
sudo apt-get install -y --no-install-recommends openssh-client openssh-server
sudo mkdir -p /var/run/sshd
fi
fi

time python test/run_test.py --verbose -i distributed
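The apt-get setup added above runs only when IN_CIRCLECI is set, presumably because the Jenkins images are provisioned with these packages elsewhere while the new CircleCI jobs need them installed at test time. The gating pattern in isolation — the `maybe_provision` function and its messages are illustrative only:

```shell
#!/bin/sh
# Run CI-only provisioning when IN_CIRCLECI is set to any non-empty value.
maybe_provision() {
  if [ -n "${IN_CIRCLECI}" ]; then
    echo "installing CI-only dependencies"
  else
    echo "skipping: already provisioned in the image"
  fi
}

IN_CIRCLECI=1
maybe_provision    # prints "installing CI-only dependencies"
```

Keying on the variable being non-empty (rather than a specific value) means any truthy-looking assignment in the CircleCI config enables the branch.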
16 changes: 16 additions & 0 deletions .jenkins/pytorch/test.sh
@@ -9,6 +9,22 @@ source "$(dirname "${BASH_SOURCE[0]}")/common.sh"

echo "Testing pytorch"

if [ -n "${IN_CIRCLECI}" ]; then
if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install -y --allow-downgrades --allow-change-held-packages libnccl-dev=2.2.13-1+cuda9.0 libnccl2=2.2.13-1+cuda9.0
fi

if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda8-* ]] || [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-cudnn7-py2* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install -y --allow-downgrades --allow-change-held-packages openmpi-bin libopenmpi-dev
sudo apt-get install -y --no-install-recommends openssh-client openssh-server
sudo mkdir -p /var/run/sshd
fi
fi

# JIT C++ extensions require ninja.
git clone https://github.com/ninja-build/ninja --quiet
pushd ninja
3 changes: 3 additions & 0 deletions .travis.yml
@@ -26,3 +26,6 @@ matrix:
python: "3.6"
install: pip install mypy mypy-extensions
script: mypy @mypy-files.txt
- env: CPP_DOC_CHECK
install: sudo apt-get install -y doxygen
script: cd docs/cpp && ./check-doxygen.sh
23 changes: 11 additions & 12 deletions CMakeLists.txt
@@ -82,14 +82,11 @@ cmake_dependent_option(
option(USE_FFMPEG "Use ffmpeg" OFF)
option(USE_GFLAGS "Use GFLAGS" ON)
option(USE_GLOG "Use GLOG" ON)
option(USE_GLOO "Use Gloo" ON)
option(USE_GLOO_IBVERBS "Use Gloo IB verbs for distributed support" OFF)
option(USE_LEVELDB "Use LEVELDB" ON)
option(USE_LITE_PROTO "Use lite protobuf instead of full." OFF)
option(USE_LMDB "Use LMDB" ON)
option(USE_METAL "Use Metal for iOS build" ON)
option(USE_MOBILE_OPENGL "Use OpenGL for mobile code" ON)
option(USE_MPI "Use MPI" ON)
option(USE_NATIVE_ARCH "Use -march=native" OFF)
option(USE_NCCL "Use NCCL" ON)
option(USE_SYSTEM_NCCL "Use system-wide NCCL" OFF)
@@ -116,7 +113,16 @@ option(USE_ZSTD "Use ZSTD" OFF)
option(USE_MKLDNN "Use MKLDNN" OFF)
option(USE_IDEEP "Use IDEEP interface in MKL BLAS" ON)
option(USE_MKLML "Use MKLML interface in MKL BLAS" ON)
option(USE_DISTRIBUTED "Use THD (distributed)" OFF)
option(USE_DISTRIBUTED "Use distributed" ON)
cmake_dependent_option(
USE_MPI "Use MPI. Only available if USE_DISTRIBUTED is on." ON
"USE_DISTRIBUTED" OFF)
cmake_dependent_option(
USE_GLOO "Use Gloo. Only available if USE_DISTRIBUTED is on." ON
"USE_DISTRIBUTED" OFF)
cmake_dependent_option(
USE_GLOO_IBVERBS "Use Gloo IB verbs for distributed. Only available if USE_GLOO is on." OFF
"USE_GLOO" OFF)

# Used when building Caffe2 through setup.py
option(BUILDING_WITH_TORCH_LIBS "Tell cmake if Caffe2 is being built alongside torch libs" OFF)
@@ -378,6 +384,7 @@ if (BUILD_SHARED_LIBS)
${PROJECT_SOURCE_DIR}/cmake/public/cuda.cmake
${PROJECT_SOURCE_DIR}/cmake/public/glog.cmake
${PROJECT_SOURCE_DIR}/cmake/public/gflags.cmake
${PROJECT_SOURCE_DIR}/cmake/public/mkl.cmake
${PROJECT_SOURCE_DIR}/cmake/public/protobuf.cmake
${PROJECT_SOURCE_DIR}/cmake/public/threads.cmake
${PROJECT_SOURCE_DIR}/cmake/public/utils.cmake
@@ -397,24 +404,16 @@ else()
endif()

# ---[ Modules
# TODO(orionr): Enable all of this for Windows DLL when we
# can figure out how to get it to build
if (NOT (MSVC AND BUILD_SHARED_LIBS))
add_subdirectory(modules)
endif()

# ---[ Binaries
# Binaries will be built after the Caffe2 main libraries and the modules
# are built. For the binaries, they will be linked to the Caffe2 main
# libraries, as well as all the modules that are built with Caffe2 (the ones
# built in the previous Modules section above).
# TODO(orionr): Enable all of this for Windows DLL when we
# can figure out how to get it to build
if (NOT (MSVC AND BUILD_SHARED_LIBS))
if (BUILD_BINARY)
add_subdirectory(binaries)
endif()
endif()

include(cmake/Summary.cmake)
caffe2_print_configuration_summary()
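The CMakeLists.txt change above turns USE_MPI, USE_GLOO, and USE_GLOO_IBVERBS from plain option() calls into cmake_dependent_option() calls, so each is only offered while its parent switch (USE_DISTRIBUTED, or USE_GLOO for the IB-verbs flag) is enabled. A minimal illustration of how cmake_dependent_option behaves, using hypothetical option names rather than the real ones:

```cmake
include(CMakeDependentOption)

option(USE_FEATURE "Enable the feature" ON)

# USE_FEATURE_EXTRA defaults to ON while USE_FEATURE is ON; when USE_FEATURE
# is OFF, the option is hidden from the cache UI and forced to the fallback
# value OFF, so dependent code never sees an impossible combination.
cmake_dependent_option(
    USE_FEATURE_EXTRA "Extra functionality. Requires USE_FEATURE." ON
    "USE_FEATURE" OFF)
```

This is why the PR can flip USE_DISTRIBUTED's default to ON without re-documenting each sub-flag: turning distributed off automatically disables MPI and Gloo as well.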
1 change: 1 addition & 0 deletions CODEOWNERS
@@ -4,6 +4,7 @@
/aten/ @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/torch/ @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/docs/source @apaszke @soumith @colesbury @gchanan @zdevito @ezyang @ssnl @zou3519
/docs/cpp @goldsborough @ebetica @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/test @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/tools @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/README.md @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
12 changes: 12 additions & 0 deletions CONTRIBUTING.md
@@ -104,6 +104,18 @@ PyTorch uses [Google style](http://sphinxcontrib-napoleon.readthedocs.io/en/late
for formatting docstrings. Length of line inside docstrings block must be limited to 80 characters to
fit into Jupyter documentation popups.

For C++ documentation (https://pytorch.org/cppdocs), we use
[Doxygen](http://www.doxygen.nl/) and then convert it to
[Sphinx](http://www.sphinx-doc.org/) via
[Breathe](https://github.com/michaeljones/breathe) and
[Exhale](https://github.com/svenevs/exhale). Check the [Doxygen
reference](http://www.stack.nl/~dimitri/doxygen/manual/index.html) for more
information on the documentation syntax. To build the documentation locally,
`cd` into `docs/cpp` and then `make html`.

We run Doxygen in CI (Travis) to verify that you do not use invalid Doxygen
commands. To run this check locally, run `./check-doxygen.sh` from inside
`docs/cpp`.

## Managing multiple build trees

2 changes: 1 addition & 1 deletion aten/src/ATen/ATen.h
@@ -16,7 +16,7 @@
#include "ATen/OptionsGuard.h"
#include "ATen/core/Scalar.h"
#include "ATen/ScalarOps.h"
#include "ATen/Storage.h"
#include "ATen/core/Storage.h"
#include "ATen/Tensor.h"
#include "ATen/TensorGeometry.h"
#include "ATen/TensorMethods.h"