
sync w/ upstream #8


Merged: 149 commits, Jun 22, 2018
Commits
2ab4c9d
DEPRECATED -> AT_DEPRECATED (#8496)
goldsborough Jun 14, 2018
ae55865
Migrated hardshrink() to ATen and deprecated nn.Hardshrink() (#8117)
weiyangfb Jun 14, 2018
6287b80
[auto] Update onnx to 3ca20e6 - Remove obsolete installation doc. (on…
onnxbot Jun 14, 2018
edc3000
Move empty size logic from ATen into TH/THC. (#8468)
gchanan Jun 14, 2018
6869a5f
Throw error on 0-length tensor slicing (#7775)
li-roy Jun 14, 2018
34c9d16
[JIT] End-to-end example-based robustness testing for hybrid frontend…
Jun 14, 2018
544605d
[JIT] Remove TK_WHERE (#8536)
Jun 14, 2018
54c456d
Improve win-build.sh for Windows local build (#8493)
yf225 Jun 15, 2018
302408e
Support BatchNormalization opset 7 (#8482)
houseroad Jun 15, 2018
848873e
Must run apt-get install as sudo. (#8454)
ezyang Jun 15, 2018
829bcf3
Don't apply PR 12 to Thrust anymore. (#8542)
ezyang Jun 15, 2018
3a1265c
[auto] Update onnx to 578a439 - Add Node Test for InstanceNormalizati…
onnxbot Jun 15, 2018
a8bf30d
caffe2 hip python binding (#8491)
bddppq Jun 15, 2018
55de546
[auto] Update onnx to c647994 - fix upper-bound for local-region in l…
onnxbot Jun 15, 2018
677739c
Fix createZerosLike for scalars (#8537)
Jun 15, 2018
5a31f73
[auto] Update onnx to b70ee6a - Make RNN/LSTM/GRU treatment of recurr…
onnxbot Jun 15, 2018
4e3ada1
[auto] Update onnx to d9fc1b1 - Add Node test for BatchNormalization …
onnxbot Jun 15, 2018
0965e8e
[auto] Update onnx to 0125af3 - Add node test for Dropout (onnx/onnx#…
onnxbot Jun 15, 2018
7251d70
fixed THD NO_CUDA (#8539)
soumith Jun 15, 2018
b002aee
Disable verbose logging for PyTorch ROCm nightly builds. (#8517)
Jorghi12 Jun 15, 2018
d769074
Fix the formula of some norms (#8545)
houseroad Jun 15, 2018
dc186cc
Remove NO_* and WITH_* across codebase, except in setup.py (#8555)
soumith Jun 15, 2018
ec23ee6
add order switch op to nomnigraph (#8436)
seravee Jun 15, 2018
c457fc9
Adding pyyaml to Ubuntu and Centos docker images (#8490)
pjh5 Jun 15, 2018
c537fd7
fix lint (#8567)
ssnl Jun 15, 2018
711e5a6
Port THS to ATen. (#8409)
ezyang Jun 15, 2018
d968614
Enable open registration of VariableType objects (#8540)
zdevito Jun 15, 2018
b10c94b
Update operator documentation with markdown descriptions and interfac…
MatthewInkawhich Jun 15, 2018
682dec2
add relu to jit and exp to autodiff (#8573)
Jun 15, 2018
7b2ad88
Eliminates noisy assert spew when running test_cuda.py (#8531)
mruberry Jun 15, 2018
26bed6d
assert limit on cudnn grid_sampler (#8576)
li-roy Jun 16, 2018
92f67d9
fix lint
soumith Jun 16, 2018
c9b8d85
Added flip() fn in ATen (CPU + CUDA) (#7873)
weiyangfb Jun 16, 2018
372d1d6
Create ATen tensors via TensorOptions (#7869)
goldsborough Jun 16, 2018
0ae8b6c
add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc…
t-vi Jun 18, 2018
d813ffc
Dont show Python frames in backtrace (#8579)
goldsborough Jun 18, 2018
c1d04c7
Implement non-legacy TH/THC resize, with pseudo 0-sized dimension sup…
gchanan Jun 18, 2018
88db4c8
Disable flaky Chaining tests (#8601)
ezyang Jun 18, 2018
e62c3a4
[Caffe2] Make cmake find current Python first (#8569)
pjh5 Jun 18, 2018
2039c7a
Fix test_rnn_args_check (#8606)
zou3519 Jun 18, 2018
ae25737
Add kwarg support to test_autograd and stop using deprecated schema f…
Jun 18, 2018
90532d5
Don't use MKL VML for log2 if below MKL build 20180406 (#8614)
cpuhrsch Jun 18, 2018
0a5fe55
[auto] Update onnx to 53edd9e - Exclude Random Generator from Test Co…
onnxbot Jun 18, 2018
11ea817
Remove all resizeLegacy calls, except for catArray. (#8616)
gchanan Jun 18, 2018
e4f2542
apt update before installing nccl2 (#8624)
ezyang Jun 18, 2018
525aa74
Improve check for addmm in autodiff (#8575)
Jun 18, 2018
a7bf539
[JIT] add missing check for excluding tensor method tests (#8617)
Jun 18, 2018
10961a5
Add OpenMPI for MPI tests. (#8625)
ezyang Jun 18, 2018
4f37a64
Fix DeviceGuard usage in THD (#8622)
goldsborough Jun 18, 2018
05c473b
Temporarily remove TBB (#8255)
cpuhrsch Jun 18, 2018
c44c95f
New operator 'expand' (#8263)
zrphercule Jun 18, 2018
6307c11
Fix const type qualifier warning (#8613)
bddppq Jun 18, 2018
2289815
Make CI green again (#8631)
teng-li Jun 19, 2018
d365158
Simplify pthreadpool implementation on top of Caffe2 thread pool (#7666)
Maratyszcza Jun 19, 2018
271406f
[C++ API] Make pImpl easy to use in modules to enable happy reference…
goldsborough Jun 19, 2018
f14887a
check for exact shape match before loading (#8619)
ailzhang Jun 19, 2018
1ac1a9d
update doc for comparison operators (#8636)
ailzhang Jun 19, 2018
32bc28d
caffe2 export (#8642)
kittipatv Jun 19, 2018
5f64484
update to avoid potential duplicate error msg (#8638)
ailzhang Jun 19, 2018
9a9eada
explicitly check device for grid_sampler (fixes: #8599) (#8646)
t-vi Jun 19, 2018
b8b051c
change avg_pool2/3d count_include_pad default to what it is in the do…
t-vi Jun 19, 2018
c80a703
Add CODEOWNERS entry for third_party to track changes (#8654)
orionr Jun 19, 2018
65f7797
typo corrected (#8632)
zrphercule Jun 19, 2018
6cc7670
Port all indirect calls of resizeNdLegacy to resizeNd. (#8603)
gchanan Jun 19, 2018
7ccecbb
Create Tensor::options (#8630)
goldsborough Jun 19, 2018
a2dd707
[C++ API] Create fixed width dtypes in torch:: namespace (#8639)
goldsborough Jun 19, 2018
5ca4f5b
[JIT] Remove dead functions (#8658)
Jun 19, 2018
61c9681
[c10d] NCCL python binding and CI test, with bug fixes (#8357)
teng-li Jun 19, 2018
a60540e
Make NCCL build select NVCC_GENCODE smarter (#8615)
ssnl Jun 19, 2018
2bf8b70
Fix broadcast copying device[0] tensor when not using NCCL (#8222)
ssnl Jun 19, 2018
03f7289
Add CAFFE2_USE_CUDNN guard on context_gpu.cu (#8657)
Yangqing Jun 19, 2018
7a048cd
Vectorize non-contiguous unary operations (#8488)
cpuhrsch Jun 19, 2018
d3b690e
TensorTypeId (#8389)
smessmer Jun 19, 2018
4608aa3
Setup wrappers to get vectorized version of mean (#8618)
cpuhrsch Jun 19, 2018
66e8ecf
16bit typeid (#8534)
smessmer Jun 19, 2018
d46312f
Create at::from_blob (#8640)
goldsborough Jun 20, 2018
637dcdc
Remove dangling inclusion path (#8671)
Yangqing Jun 20, 2018
8e4fe5d
Fix serialization for Parameters (#8633)
li-roy Jun 20, 2018
7fa81d6
Use parallel if get_num_threads 0 (#8677)
cpuhrsch Jun 20, 2018
be3e3f2
don't do unnecessary copies for bernoulli_ (#8682)
Jun 20, 2018
6402a42
Improve win-build.sh for local build (#8674)
yf225 Jun 20, 2018
695fd98
Compatibility: write nDimension/_nDimension corresponding to dim()/_d…
gchanan Jun 20, 2018
0e0031e
Fix build error in pybind_state_ideep (#8684)
gujinghui Jun 20, 2018
3da2731
Export ProcessGroupGloo options to Python (#8664)
pietern Jun 20, 2018
61b863c
Fix parsing of floating point defaults in python_arg_parser (#8681)
li-roy Jun 20, 2018
d97c9dd
Add a warning in gradcheck if inputs precision < float64 (#8663)
vishwakftw Jun 20, 2018
065fdbd
Created Tensor::to functions (#8643)
goldsborough Jun 20, 2018
cc6b046
Implement flatten function (#8578)
li-roy Jun 20, 2018
b6af5d4
Some 0-sized dimension support, port catArray away from resizeLegacy.…
gchanan Jun 20, 2018
b492d10
fix formatting in :math: in fold docstring (#8696)
JackLangerman Jun 20, 2018
08c1770
Add owner rule for cpp_extension.py (#8700)
goldsborough Jun 20, 2018
b4cd9f2
Clarify mp note about sharing a tensor's grad field. (#8688)
zou3519 Jun 20, 2018
9335885
Create at::tensor (#8475)
goldsborough Jun 20, 2018
d6c873a
Shard test_nn to reduce runtime for each test target (#8678)
yf225 Jun 20, 2018
73ce21a
Create captured inputs recursively for loop to resolve loop-carried d…
wanchaol Jun 20, 2018
3e25b4a
Fix #8692 (#8699)
vishwakftw Jun 20, 2018
5642937
more formatting (#8701)
JackLangerman Jun 20, 2018
f9da3aa
[auto] Update onnx to b1571d8 - ONNXIFI loader library (onnx/onnx#556)
onnxbot Jun 20, 2018
8546815
Implement OpSchema and a default DispatchKey (#8662)
smessmer Jun 20, 2018
0acddd6
Add torch.cuda.cudnn_is_available (#8703)
goldsborough Jun 20, 2018
48e90e3
Build system changes (#8627)
anderspapitto Jun 20, 2018
544690b
Update rnn.py (#8705)
anderspapitto Jun 20, 2018
17784d2
Make at::tensor faster (#8709)
goldsborough Jun 20, 2018
d00c79f
Improve cudnn RNN backward error message in eval mode (#8706)
ssnl Jun 20, 2018
8029296
[JIT] Improve test coverage for ErrorReport instances (#8668)
Jun 20, 2018
1e570fa
Add c10d/Def.hpp placeholder (#8711)
pietern Jun 20, 2018
f037d39
Support n-dimensional empty tensors in (most of) THNN. (#8702)
gchanan Jun 20, 2018
d79711d
[auto] Update onnx to 068f1a4 - Optimization pass to fuse batch norma…
onnxbot Jun 20, 2018
6181979
[auto] Update onnx to 7558954 - Use cmath instead of math.h (onnx/onn…
onnxbot Jun 21, 2018
35e66ef
Don't set HIP flags on non-HIP build. (#8728)
ezyang Jun 21, 2018
4f604a4
Export tensor descriptor (#8313)
bstriner Jun 21, 2018
bbd71a7
[auto] Update onnx to 9b9f595 - Make axis optional (onnx/onnx#1128)
onnxbot Jun 21, 2018
ac068fd
Use env var to pass sharding options to test_nn.py (#8727)
yf225 Jun 21, 2018
9dffaf5
ROCm 1.8.2 does not define CUBLAS_STATUS_ARCH_MISMATCH (#8732)
ezyang Jun 21, 2018
9b46531
Support n-dimensional empty tensors in more of TH/THC. (#8726)
gchanan Jun 21, 2018
c0dfe23
Support n-dimensional empty tensors in (most of) THCUNN. (#8722)
gchanan Jun 21, 2018
98a7d84
Link to C++ extensions in README.md (#8737)
goldsborough Jun 21, 2018
117b77e
Install vim by default on all Caffe2 docker images. (#8731)
ezyang Jun 21, 2018
b300934
Add CUDA 9.2 + GCC 7 build and test to CI (#8592)
yf225 Jun 21, 2018
e07a49e
Set DEBUG=1 in trusty-py3.6-gcc5.4 CI build (#8593)
yf225 Jun 21, 2018
40262ca
Disable flaky test_lstm_fusion_cpu test (#8747)
ezyang Jun 21, 2018
c8cc246
[JIT] Tests for calling between different frontend modes (#8704)
Jun 21, 2018
be3d65a
i2h<->h2h in gif (#8750)
ssnl Jun 21, 2018
3de45f3
Add ssnl and zou3519 as pytorch doc owner (#8754)
ssnl Jun 21, 2018
8489c4c
Better support for literals in jit script (#8687)
zou3519 Jun 21, 2018
41c08fe
Add tools/shared/_utils_internal.py to gitignore (#8756)
goldsborough Jun 21, 2018
2bb7e48
Define conversions and operations on at::Half (#8660)
colesbury Jun 21, 2018
709c300
[c10d] Configurable number of algorithm entries per key (#8765)
pietern Jun 21, 2018
dc5837a
[JIT] Adds fp16 support to the jit (#8679)
mruberry Jun 21, 2018
54a2e81
[auto] Update onnx to bc986de - Add is_compatible method in python ba…
onnxbot Jun 22, 2018
0750967
Adjust nested parallelization to deal with OMP (#8723)
cpuhrsch Jun 22, 2018
fd32cc6
Disable sccache when building NCCL (#8708)
yf225 Jun 22, 2018
53c0de5
Document ideal vs actual SparseTensorImpl invariants. (#8776)
gchanan Jun 22, 2018
bd95f8f
Resolve name conflict of ContextManager (#7244)
xush6528 Jun 22, 2018
83f846f
[auto] Update onnx to 410530e - Make test suite backward compatible (…
onnxbot Jun 22, 2018
9c42679
Expose is_compatible function (#8783)
houseroad Jun 22, 2018
5a7b484
Move nanopb-generated ONNX to unique file name (#8773)
orionr Jun 22, 2018
ce13ca2
added default lambd=0.5 for hardshrink (#8770)
weiyangfb Jun 22, 2018
fed44cb
Remove aten project for main build (#8532)
orionr Jun 22, 2018
e6c7b38
Cache cufft plans (#8344)
ssnl Jun 22, 2018
b1b77c9
Use virtual dtor for Annotation (#8780)
Jun 22, 2018
ddda7cf
allow output_size to contain None in adaptive pooling methods (#8596)
ailzhang Jun 22, 2018
f138111
remove unused flag (#8779)
li-roy Jun 22, 2018
d3ec956
Revert "ROCm 1.8.2 does not define CUBLAS_STATUS_ARCH_MISMATCH (#8732…
ezyang Jun 22, 2018
675b579
cmake wrapper (#8797)
ssnl Jun 22, 2018
1d4cf09
Add CUDA to logspace and linspace declarations in Declarations.cwrap …
vishwakftw Jun 22, 2018
73b9247
[README.md] Use GitLab URL for CMake (#8799)
ssnl Jun 22, 2018
46bff5d
Set MKL VML error mode to ignore (#8800)
cpuhrsch Jun 22, 2018
e1d6d1f
Merge remote-tracking branch 'upstream/master'
iotamudelta Jun 22, 2018
73 changes: 37 additions & 36 deletions .gitignore
@@ -8,58 +8,59 @@

## PyTorch

build/
dist/
torch.egg-info/
.mypy_cache
*/*.pyc
*/*.so*
*/**/__pycache__
*/**/*.dylib*
*/**/*.pyc
*/**/*.pyd
*/**/*.so*
*/**/**/*.pyc
*/**/**/**/*.pyc
*/**/**/**/**/*.pyc
aten/build/
aten/src/ATen/Config.h
aten/src/ATen/cuda/CUDAConfig.h
build/
dist/
docs/src/**/*
test/.coverage
test/cpp/api/mnist
test/data/gpu_tensors.pt
test/data/legacy_modules.t7
test/data/legacy_serialized.pt
test/data/linear.pt
test/htmlcov
third_party/build/
torch/version.py
tools/shared/_utils_internal.py
torch.egg-info/
torch/csrc/autograd/generated/*
torch/csrc/cudnn/cuDNN.cpp
torch/csrc/generated
torch/csrc/generic/TensorMethods.cpp
torch/lib/*.so*
torch/csrc/jit/generated/*
torch/csrc/nn/THCUNN.cpp
torch/csrc/nn/THCUNN.cwrap
torch/csrc/nn/THNN_generic.cpp
torch/csrc/nn/THNN_generic.cwrap
torch/csrc/nn/THNN_generic.h
torch/csrc/nn/THNN.cpp
torch/csrc/nn/THNN.cwrap
torch/lib/*.a*
torch/lib/*.dll*
torch/lib/*.lib
torch/lib/*.dylib*
torch/lib/*.h
torch/lib/*.lib
torch/lib/*.so*
torch/lib/build
torch/lib/cmake
torch/lib/include
torch/lib/pkgconfig
torch/lib/protoc
torch/lib/tmp_install
torch/lib/include
torch/lib/torch_shm_manager
torch/csrc/jit/generated/*
torch/csrc/autograd/generated/*
torch/csrc/cudnn/cuDNN.cpp
torch/csrc/nn/THNN.cwrap
torch/csrc/nn/THNN.cpp
torch/csrc/nn/THCUNN.cwrap
torch/csrc/nn/THCUNN.cpp
torch/csrc/nn/THNN_generic.cwrap
torch/csrc/nn/THNN_generic.cpp
torch/csrc/nn/THNN_generic.h
torch/csrc/generated
docs/src/**/*
test/data/legacy_modules.t7
test/data/gpu_tensors.pt
test/htmlcov
test/.coverage
*/*.pyc
*/**/*.pyc
*/**/**/*.pyc
*/**/**/**/*.pyc
*/**/**/**/**/*.pyc
*/*.so*
*/**/*.so*
*/**/*.dylib*
*/**/*.pyd
test/data/legacy_serialized.pt
test/data/linear.pt
.mypy_cache
test/cpp/api/mnist
torch/version.py

# IPython notebook checkpoints
.ipynb_checkpoints
4 changes: 0 additions & 4 deletions .gitmodules
@@ -1,7 +1,3 @@
[submodule "third_party/tbb"]
path = third_party/tbb
url = https://github.com/01org/tbb
branch = tbb_2018
[submodule "third_party/catch"]
path = third_party/catch
url = https://github.com/catchorg/Catch2.git
17 changes: 12 additions & 5 deletions .jenkins/pytorch/build.sh
@@ -4,8 +4,11 @@ if [[ "$BUILD_ENVIRONMENT" == "pytorch-linux-xenial-py3-clang5-asan" ]]; then
exec "$(dirname "${BASH_SOURCE[0]}")/build-asan.sh" $*
fi

# Add nccl2 for distributed test.
apt-get install libnccl-dev libnccl2
# TODO: move this to Docker
# TODO: add both NCCL and MPI in CI test by fixing these test first
# sudo apt-get update
# sudo apt-get install libnccl-dev libnccl2
# sudo apt-get install openmpi-bin libopenmpi-dev

# Required environment variable: $BUILD_ENVIRONMENT
# (This is set by default in the Docker images we build, so you don't
@@ -36,7 +39,7 @@ if [[ "$BUILD_ENVIRONMENT" == *rocm* ]]; then
sudo chown -R jenkins:jenkins /usr/local
rm -rf "$(dirname "${BASH_SOURCE[0]}")/../../../pytorch_amd/" || true
python "$(dirname "${BASH_SOURCE[0]}")/../../tools/amd_build/build_pytorch_amd.py"
HIPCC_VERBOSE=7 VERBOSE=1 WITH_ROCM=1 python setup.py install
USE_ROCM=1 python setup.py install
exit
fi

@@ -46,14 +49,18 @@ if ! which conda; then
fi

# sccache will fail for CUDA builds if all cores are used for compiling
# gcc 7.2 with sccache seems to have intermittent OOM issue if all cores are used
if ([[ "$BUILD_ENVIRONMENT" == *cuda* ]] || [[ "$BUILD_ENVIRONMENT" == *gcc7.2* ]]) && which sccache > /dev/null; then
# gcc 7 with sccache seems to have intermittent OOM issue if all cores are used
if ([[ "$BUILD_ENVIRONMENT" == *cuda* ]] || [[ "$BUILD_ENVIRONMENT" == *gcc7* ]]) && which sccache > /dev/null; then
export MAX_JOBS=`expr $(nproc) - 1`
fi

# Target only our CI GPU machine's CUDA arch to speed up the build
export TORCH_CUDA_ARCH_LIST=5.2

if [[ "$BUILD_ENVIRONMENT" == *trusty-py3.6-gcc5.4* ]]; then
export DEBUG=1
fi

WERROR=1 python setup.py install

# Add the test binaries so that they won't be git clean'ed away
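The MAX_JOBS arithmetic in the build.sh hunk above can be sketched in isolation. This is an illustrative sketch, not code from the PR: `nproc` is stubbed to a fixed value so the example is deterministic; the real script calls the coreutils `nproc`.

```shell
#!/bin/sh
# Sketch of the MAX_JOBS logic above: leave one core free so CUDA/gcc7
# builds under sccache don't hit intermittent OOM.
nproc() { echo 8; }              # stub for determinism; not part of the PR
MAX_JOBS=$(expr "$(nproc)" - 1)  # same expr form the CI script uses
echo "$MAX_JOBS"                 # → 7
```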
2 changes: 1 addition & 1 deletion .jenkins/pytorch/common.sh
@@ -113,7 +113,7 @@ else
fi

if [[ "$BUILD_ENVIRONMENT" == *pytorch-linux-xenial-cuda9-cudnn7-py3 ]] || \
[[ "$BUILD_ENVIRONMENT" == *pytorch-linux-trusty-py3.6-gcc7.2 ]]; then
[[ "$BUILD_ENVIRONMENT" == *pytorch-linux-trusty-py3.6-gcc7* ]]; then
BUILD_TEST_LIBTORCH=1
else
BUILD_TEST_LIBTORCH=0
4 changes: 4 additions & 0 deletions .jenkins/pytorch/enabled-configs.txt
@@ -12,6 +12,8 @@ pytorch-linux-xenial-cuda9-cudnn7-py2-build
pytorch-linux-xenial-cuda9-cudnn7-py2-test
pytorch-linux-xenial-cuda9-cudnn7-py3-build
pytorch-linux-xenial-cuda9-cudnn7-py3-test
pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7-build
pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7-test
pytorch-linux-xenial-py3-clang5-asan-build
pytorch-linux-xenial-py3-clang5-asan-test
pytorch-linux-trusty-py2.7.9-build
@@ -26,6 +28,8 @@ pytorch-linux-trusty-py3.6-gcc5.4-build
pytorch-linux-trusty-py3.6-gcc5.4-test
pytorch-linux-trusty-py3.6-gcc7.2-build
pytorch-linux-trusty-py3.6-gcc7.2-test
pytorch-linux-trusty-py3.6-gcc7-build
pytorch-linux-trusty-py3.6-gcc7-test
pytorch-linux-trusty-pynightly-build
pytorch-linux-trusty-pynightly-test
pytorch-win-ws2016-cuda9-cudnn7-py3-build
72 changes: 54 additions & 18 deletions .jenkins/pytorch/win-build.sh
@@ -1,6 +1,13 @@
#!/bin/bash

# If you want to rebuild, run this with REBUILD=1
# If you want to build with CUDA, run this with USE_CUDA=1
# If you want to build without CUDA, run this with USE_CUDA=0

if [ ! -f setup.py ]; then
echo "ERROR: Please run this build script from PyTorch root directory."
exit 1
fi

COMPACT_JOB_NAME=pytorch-win-ws2016-cuda9-cudnn7-py3-build
source "$(dirname "${BASH_SOURCE[0]}")/common.sh"
@@ -34,12 +41,26 @@ cat >ci_scripts/build_pytorch.bat <<EOL
set PATH=C:\\Program Files\\CMake\\bin;C:\\Program Files\\7-Zip;C:\\curl-7.57.0-win64-mingw\\bin;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Amazon\\AWSCLI;%PATH%

:: Install MKL
if "%REBUILD%"=="" ( aws s3 cp s3://ossci-windows/mkl_2018.2.185.7z mkl.7z --quiet && 7z x -aoa mkl.7z -omkl )
if "%REBUILD%"=="" (
if "%BUILD_ENVIRONMENT%"=="" (
curl -k https://s3.amazonaws.com/ossci-windows/mkl_2018.2.185.7z --output mkl.7z
) else (
aws s3 cp s3://ossci-windows/mkl_2018.2.185.7z mkl.7z --quiet
)
7z x -aoa mkl.7z -omkl
)
set CMAKE_INCLUDE_PATH=%cd%\\mkl\\include
set LIB=%cd%\\mkl\\lib;%LIB

:: Install MAGMA
if "%REBUILD%"=="" ( aws s3 cp s3://ossci-windows/magma_cuda90_release_mkl_2018.2.185.7z magma_cuda90_release_mkl_2018.2.185.7z --quiet && 7z x -aoa magma_cuda90_release_mkl_2018.2.185.7z -omagma )
if "%REBUILD%"=="" (
if "%BUILD_ENVIRONMENT%"=="" (
curl -k https://s3.amazonaws.com/ossci-windows/magma_cuda90_release_mkl_2018.2.185.7z --output magma_cuda90_release_mkl_2018.2.185.7z
) else (
aws s3 cp s3://ossci-windows/magma_cuda90_release_mkl_2018.2.185.7z magma_cuda90_release_mkl_2018.2.185.7z --quiet
)
7z x -aoa magma_cuda90_release_mkl_2018.2.185.7z -omagma
)
set MAGMA_HOME=%cd%\\magma

:: Install sccache
@@ -49,15 +70,19 @@ if "%REBUILD%"=="" (
%CD%\\tmp_bin\\sccache.exe --show-stats || (
taskkill /im sccache.exe /f /t || ver > nul
del %CD%\\tmp_bin\\sccache.exe
aws s3 cp s3://ossci-windows/sccache.exe %CD%\\tmp_bin\\sccache.exe
if "%BUILD_ENVIRONMENT%"=="" (
curl -k https://s3.amazonaws.com/ossci-windows/sccache.exe --output %CD%\\tmp_bin\\sccache.exe
) else (
aws s3 cp s3://ossci-windows/sccache.exe %CD%\\tmp_bin\\sccache.exe
)
goto :check_sccache
)
)

:: Install Miniconda3
if "%REBUILD%"=="" (
IF EXIST C:\\Jenkins\\Miniconda3 ( rd /s /q C:\\Jenkins\\Miniconda3 )
curl https://repo.continuum.io/miniconda/Miniconda3-latest-Windows-x86_64.exe -O
curl -k https://repo.continuum.io/miniconda/Miniconda3-latest-Windows-x86_64.exe -O
.\Miniconda3-latest-Windows-x86_64.exe /InstallationType=JustMe /RegisterPython=0 /S /AddToPath=0 /D=C:\\Jenkins\\Miniconda3
)
call C:\\Jenkins\\Miniconda3\\Scripts\\activate.bat C:\\Jenkins\\Miniconda3
@@ -91,24 +116,35 @@ set DISTUTILS_USE_SDK=1

set CMAKE_GENERATOR=Ninja

if "%REBUILD%"=="" (
set NO_CUDA=1
python setup.py install
)
if errorlevel 1 exit /b 1
if not errorlevel 0 exit /b 1
if "%REBUILD%"=="" (
sccache --show-stats
sccache --zero-stats
rd /s /q C:\\Jenkins\\Miniconda3\\Lib\\site-packages\\torch
copy %CD%\\tmp_bin\\sccache.exe tmp_bin\\nvcc.exe
if not "%USE_CUDA%"=="1" (
if "%REBUILD%"=="" (
set NO_CUDA=1
python setup.py install
)
if errorlevel 1 exit /b 1
if not errorlevel 0 exit /b 1
)

set CUDA_NVCC_EXECUTABLE=%CD%\\tmp_bin\\nvcc
if not "%USE_CUDA%"=="0" (
if "%REBUILD%"=="" (
sccache --show-stats
sccache --zero-stats
rd /s /q C:\\Jenkins\\Miniconda3\\Lib\\site-packages\\torch
copy %CD%\\tmp_bin\\sccache.exe tmp_bin\\nvcc.exe
)

set CUDA_NVCC_EXECUTABLE=%CD%\\tmp_bin\\nvcc

if "%REBUILD%"=="" set NO_CUDA=
if "%REBUILD%"=="" set NO_CUDA=0

python setup.py install && sccache --show-stats && 7z a %IMAGE_COMMIT_TAG%.7z C:\\Jenkins\\Miniconda3\\Lib\\site-packages\\torch && python ci_scripts\\upload_image.py %IMAGE_COMMIT_TAG%.7z
python setup.py install && sccache --show-stats && (
if "%BUILD_ENVIRONMENT%"=="" (
echo "NOTE: To run `import torch`, please make sure to activate the conda environment by running `call C:\\Jenkins\\Miniconda3\\Scripts\\activate.bat C:\\Jenkins\\Miniconda3` in Command Prompt before running Git Bash."
) else (
7z a %IMAGE_COMMIT_TAG%.7z C:\\Jenkins\\Miniconda3\\Lib\\site-packages\\torch && python ci_scripts\\upload_image.py %IMAGE_COMMIT_TAG%.7z
)
)
)

EOL

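The REBUILD/USE_CUDA convention added at the top of win-build.sh can be sketched as follows. This is a hedged illustration of the switch semantics visible in the batch hunks above (unset runs both build phases, 0 skips the CUDA phase, 1 skips the CPU-only phase); it is not code from the PR, and the phase names are placeholders.

```shell
#!/bin/sh
# Sketch of the USE_CUDA switch in win-build.sh above (not part of the PR):
# unset -> both phases, USE_CUDA=0 -> CPU only, USE_CUDA=1 -> CUDA only.
USE_CUDA="${USE_CUDA-}"     # default: unset/empty
PHASES=""
if [ "$USE_CUDA" != "1" ]; then
  PHASES="$PHASES cpu"      # mirrors the NO_CUDA=1 setup.py install pass
fi
if [ "$USE_CUDA" != "0" ]; then
  PHASES="$PHASES cuda"     # mirrors the full build with sccache-wrapped nvcc
fi
echo "phases:$PHASES"
```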
25 changes: 0 additions & 25 deletions CMakeLists.txt
@@ -138,31 +138,6 @@ cmake_dependent_option(
option(USE_DISTRIBUTED "Use THD (distributed)" OFF)
option(USE_DISTRIBUTED_MW "Use THD (distributed) master worker" OFF)

# Legacy options, which we will eventually remove
cmake_dependent_option(
WITH_CUDA "Legacy CUDA" ON
"USE_CUDA" OFF)
cmake_dependent_option(
NO_PYTHON "Legacy Python" OFF
"BUILD_PYTHON" ON)
cmake_dependent_option(
WITH_CUDNN "Legacy cuDNN" ON
"USE_CUDNN" OFF)
cmake_dependent_option(
WITH_NCCL "Legacy NCCL" ON
"USE_NCCL" OFF)
cmake_dependent_option(
NO_MKLDNN "Legacy no MKLDNN" OFF
"USE_MKLDNN" ON)
cmake_dependent_option(
WITH_DISTRIBUTED "Legacy THD (distributed)" ON
"USE_DISTRIBUTED" OFF)
cmake_dependent_option(
WITH_DISTRIBUTED_MW "Legacy THD (distributed) MW" ON
"USE_DISTRIBUTED_MW" OFF)
cmake_dependent_option(
WITH_GLOO_IBVERBS "Legacy Gloo IB verbs for distributed support" ON
"USE_GLOO_IBVERBS" OFF)
if (USE_ATEN)
set(BUILD_ATEN ${USE_ATEN})
endif()
4 changes: 3 additions & 1 deletion CODEOWNERS
@@ -3,7 +3,7 @@

/aten/ @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/torch/ @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/docs/source @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/docs/source @apaszke @soumith @colesbury @gchanan @zdevito @ezyang @ssnl @zou3519
/test @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/tools @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/README.md @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
@@ -20,3 +20,5 @@
/torch/csrc/distributed/ @apaszke @pietern @teng-li
/torch/distributed/ @apaszke @pietern @teng-li
/test/test_c10d.py @apaszke @pietern @teng-li
/third_party/ @orionr
/torch/utils/cpp_extension.py @goldsborough @fmassa @apaszke @soumith @ezyang
5 changes: 2 additions & 3 deletions README.md
@@ -141,9 +141,8 @@ and with minimal abstractions.
You can write new neural network layers in Python using the torch API
[or your favorite NumPy-based libraries such as SciPy](http://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).

If you want to write your layers in C/C++, we provide an extension API based on
[cffi](http://cffi.readthedocs.io/en/latest/) that is efficient and with minimal boilerplate.
There is no wrapper code that needs to be written. You can see [a tutorial here](http://pytorch.org/tutorials/advanced/c_extension.html) and [an example here](https://github.com/pytorch/extension-ffi).
If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.
There is no wrapper code that needs to be written. You can see [a tutorial here](http://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).


## Installation
15 changes: 1 addition & 14 deletions aten/CMakeLists.txt
@@ -4,6 +4,7 @@ if (CAFFE2_CMAKE_BUILDING_WITH_MAIN_REPO)
endif()
else()
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(ATen CXX C)
include(CMakeDependentOption)
option(USE_CUDA "Use CUDA" ON)
option(USE_ROCM "Use ROCm" OFF)
@@ -14,21 +15,10 @@
"USE_CUDA" OFF)
option(ATEN_NO_TEST "Do not build ATen test binaries" OFF)

# Legacy options, which we will eventually remove
cmake_dependent_option(
WITH_CUDNN "Legacy cuDNN" ON
"USE_CUDNN" OFF)
cmake_dependent_option(
NO_MKLDNN "Legacy no MKLDNN" OFF
"USE_MKLDNN" ON)

# Flag for shared dependencies
set(BUILD_ATEN ON)
endif()

# Create the project in all cases
project(ATen CXX C)

# Find modules
list(APPEND CMAKE_MODULE_PATH
/usr/lib/x86_64-linux-gnu/
@@ -39,9 +29,6 @@ list(APPEND CMAKE_LIBRARY_PATH /usr/lib/x86_64-linux-gnu/)

cmake_policy(SET CMP0012 NEW)

# Polyfill for upstream FindCUDA
include(CMakeInitializeConfigs)

#############################################

set(ATen_CPU_SRCS)