Merge from upstream #197


Merged — 249 commits, merged Sep 20, 2018.

Changes from all commits (249 commits):
2158f4a
add export import test to TestJitGenerated (#10982)
Sep 10, 2018
040d75d
Add option to use CUDA memory leak testing as a context manager (#11380)
zou3519 Sep 10, 2018
ce6906b
Narrowing Blob (#11167)
smessmer Sep 10, 2018
09292f2
Some improvements to IValue (#11238)
smessmer Sep 10, 2018
252f93d
Improve Tensor() constructor (#11258)
smessmer Sep 10, 2018
b0c1397
Fix intrusive_ptr move/copy for different NullType's (#11260)
smessmer Sep 10, 2018
198ade7
Remove manual refcounting from Tensor class (#11294)
smessmer Sep 10, 2018
ea0ee77
Fix katex math rendering (#11472)
ssnl Sep 10, 2018
18e5fd3
Normalize gradients before reduction in DistributedDataParallelC10d (…
Sep 10, 2018
35008e0
Add flags to fix half comparison and test (#11395)
goldsborough Sep 10, 2018
70d93f4
Check for maximum numel in NCCL broadcasting (#11466)
ssnl Sep 10, 2018
3e665cc
Improve support for tracing sizes, add more tracer warnings (#11288)
apaszke Sep 10, 2018
a0d4106
Integrate custom op tests with CI (#10611)
goldsborough Sep 10, 2018
0b78ae8
Cleanup byte swapping utilities to generate optimal code on the platf…
Sep 10, 2018
f2f43ad
Add new LengthsSplit operator (#10974)
oscarlight8 Sep 10, 2018
3ad67c6
Traceable explicit Variable instantiation (#11463)
Sep 10, 2018
3e49a69
Resolve ambiguity when including both caffe2 and aten registries (#11…
bwasti Sep 11, 2018
e1e6944
Lockdown NO_TEST=1 for tests even more (#11415)
orionr Sep 11, 2018
a175282
Flags for LMDB, LevelDB, and Caffe2 ops (#11462)
orionr Sep 11, 2018
9cfdf0d
Document the Embedding module (#11469)
goldsborough Sep 11, 2018
dd8defe
Document the Functional module (#11460)
goldsborough Sep 11, 2018
f9d12ee
Give copy an optional device argument.
gchanan Sep 11, 2018
b14a805
Ignore functional doc error
goldsborough Sep 11, 2018
0988bba
C10d release to torch.distributed for PT1 (#11405)
teng-li Sep 11, 2018
3d5fd12
Documentation for c10d: torch.distributed and deprecate the old distr…
teng-li Sep 11, 2018
f84693e
nomnigraph - Improvements to subgraph matching APIs (#11418)
duc0 Sep 11, 2018
0ddbe66
Improve shape analysis to cover all most commonly used ops (#11358)
apaszke Sep 11, 2018
120d769
Add support for tracing strings (#11506)
apaszke Sep 11, 2018
80fa8e1
Add .expand() method to distribution classes (#11341)
neerajprad Sep 11, 2018
86ab92b
Move TensorImpl / UndefinedTensor(Impl) to core (#11441)
gchanan Sep 11, 2018
de460c7
Improvements on conv/pool/fold/stft/ParamDict docs (#11106)
ssnl Sep 11, 2018
3185016
Remove separate ATen build target (#11488)
orionr Sep 11, 2018
4e8d9a4
Introducing python setup.py rebuild develop (#11487)
soumith Sep 11, 2018
deac304
Bugfix for basic slicing
Sep 11, 2018
d32b410
Copy protos on install same as develop (#11517)
orionr Sep 11, 2018
01c7542
Use -isystem for system includes in C++ extensions (#11459)
goldsborough Sep 11, 2018
f80f158
Get rid of manual dispatch on Type. (#11486)
ezyang Sep 11, 2018
727a445
New Serialization Proto
houseroad Sep 11, 2018
9800d0e
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 11, 2018
d09041b
Add an option to statically link cuda (#10596)
sf-wind Sep 11, 2018
a566bc2
Disable all CircleCI jobs (#11523)
Sep 11, 2018
781737f
Remove time prefix from rsync (#11525)
apaszke Sep 11, 2018
fbc1732
Update pybind11 to fix Python 3.7 support for script (#11473)
Sep 11, 2018
5952acc
Add "merge to master" step before build in CircleCI (#11443)
Sep 11, 2018
c56a7cf
More use of AT_CHECK and AT_ERROR (#11457)
vishwakftw Sep 11, 2018
c1dce21
Cuda TensorAccessor (#11373)
t-vi Sep 11, 2018
4db21a1
Optimize LengthsTileOp on GPU to run a kernel instead of a sequence o…
wesolwsk Sep 11, 2018
17776db
Add gtest dependency on aten tests. (#11429)
Yangqing Sep 11, 2018
289a8c9
Allow train/eval, and non-Tensor arguments to python functions (#11505)
zdevito Sep 11, 2018
3a8e39b
Support load and store between Py_complex and std::complex (#11493)
Sep 11, 2018
3a39006
Fix some more doc
ssnl Sep 11, 2018
b6b0b52
fix missing libnccl.so.1 error (#11553)
soumith Sep 12, 2018
8b196d6
Allow tracing random functions (only when using default generators) (…
apaszke Sep 12, 2018
cda74ac
fix nested no_grad decorator and with-statement (#11479)
weiyangfb Sep 12, 2018
bbf54ea
Ensure .enumerate_support() methods are jittable (#11542)
fritzo Sep 12, 2018
35d52db
re-enable USE_MPI (#11416)
Yangqing Sep 12, 2018
92fd69f
Split Type into TypeExtendedInterface and Type (#11520)
ezyang Sep 12, 2018
3121c8f
Update gtest and remove the macro guide on gtest from #11321 (#11417)
Yangqing Sep 12, 2018
d95fedb
Use ATen dropout implementation in Dropout module and add FeatureDrop…
goldsborough Sep 12, 2018
045f862
Use torch::nn::init::xavier_normal_
goldsborough Sep 12, 2018
54107ae
convert output_device at data_parallel from torch.device to index (#1…
weiyangfb Sep 12, 2018
35348da
WIP: Include note on cudnn determinism in each function backed by cud…
Sep 12, 2018
f4d9f39
Test libtorch on cuda
goldsborough Sep 12, 2018
b75c32d
link against TORCH_CUDA_LIBRARIES
anderspapitto Sep 12, 2018
8aa8ad8
WIP: Reproducibility note (#11329)
Sep 12, 2018
a11ebfa
Add explicit "this->" for nvcc. (#11196)
xkszltl Sep 12, 2018
1a246c9
guard spurious cudnn.h include (#11562)
soumith Sep 12, 2018
a00fa2c
Release GIL when calling into JIT interpreter
apaszke Sep 12, 2018
62c9d4a
Make .to() methods native functions (to fix JIT tracing)
apaszke Sep 12, 2018
90e31f4
Improve tracer warnings (#11545)
apaszke Sep 12, 2018
6dcdbd3
Make C10d support CPU only build (#11513)
teng-li Sep 12, 2018
3e3d8ca
Allow setting deletion constant
goldsborough Sep 12, 2018
6597779
Clean up some C++ cruftiness in the script lexer.
Sep 12, 2018
76070fe
Make c10d test work on CPU only build (#11567)
teng-li Sep 12, 2018
efc0f67
Move some bmm/baddbmm to ATen (#11292)
t-vi Sep 12, 2018
6fc18a7
Typo fix in randomness.rst (#11571)
Sep 12, 2018
f0a2845
Document BatchNorm and update default behavior (#11484)
goldsborough Sep 12, 2018
e5dd77c
Sync all libnccl soversions, not just libnccl.so.1 (#11575)
ezyang Sep 12, 2018
12f4c46
caffe2::StorageImpl use at::DataPtr (#11282)
cpuhrsch Sep 12, 2018
8845b53
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 12, 2018
6398d62
Warn that export+import module always load onto the CPU (#11485)
zou3519 Sep 12, 2018
23d5588
minor formatting error log (#11528)
Wakeupbuddy Sep 12, 2018
13b05c8
Add EndToEndHybridModel CUDA tests (#11544)
zou3519 Sep 12, 2018
17e76e2
Add trigonometry functions to docs/source/onnx.rst
zasdfgbnm Sep 12, 2018
ad7936e
Fix reloading modules back into python (#11552)
zdevito Sep 12, 2018
739e6af
Add reminder % to the jit
wanchaol Sep 12, 2018
9a7c196
Move Type, Tensor, TensorMethods to core.
gchanan Sep 12, 2018
a3036b3
Fused weightnorm for ATen (#10842)
definitelynotmcarilli Sep 12, 2018
504126e
Documentation for debugging JIT
Sep 12, 2018
f0a4400
Explicitly set locale on docs build. (#11595)
ezyang Sep 12, 2018
958ba4e
Aibench for asr decoder
lly-zero-one Sep 12, 2018
d4e05f4
Move function deletion from the stack to the heap. (#11534)
Sep 12, 2018
02c4cd3
Skip flaky distributed tests (#11594)
ssnl Sep 12, 2018
b663b7c
Update ROCm Docker image with latest AMD debians (#11507)
ezyang Sep 12, 2018
ac94889
Add jit doc entry to sidebar (#11598)
ssnl Sep 12, 2018
c81406c
Document Any (#11580)
goldsborough Sep 12, 2018
eb7a298
Add resnext model to OSS (#11468)
xw285cornell Sep 12, 2018
316c167
Add checking of nullptrs in GetTensorInfo (#11587)
Sep 12, 2018
12efef1
Split out copy_op from utility_ops (#11470)
3l1 Sep 12, 2018
130d55a
Allow building the C++ API without cereal (#11498)
goldsborough Sep 12, 2018
5b2efcf
Document the Conv module (#11566)
goldsborough Sep 12, 2018
def44c9
Revert D9779866: [pytorch][PR] Move function deletion from the stack …
ezyang Sep 12, 2018
776a999
topk test fix, hgemm integration (#11593)
iotamudelta Sep 12, 2018
7f7cda9
Optimize order_swich_ops on GPU (#11404)
xiaomengy Sep 12, 2018
e2cd627
Temporarily disable docs build. (#11608)
ezyang Sep 13, 2018
daa379f
Disable flaky test ObserverTest.TestMultipleNetBase (#11596)
ezyang Sep 13, 2018
f00f99e
use at::Half in THC (#11322)
Sep 13, 2018
5da0b31
More native docs on TensorOptions. (#11558)
ezyang Sep 13, 2018
0a6931c
Only reference ONNX through onnx_pb.h (#11609)
orionr Sep 13, 2018
17637f2
enable_mkl support for resnet18+lstm model
tbpangolin Sep 13, 2018
6f05b5e
Pin Sphinx to 1.7.9 (#11620)
ezyang Sep 13, 2018
e998038
Use TypeMeta instead of TypeIdentifier within at::StorageImpl (#11236)
cpuhrsch Sep 13, 2018
44b2b6b
clean up jit generated tests (#11403)
wanchaol Sep 13, 2018
cac11a4
Merge caffe2::/at::StorageImpl (#11543)
cpuhrsch Sep 13, 2018
77f6998
Guard against inputting or returning sparse tensors (#11550)
Sep 13, 2018
36fc1a0
Merge caffe2::/at::Storage
cpuhrsch Sep 13, 2018
57f149a
Only join pin_memory_thread after it started (#11599)
ssnl Sep 13, 2018
d4d72b8
Sphinx is case sensitive
ssnl Sep 13, 2018
1f49b87
Add missing include for __half (#11638)
ezyang Sep 13, 2018
d278344
Automatic update of fbcode/onnx to 39dd0d4fec5913aa517b71bcfcbf638a42…
houseroad Sep 13, 2018
a861573
fix tensor export bug in IR export (#11613)
Sep 13, 2018
5bc90b8
support conversion and dispatch of complex numbers (#11603)
Sep 13, 2018
ab3a2d2
Improve error messages when trying to use nested lists.
zdevito Sep 13, 2018
6f53b4e
Remove implicit bool casts (#11503)
Sep 13, 2018
9abc666
stop allowing extra positional args in arg parser (#10499)
Sep 13, 2018
45e9ee0
Fix test_mnist_training_leaks_no_memory_cuda warning (#11639)
zou3519 Sep 13, 2018
912d362
Split tensor.h into tensor_impl.h and tensor.h (#11642)
ezyang Sep 13, 2018
75f49be
move instance_norm to aten (#10792)
Sep 13, 2018
acb6f18
fix generate_code.py caching (#11644)
soumith Sep 13, 2018
0f1ca56
End-to-end dynamic slicing with ONNX DynamicSlice experimental operat…
Sep 13, 2018
9053728
Constexpr std::move / std::forward for C++11 (#11396)
smessmer Sep 13, 2018
e2aea62
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 13, 2018
4b5e0e4
Merge branch 'master' into ifu
iotamudelta Sep 13, 2018
f129da1
Add max to the ValueError for EmbeddingBag mode check (#11655)
zippeurfou Sep 13, 2018
29e29ca
Use MPI_Isend/MPI_Irecv to back send/recv (#11630)
pietern Sep 13, 2018
05e06f7
migrating deprecated calls without abc module for containers (#11515)
jeffreyksmithjr Sep 13, 2018
4672280
Pass Storage by value (#11546)
smessmer Sep 13, 2018
85ff723
Only involve tensor device in CUDA -> CPU copy, not current device. (…
gchanan Sep 13, 2018
8402fde
Revert D9778043: Pass Storage by value
ezyang Sep 13, 2018
c185104
Reduce includes in tensor_impl.h (#11643)
ezyang Sep 13, 2018
7607b49
s/GetDevicetype/device_type/ (#11656)
ezyang Sep 13, 2018
02980d7
Refactor Tensor/TensorImpl constructors. (#11657)
ezyang Sep 13, 2018
e1cd220
Reimplement swap() using default move constructor. (#11659)
ezyang Sep 13, 2018
7606793
Move Pixel Shuffle to ATen (#9721)
ssnl Sep 14, 2018
513fd3d
Improve doc of `torch.nn.functional.pad` (#11623)
zasdfgbnm Sep 14, 2018
98e04db
Implement requires_grad propagation in the JIT (#11586)
apaszke Sep 14, 2018
99c0b96
optimize norm on ATen CPU backend (#11565)
xhzhao Sep 14, 2018
2431eac
Ensure most Distribution methods are jittable (#11560)
fritzo Sep 14, 2018
e6fe8d9
Try to delete codeowners for ATen/core (#10693)
ezyang Sep 14, 2018
1637729
Fix ci by skipping some tests (#11668)
zrphercule Sep 14, 2018
c5f7da3
Support FP16 sparse lookup (#11674)
chocjy Sep 14, 2018
19065f9
Centralize TypeExtendedInterface casts. (#11576)
ezyang Sep 14, 2018
74197c7
Restore support for dim=None on WeightNorm. (#11661)
ezyang Sep 14, 2018
c391c20
Adding .expand method for TransformedDistribution (#11607)
neerajprad Sep 14, 2018
cda71e2
Disallow scalar parameters in Dirichlet and Categorical (#11589)
neerajprad Sep 14, 2018
9feac15
Merge branch 'master' into ifu
iotamudelta Sep 14, 2018
cdb9eb1
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 14, 2018
6c3792b
Implement UndefinedType::typeMeta.
gchanan Sep 14, 2018
2631da0
Move some Tensor method definitions from Type.h to TensorMethods.h. (…
gchanan Sep 14, 2018
9b7ceac
Merge branch 'master' into ifu
iotamudelta Sep 14, 2018
72822ee
Fix #11430 (CPU only builds raise opaque error message when calling .…
ezyang Sep 14, 2018
0d9b910
Fix gesv and gels docs (#11699)
vishwakftw Sep 14, 2018
eb039dc
Add CHECKs into GetTensorInfo and ExtractDeviceOption (#11597)
salexspb Sep 14, 2018
115b13f
clean up some old Half stuff
Sep 14, 2018
278e304
Implement elif in string frontend (#11667)
Sep 14, 2018
3258fc1
Delete torch/csrc/api/README.md (#11703)
goldsborough Sep 14, 2018
7535d98
Add message tag parameter to send/recv
pietern Sep 14, 2018
b90872c
Get rid of default arguments for TH/THC factory functions. (#11673)
gchanan Sep 14, 2018
4050770
Skip tests that depend on double datatype for MIOpen and in absence of
iotamudelta Sep 14, 2018
3776559
Merge branch 'ifu' of github.com:iotamudelta/pytorch into ifu
iotamudelta Sep 14, 2018
0c26488
Augment emit_nvtx to help connect backward-pass Function apply calls …
definitelynotmcarilli Sep 14, 2018
224e62b
respect USE_CUDA_STATIC_LINK in build_libtorch.py
anderspapitto Sep 14, 2018
70e68e7
Casting for binary ops (#11708)
Sep 14, 2018
96d3f96
Splits CPU and CUDA fusion compilers (#10981)
mruberry Sep 14, 2018
8258803
Merge branch 'master' into ifu
iotamudelta Sep 14, 2018
8e3f8c5
Document the Sequential module (#11648)
goldsborough Sep 14, 2018
d24bcfd
Suppress hiprand "duplicate-decl-specifier" warning (#11698)
bddppq Sep 14, 2018
8e76dcf
Prevent raising KeyboardInterrupt in worker (#11718)
ssnl Sep 14, 2018
2c8a1b9
Back out "Refactor Tensor/TensorImpl constructors."
ezyang Sep 14, 2018
f4d9fe3
Remove intrusive_ptr::reclaim() in Storage (#11352)
smessmer Sep 14, 2018
270fb22
Remove intrusive_ptr::reclaim() in Storage (2/2) (#11547)
smessmer Sep 14, 2018
690c999
Simplify union payload copying (#11353)
smessmer Sep 14, 2018
bb6f18c
Simplify IValue::toTensor() (#11355)
smessmer Sep 14, 2018
f09054f
Remove deprecate warning for Upsampling (#11568)
Sep 15, 2018
eb3c47b
max -> fmaxf in cross_entropy kernel (#11733)
rohithkrn Sep 16, 2018
b3e7260
Do not use FixedDivisor in ROCM order switch op (#11697)
bddppq Sep 16, 2018
ca6f08f
Set correct dtype for fp16 op inference function (#11693)
chocjy Sep 16, 2018
10c29c8
Fix CUDA 8 build on Windows (#11729)
peterjc123 Sep 16, 2018
6f6b035
Vectorize grid sample 2d CPU kernels (#10980)
ssnl Sep 17, 2018
f5bc2ae
Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#11…
Sep 17, 2018
d63bb72
Remove symbol export annotations in THC/generic/*.cu (#11367)
peterjc123 Sep 17, 2018
a8b1755
Check device argument makes sense for legacy tensor constructors. (#1…
gchanan Sep 17, 2018
5bfd8f5
Moving copy of Caffe2 protos back to build_pytorch_libs.sh (#11726)
pjh5 Sep 17, 2018
0d345cf
Remove Type method defaults in ATen. (#11675)
gchanan Sep 17, 2018
35518b3
Back out "Back out "Refactor Tensor/TensorImpl constructors."" E2: Co…
ezyang Sep 17, 2018
2baba7f
Add storage_offset to Caffe2 (#11701)
ezyang Sep 17, 2018
6660a12
Cache and use TypeMeta in TensorImpl (#11706)
ezyang Sep 17, 2018
f6a6d7f
Switch at::TensorImpl to store TypeMeta rather than ScalarType
ezyang Sep 17, 2018
07fd445
Revert D9831398: [pytorch][PR] Update OpenMP cmake setting for xcode …
ezyang Sep 17, 2018
a7e3cd0
Fix ctc gradient handling (#11753)
t-vi Sep 17, 2018
7949250
Fixes for Torch Script C++ API (#11682)
goldsborough Sep 17, 2018
cdefc27
Support lr adaption for SparseAdam and RowWiseSparseAdam (#11162)
Sep 17, 2018
e125e61
Fix flake8
gchanan Sep 17, 2018
39520ff
remove Type/Tensor/TensorMethods include order dependencies. (#11720)
gchanan Sep 17, 2018
47d65ed
Fix issue 10492 (#11634)
vishwakftw Sep 17, 2018
73738ec
bump version to 1.0 (#11717)
soumith Sep 17, 2018
38c2c14
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 17, 2018
db1bf5b
Merge branch 'ifu' of github.com:iotamudelta/pytorch into ifu
iotamudelta Sep 17, 2018
336323f
return aten::gt to the list of fusable operations, add expected graph…
Sep 17, 2018
2961062
64B align for avx512 (#11748)
jspark1105 Sep 17, 2018
7671f4a
Add `math` to scope when using inf in tests (#11302)
Sep 17, 2018
7df6650
Fix empty embedding bag on cuda (#11740)
ssnl Sep 17, 2018
3ce17bf
Generate ATen/core to source if env GEN_TO_SOURCE is set. (#11759)
gchanan Sep 17, 2018
ca5def1
Expose annotations (#11649)
bwasti Sep 17, 2018
3819d25
Clean up converter and accept less-valid networks
bwasti Sep 18, 2018
7d0657f
Migrate test in cpp/api/ to use gtest (#11556)
zrphercule Sep 18, 2018
24a8c13
Add barrier to fix distributed test flakiness (#11775)
pietern Sep 18, 2018
d4dde0b
Detect number of amd gpus in ROCM CI (#11771)
bddppq Sep 18, 2018
e8ecbcd
Move IValue to ATen/core (#11610)
bwasti Sep 18, 2018
7f0dd24
Move AT_HOST_DEVICE macro to Macros.h (#10945)
colesbury Sep 18, 2018
63e384a
SNNTest with Data Preproc Service (#11707)
tianshub Sep 18, 2018
a7cbcb1
Enable build_python on windows (#11385)
mingzhe09088 Sep 18, 2018
3cbec54
Reorder statements for readability (#11764)
pietern Sep 18, 2018
a02685e
Fix test_torch's test_potri (#11770)
t-vi Sep 18, 2018
bd43d64
Add strides to Tensor (#11763)
cpuhrsch Sep 18, 2018
63c811b
Include some JIT things in C++ docs (#11712)
goldsborough Sep 18, 2018
407a9fe
make copy constructed tensor a leaf variable when using torch.tensor(…
weiyangfb Sep 18, 2018
e734c94
Quick update to embedding_bag doc (#11784)
zippeurfou Sep 18, 2018
91b6458
Container __getitem__ slicing for subclasses (#11694)
nehz Sep 18, 2018
e2bc95e
add `ModuleList.insert` (#11664)
zuoxingdong Sep 18, 2018
4ee0a78
varargs for meshgrid (#11600)
xwfye Sep 18, 2018
e00fb69
Use CATCH prefix to avoid name conflicts with Caffe2.
gchanan Sep 18, 2018
c8fbeb3
Add empty tensor tests to test_sparse (#11228)
Sep 18, 2018
6073f30
Document torch::nn::init (#11778)
goldsborough Sep 18, 2018
98aebed
Refactor tests part 1 (#11350)
ajyu Sep 18, 2018
2732c8b
improve aten/convolution error message (#11768)
soumith Sep 18, 2018
540ef9b
Add distributed get_backend (#11715)
ssnl Sep 18, 2018
47956dd
Revert D9755189: [pytorch][PR] [API CHANGE] Add empty tensor tests to…
Sep 18, 2018
9eb7288
Add successor/predecessor functions
bwasti Sep 18, 2018
1d399a8
Handle pollution of MAX, MIN and CHECK macros. (#11805)
ezyang Sep 18, 2018
7d25fa3
Emit Undefined type for value when it is Dynamic type (#11810)
Sep 18, 2018
d4e1fa4
allow no-alpha add/sub in onnx symbolic (#10972)
wanchaol Sep 18, 2018
8ad846f
Don't build Detectron ops with NO_CAFFE2_OPS=1 (#11799)
orionr Sep 18, 2018
e585f2f
Polish CPP docs, Minor Python Docs Fixes (#11722)
svenevs Sep 18, 2018
53cf628
Simplify Blob move constructor/assignment (#11402)
smessmer Sep 18, 2018
91c9357
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 18, 2018
846a573
New failure on CI.
iotamudelta Sep 19, 2018
489f783
Test fails now.
iotamudelta Sep 19, 2018
70af48f
Skip for now.
iotamudelta Sep 19, 2018
233 changes: 139 additions & 94 deletions .circleci/config.yml

Large diffs are not rendered by default.

9 changes: 6 additions & 3 deletions .gitignore
@@ -25,16 +25,17 @@ aten/src/ATen/cuda/CUDAConfig.h
build/
dist/
docs/src/**/*
docs/cpp/xml/
docs/cpp/html/
docs/cpp/api/
docs/cpp/build
docs/cpp/source/api
test/.coverage
test/cpp/api/mnist
test/custom_operator/model.pt
test/data/gpu_tensors.pt
test/data/legacy_modules.t7
test/data/legacy_serialized.pt
test/data/linear.pt
test/htmlcov
test/cpp_extensions/install/
third_party/build/
tools/shared/_utils_internal.py
torch.egg-info/
@@ -43,6 +44,7 @@ torch/csrc/cudnn/cuDNN.cpp
torch/csrc/generated
torch/csrc/generic/TensorMethods.cpp
torch/csrc/jit/generated/*
torch/csrc/jit/fusers/Config.h
torch/csrc/nn/THCUNN.cpp
torch/csrc/nn/THCUNN.cwrap
torch/csrc/nn/THNN_generic.cpp
@@ -65,6 +67,7 @@ torch/lib/protoc
torch/lib/tmp_install
torch/lib/torch_shm_manager
torch/lib/python*
torch/share/
torch/version.py

# IPython notebook checkpoints
2 changes: 1 addition & 1 deletion .jenkins/caffe2/build.sh
@@ -226,7 +226,7 @@ else
export MAX_JOBS=`expr $(nproc) - 1`
fi

USE_OPENCV=1 BUILD_BINARY=1 python setup.py install --user
USE_LEVELDB=1 USE_LMDB=1 USE_OPENCV=1 BUILD_BINARY=1 python setup.py install --user

# This is to save test binaries for testing
cp -r torch/lib/tmp_install $INSTALL_PREFIX
43 changes: 29 additions & 14 deletions .jenkins/caffe2/test.sh
@@ -49,6 +49,20 @@ fi

mkdir -p $TEST_DIR/{cpp,python}

if [[ $BUILD_ENVIRONMENT == *-rocm* ]]; then
export LANG=C.UTF-8
export LC_ALL=C.UTF-8

# Pin individual runs to specific gpu so that we can schedule
# multiple jobs on machines that have multi-gpu.
NUM_AMD_GPUS=$(/opt/rocm/bin/rocminfo | grep 'Device Type.*GPU' | wc -l)
if (( $NUM_AMD_GPUS == 0 )); then
echo >&2 "No AMD GPU detected!"
exit 1
fi
export HIP_VISIBLE_DEVICES=$(($BUILD_NUMBER % $NUM_AMD_GPUS))
fi

cd "${WORKSPACE}"

# C++ tests
@@ -62,19 +76,27 @@ for test in $(find "${INSTALL_PREFIX}/test" -executable -type f); do
*/mkl_utils_test|*/aten/integer_divider_test)
continue
;;
*/aten/*)
# ATen uses test framework Catch2
# NB: We do NOT use the xml test reporter, because
# Catch doesn't support multiple reporters
*/scalar_tensor_test|*/basic|*/native_test)
if [[ "$BUILD_ENVIRONMENT" == *rocm* ]]; then
continue
else
"$test"
fi
;;
*)
# Currently, we use a mixture of gtest (caffe2) and Catch2 (ATen). While
# planning to migrate to gtest as the common PyTorch c++ test suite, we
# currently do NOT use the xml test reporter, because Catch doesn't
# support multiple reporters
# c.f. https://github.com/catchorg/Catch2/blob/master/docs/release-notes.md#223
# which means that enabling XML output means you lose useful stdout
# output for Jenkins. It's more important to have useful console
# output than it is to have XML output for Jenkins.
# Note: in the future, if we want to use xml test reporter once we switch
# to all gtest, one can simply do:
# "$test" --gtest_output=xml:"$gtest_reports_dir/$(basename $test).xml"
"$test"
;;
*)
"$test" --gtest_output=xml:"$gtest_reports_dir/$(basename $test).xml"
;;
esac
done

@@ -98,9 +120,6 @@ fi

rocm_ignore_test=()
if [[ $BUILD_ENVIRONMENT == *-rocm* ]]; then
export LANG=C.UTF-8
export LC_ALL=C.UTF-8

# Currently these tests are failing on ROCM platform:

# Unknown reasons, need to debug
@@ -115,10 +134,6 @@ if [[ $BUILD_ENVIRONMENT == *-rocm* ]]; then
# Our cuda top_k op has some asm code, the hipified version doesn't
# compile yet, so we don't have top_k operator for now
rocm_ignore_test+=("--ignore $CAFFE2_PYPATH/python/operator_test/top_k_test.py")

# Our AMD CI boxes have 4 gpus on each
# Remove this once we have added multi-gpu support
export HIP_VISIBLE_DEVICES=$(($BUILD_NUMBER % 4))
fi

# Python tests
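The round-robin GPU pinning added in the ROCm hunk of this file can be sketched in isolation. Note the values below are stand-ins: in CI, `BUILD_NUMBER` comes from Jenkins and the GPU count comes from `rocminfo | grep 'Device Type.*GPU' | wc -l`, which only works on a ROCm machine.

```shell
#!/bin/bash
# Sketch of the round-robin GPU pinning used in the ROCm CI script above.
# BUILD_NUMBER and NUM_AMD_GPUS are illustrative stand-ins here.
BUILD_NUMBER=7
NUM_AMD_GPUS=4

if (( NUM_AMD_GPUS == 0 )); then
  echo >&2 "No AMD GPU detected!"
  exit 1
fi

# Pin this run to exactly one device: 7 % 4 == 3
export HIP_VISIBLE_DEVICES=$(( BUILD_NUMBER % NUM_AMD_GPUS ))
echo "Pinned to GPU ${HIP_VISIBLE_DEVICES}"
```

Because consecutive build numbers rotate across devices 0..3, concurrent jobs scheduled onto the same multi-GPU box mostly land on different devices.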
20 changes: 12 additions & 8 deletions .jenkins/pytorch/build.sh
@@ -11,7 +11,7 @@ if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-* ]]; then
sudo apt-get install -y --allow-downgrades --allow-change-held-packages libnccl-dev=2.2.13-1+cuda9.0 libnccl2=2.2.13-1+cuda9.0
fi

if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda8-* ]] || [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-cudnn7-py2* ]]; then
if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda8-* ]] || [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-cudnn7-py2* ]] || [[ "$BUILD_ENVIRONMENT" == *-trusty-py2.7.9* ]]; then
# TODO: move this to Docker
sudo apt-get update
sudo apt-get install -y --allow-downgrades --allow-change-held-packages openmpi-bin libopenmpi-dev
@@ -102,12 +102,6 @@ fi
# Add the test binaries so that they won't be git clean'ed away
git add -f build/bin

# Testing ATen install
if [[ "$BUILD_ENVIRONMENT" != *cuda* ]]; then
echo "Testing ATen install"
time tools/test_aten_install.sh
fi

# Test C FFI plugins
# cffi install doesn't work for Python 3.7
if [[ "$BUILD_ENVIRONMENT" != *pynightly* ]]; then
@@ -124,7 +118,7 @@ if [[ "$BUILD_ENVIRONMENT" == *xenial-cuda8-cudnn6-py3* ]]; then
pushd docs
# TODO: Don't run this here
pip install -r requirements.txt || true
make html
LC_ALL=C make html
popd
fi

Expand All @@ -138,4 +132,14 @@ if [[ "$BUILD_TEST_LIBTORCH" == "1" ]]; then
pushd ../cpp-build/caffe2
WERROR=1 VERBOSE=1 DEBUG=1 python $BUILD_LIBTORCH_PY
popd

# Build custom operator tests.
CUSTOM_OP_BUILD="$PWD/../custom-op-build"
CUSTOM_OP_TEST="$PWD/test/custom_operator"
SITE_PACKAGES="$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())')"
mkdir "$CUSTOM_OP_BUILD"
pushd "$CUSTOM_OP_BUILD"
CMAKE_PREFIX_PATH="$SITE_PACKAGES/torch" cmake "$CUSTOM_OP_TEST"
make VERBOSE=1
popd
fi
3 changes: 2 additions & 1 deletion .jenkins/pytorch/common.sh
@@ -112,7 +112,8 @@ else
exit 1
fi

if [[ "$BUILD_ENVIRONMENT" == *pytorch-linux-trusty-py3.6-gcc7* ]]; then
if [[ "$BUILD_ENVIRONMENT" == *pytorch-linux-xenial-cuda9-cudnn7-py3 ]] || \
[[ "$BUILD_ENVIRONMENT" == *pytorch-linux-trusty-py3.6-gcc7* ]]; then
BUILD_TEST_LIBTORCH=1
else
BUILD_TEST_LIBTORCH=0
22 changes: 22 additions & 0 deletions .jenkins/pytorch/macos-test.sh
@@ -78,13 +78,35 @@ test_cpp_api() {
"$CPP_BUILD"/caffe2/bin/test_api
}

test_custom_script_ops() {
echo "Testing custom script operators"
pushd test/custom_operator
# Build the custom operator library.
rm -rf build && mkdir build
pushd build
SITE_PACKAGES="$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())')"
CMAKE_PREFIX_PATH="$SITE_PACKAGES/torch" cmake ..
make VERBOSE=1
popd

# Run tests Python-side and export a script module.
python test_custom_ops.py -v
python model.py --export-script-module=model.pt
# Run tests C++-side and load the exported script module.
build/test_custom_ops ./model.pt
popd
}


if [ -z "${JOB_BASE_NAME}" ] || [[ "${JOB_BASE_NAME}" == *-test ]]; then
test_python_all
test_cpp_api
test_custom_script_ops
else
if [[ "${JOB_BASE_NAME}" == *-test1 ]]; then
test_python_all
elif [[ "${JOB_BASE_NAME}" == *-test2 ]]; then
test_cpp_api
test_custom_script_ops
fi
fi
21 changes: 20 additions & 1 deletion .jenkins/pytorch/test.sh
@@ -90,14 +90,16 @@ test_python_all_except_nn() {

test_aten() {
# Test ATen
# The following test(s) of ATen have already been skipped by caffe2 in rocm environment:
# scalar_tensor_test, basic, native_test
if ([[ "$BUILD_ENVIRONMENT" != *asan* ]] && [[ "$BUILD_ENVIRONMENT" != *rocm* ]]); then
echo "Running ATen tests with pytorch lib"
TORCH_LIB_PATH=$(python -c "import site; print(site.getsitepackages()[0])")/torch/lib
# NB: the ATen test binaries don't have RPATH set, so it's necessary to
# put the dynamic libraries somewhere were the dynamic linker can find them.
# This is a bit of a hack.
if [[ "$BUILD_ENVIRONMENT" == *ppc64le* ]]; then
SUDO=sudo
SUDO=sudo
fi

${SUDO} ln -s "$TORCH_LIB_PATH"/libcaffe2* build/bin
@@ -140,12 +142,28 @@ test_libtorch() {
fi
}

test_custom_script_ops() {
if [[ "$BUILD_TEST_LIBTORCH" == "1" ]]; then
echo "Testing custom script operators"
CUSTOM_OP_BUILD="$PWD/../custom-op-build"
pushd test/custom_operator
cp -r "$CUSTOM_OP_BUILD" build
# Run tests Python-side and export a script module.
python test_custom_ops.py -v
python model.py --export-script-module=model.pt
# Run tests C++-side and load the exported script module.
build/test_custom_ops ./model.pt
popd
fi
}

if [ -z "${JOB_BASE_NAME}" ] || [[ "${JOB_BASE_NAME}" == *-test ]]; then
test_python_nn
test_python_all_except_nn
test_aten
test_torchvision
test_libtorch
test_custom_script_ops
else
if [[ "${JOB_BASE_NAME}" == *-test1 ]]; then
test_python_nn
@@ -154,5 +172,6 @@ else
test_aten
test_torchvision
test_libtorch
test_custom_script_ops
fi
fi
2 changes: 1 addition & 1 deletion .travis.yml
@@ -28,4 +28,4 @@ matrix:
script: mypy @mypy-files.txt
- env: CPP_DOC_CHECK
install: sudo apt-get install -y doxygen
script: cd docs/cpp && ./check-doxygen.sh
script: cd docs/cpp/source && ./check-doxygen.sh
4 changes: 3 additions & 1 deletion CMakeLists.txt
@@ -60,6 +60,7 @@ option(BUILD_BINARY "Build C++ binaries" OFF)
option(BUILD_DOCS "Build Caffe2 documentation" OFF)
option(BUILD_CUSTOM_PROTOBUF "Build and use Caffe2's own protobuf under third_party" ON)
option(BUILD_PYTHON "Build Python binaries" ON)
option(BUILD_CAFFE2_OPS "Build Caffe2 operators" ON)
option(BUILD_SHARED_LIBS "Build libcaffe2.so" ON)
cmake_dependent_option(
CAFFE2_LINK_LOCAL_PROTOBUF "If set, build protobuf inside libcaffe2.so." ON
@@ -115,14 +116,15 @@ option(USE_IDEEP "Use IDEEP interface in MKL BLAS" ON)
option(USE_MKLML "Use MKLML interface in MKL BLAS" ON)
option(USE_DISTRIBUTED "Use distributed" ON)
cmake_dependent_option(
USE_MPI "Use MPI for Caffe2. Only available if USE_DISTRIBUTED is on." OFF
USE_MPI "Use MPI for Caffe2. Only available if USE_DISTRIBUTED is on." ON
"USE_DISTRIBUTED" OFF)
cmake_dependent_option(
USE_GLOO "Use Gloo. Only available if USE_DISTRIBUTED is on." ON
"USE_DISTRIBUTED" OFF)
cmake_dependent_option(
USE_GLOO_IBVERBS "Use Gloo IB verbs for distributed. Only available if USE_GLOO is on." OFF
"USE_GLOO" OFF)
option(TORCH_USE_CEREAL "Build the C++ API with Cereal for serialization support" OFF)

# Used when building Caffe2 through setup.py
option(BUILDING_WITH_TORCH_LIBS "Tell cmake if Caffe2 is being built alongside torch libs" OFF)
1 change: 1 addition & 0 deletions CODEOWNERS
@@ -2,6 +2,7 @@
# Each line is a file pattern followed by one or more owners.

/aten/ @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/aten/src/ATen/core/
/torch/ @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
/docs/source @apaszke @soumith @colesbury @gchanan @zdevito @ezyang @ssnl @zou3519
/docs/cpp @goldsborough @ebetica @apaszke @soumith @colesbury @gchanan @zdevito @ezyang
18 changes: 15 additions & 3 deletions README.md
@@ -15,6 +15,7 @@ We are in an early-release beta. Expect some adventures and rough edges.
- [Binaries](#binaries)
- [From Source](#from-source)
- [Docker Image](#docker-image)
- [Building the Documentation](#building-the-documentation)
- [Previous Versions](#previous-versions)
- [Getting Started](#getting-started)
- [Communication](#communication)
@@ -200,9 +201,8 @@ set DISTUTILS_USE_SDK=1
REM The following two lines are needed for Python 2.7, but the support for it is very experimental.
set MSSdk=1
set FORCE_PY27_BUILD=1
REM As for CUDA 8, VS2015 Update 3 is also required to build PyTorch. Use the following two lines.
set "PREBUILD_COMMAND=%VS140COMNTOOLS%\..\..\VC\vcvarsall.bat"
set PREBUILD_COMMAND_ARGS=x64
REM As for CUDA 8, VS2015 Update 3 is also required to build PyTorch. Use the following line.
set "CUDA_HOST_COMPILER=%VS140COMNTOOLS%\..\..\VC\bin\amd64\cl.exe"

call "%VS150COMNTOOLS%\vcvarsall.bat" x64 -vcvars_ver=14.11
python setup.py install
@@ -224,6 +224,18 @@ Please note that PyTorch uses shared memory to share data between processes, so
for multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you
should increase shared memory size either with `--ipc=host` or `--shm-size` command line options to `nvidia-docker run`.
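For example, either flag can be passed directly to `nvidia-docker run` — the image name below is an illustrative placeholder, not one mandated by this PR:

```shell
# Share the host's IPC namespace (removes the container's shm cap entirely):
nvidia-docker run --ipc=host -it pytorch/pytorch bash

# ...or raise the shared-memory segment size explicitly instead:
nvidia-docker run --shm-size=8g -it pytorch/pytorch bash
```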

### Building the Documentation

To build documentation in various formats, you will need Sphinx and the
readthedocs theme.

```
cd docs/
pip install -r requirements.txt
```
You can then build the documentation by running ``make <format>`` from the
``docs/`` folder. Run ``make`` to get a list of all available output formats.
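For instance, to build the HTML docs (assuming the requirements above installed cleanly):

```shell
cd docs/
make html   # writes HTML output under the Sphinx build directory
```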

### Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found