Merge from upstream #167

Merged: 74 commits, Sep 3, 2018

Commits
4022767
Add strides to caffe2::Tensor (#10826)
jerryzh168 Aug 30, 2018
029082e
Add entry for torch/lib/pythonX.Y in .gitignore (#11083)
apaszke Aug 30, 2018
611a608
Add ATen pdist CPU kernel (#10782)
erikbrinkman Aug 30, 2018
23b0c90
caffe2: fix gcc8 warnings
pixelb Aug 30, 2018
9d4360c
Creates stream pool (#9938)
mruberry Aug 30, 2018
e85f3fc
Fix relying on UB in test_data_parallel_nested_output (#11092)
ssnl Aug 30, 2018
ebe9d20
Add test cases to intrusive_ptr (#11026)
smessmer Aug 30, 2018
93bd291
Change torch.jit.trace to no longer be a decorator (#11069)
zdevito Aug 30, 2018
f0142fa
Expose arbitrary cpp autograd functions to Python (#11082)
apaszke Aug 30, 2018
a136d29
Use intrusive_ptr in Storage (#10907)
jerryzh168 Aug 30, 2018
56c737a
Inject GetEmptyStringAlreadyInited once for static proto (#11045)
orionr Aug 30, 2018
302e9cb
Update onnx submodule to onnx/onnx@bae6333 (#10961)
bddppq Aug 30, 2018
7ddc6f8
NULL -> nullptr (#11047)
goldsborough Aug 30, 2018
684bd1b
size_ -> numel_ (#11112)
jerryzh168 Aug 30, 2018
15314c7
GCC-7 doesn't like the original syntax. (#10665)
xkszltl Aug 30, 2018
a6cb414
update documentation for observers
Aug 31, 2018
26409a4
Caffe2 flags needs to be used after the GlobalInit function is called
sf-wind Aug 31, 2018
c8c21fa
Allow same flags when glog is used or not (#11034)
orionr Aug 31, 2018
f3c3127
Don't flatten output lists in the JIT IR (#10949)
apaszke Aug 31, 2018
66c4d7e
Rename getTypeOpt to getNonVariableTypeOpt (#11077)
ezyang Aug 31, 2018
c283acc
Rename getTypeRaw to getNonVariableTypeRaw (#11078)
ezyang Aug 31, 2018
34a0604
Eliminate use of getType from DLConvertor (#11080)
ezyang Aug 31, 2018
c836a04
Delete a bunch of uses of getType in favor of TensorOptions.
ezyang Aug 31, 2018
750ede7
Rename getType to getVariableTypeFromBaseType / getVariableType (#11095)
ezyang Aug 31, 2018
a320e5c
Move static_context outside of class (#11097)
jerryzh168 Aug 31, 2018
00df09b
Change specialization rules in GraphExecutors (#10977)
apaszke Aug 31, 2018
9fae8fc
framework for committed serialized tests (#10594)
ajyu Aug 31, 2018
f1bfe67
Back out "[caffe2] Update blackbox predictor with new constructor" (#…
Aug 31, 2018
0555768
Support lr adaption for SparseAdam and RowWiseSparseAdam (#10993)
Aug 31, 2018
82aeebb
Fix a bug in addmm fusion in the JIT (#11100)
apaszke Aug 31, 2018
3073051
Revert D9554375: Support lr adaption for SparseAdam and RowWiseSparse…
ezyang Aug 31, 2018
0961c92
Unbreak the build
Aug 31, 2018
9fac0a5
Rename at::getType to at::getNonVariableType (#11096)
ezyang Aug 31, 2018
1db5a7d
Move variable getType lookup support to Context
ezyang Aug 31, 2018
f30fd7f
Get rid of the runtime type in TensorOptions (#11021)
ezyang Aug 31, 2018
e2bdd35
fixes to device.cc (#11122)
Aug 31, 2018
a585158
Some usage examples for TensorOptions
ezyang Aug 31, 2018
d95e68c
Delete Tensor constructor from TensorOptions. (#11101)
ezyang Aug 31, 2018
3791bd1
PT1 Release Milestone No.2 MPI Group Support with all tests passed (#…
teng-li Aug 31, 2018
a2a584f
Proper recompilation tracking for more files in tools/autograd (#11143)
ezyang Aug 31, 2018
6508db7
Remove BUILD_CAFFE2 and build everything (#8338)
orionr Aug 31, 2018
f4b2961
Simplify assignment operators (#11027)
smessmer Aug 31, 2018
c31ebcc
Clean up TupleType and SchemaParser (#11007)
goldsborough Aug 31, 2018
780d279
Warn about non-traceable behavior when tracing (#11088)
apaszke Aug 31, 2018
48c2f3c
Move TensorOptions Tensor methods to TensorMethods.h (#11144)
ezyang Aug 31, 2018
fd11041
Don't convert TensorOptions to type before printing.
ezyang Aug 31, 2018
2c5ae8c
Get rid of type() method on TensorOptions; use at::getType instead (#…
ezyang Aug 31, 2018
5286925
Add getMaybeVariableType(const TensorImpl*) (#11031)
ezyang Aug 31, 2018
adeebed
Delete TensorImpl::toString() (#11035)
ezyang Aug 31, 2018
c87d082
Use ->data<real>() instead of THTensor_(data) and c10::raw::intrusive…
cpuhrsch Aug 31, 2018
3081c8e
Lower trivial differentiable subgraphs (#11110)
apaszke Aug 31, 2018
5987b44
Remove aten doc/ folder (#11158)
goldsborough Aug 31, 2018
c48bf3a
Automatic update of fbcode/onnx to 1b09eb14c2c781fae078fa6b1c0390ba6f…
houseroad Aug 31, 2018
4abddad
use py::str to remove deprecation warnings (#11107)
goldsborough Aug 31, 2018
861e1c4
Move StorageImpl and Storage to core (#11154)
cpuhrsch Aug 31, 2018
03c06ec
Traceable detach (#11038)
Aug 31, 2018
1b7172a
fix the slice onnx exporting
houseroad Sep 1, 2018
b834d91
Revert D9566744: [New Checkpoint] Kill the dummy TaskOutput when task…
xush6528 Sep 1, 2018
b3d559c
Optimize WeightedSumOp for two inputs (#11049)
xiaomengy Sep 1, 2018
43e73f8
Dont optimize slicing dispatch when we are tracing (#11156)
Sep 2, 2018
f60a2b6
allow spaces in filename for jit-compiled cpp_extensions (#11146)
soumith Sep 2, 2018
1506547
Disable -Werror on macOS test build (#11090)
apaszke Sep 2, 2018
011f615
Fix compile warnings
ssnl Sep 2, 2018
7af6f95
Move TensorAccessor to ATen/core
ezyang Sep 2, 2018
7eba984
Pool constants during script compilation. (#10231)
Sep 2, 2018
1350f76
Fix max and min with inf on CUDA (#11091)
ssnl Sep 2, 2018
4d28b65
fix serialization of nn.Parameter with dill (#10296)
elanmart Sep 2, 2018
abe8b33
LowRankMultivariateNormal cleanup
samuela Sep 2, 2018
33c7cc1
improve docker packages, fix bugs, enable tests, enable FFT (#10893)
iotamudelta Sep 2, 2018
593d740
Document torch.allclose (#11185)
vishwakftw Sep 2, 2018
cf10efb
Fixes unclear exception message for F.conv2d (#11053)
ptrblck Sep 2, 2018
e1a17d5
Should not use CAFFE2_API when definition is already in header. (#11114)
tolia-msft Sep 2, 2018
f543557
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 2, 2018
c9cf2c9
Fails on ROCm currently.
iotamudelta Sep 2, 2018
1 change: 1 addition & 0 deletions .clang-tidy
@@ -37,6 +37,7 @@ Checks: '
,-performance-unnecessary-value-param
,-readability-braces-around-statements
,-readability-else-after-return
,-readability-implicit-bool-conversion
,-readability-named-parameter
'
WarningsAsErrors: ''
1 change: 1 addition & 0 deletions .gitignore
@@ -64,6 +64,7 @@ torch/lib/pkgconfig
torch/lib/protoc
torch/lib/tmp_install
torch/lib/torch_shm_manager
torch/lib/python*
torch/version.py

# IPython notebook checkpoints
14 changes: 11 additions & 3 deletions .jenkins/caffe2/build.sh
@@ -218,13 +218,21 @@ if [[ -z "$INTEGRATED" ]]; then

else

# sccache will be stuck if all cores are used for compiling
# see https://github.com/pytorch/pytorch/pull/7361
if [[ -n "${SCCACHE}" ]]; then
export MAX_JOBS=`expr $(nproc) - 1`
fi

FULL_CAFFE2=1 python setup.py install --user
# TODO: I'm not sure why this is necessary

# This is to save test binaries for testing
cp -r torch/lib/tmp_install $INSTALL_PREFIX

fi
ls $INSTALL_PREFIX

report_compile_cache_stats
report_compile_cache_stats
fi


###############################################################################
4 changes: 3 additions & 1 deletion .jenkins/caffe2/test.sh
@@ -49,7 +49,7 @@ fi

mkdir -p $TEST_DIR/{cpp,python}

cd ${INSTALL_PREFIX}
cd "${WORKSPACE}"

# C++ tests
echo "Running C++ tests.."
@@ -137,6 +137,8 @@ echo "Running Python tests.."
"$CAFFE2_PYPATH/python" \
"${EXTRA_TESTS[@]}"

cd ${INSTALL_PREFIX}

if [[ -n "$INTEGRATED" ]]; then
pip install --user torchvision
"$ROOT_DIR/scripts/onnx/test.sh"
3 changes: 1 addition & 2 deletions .jenkins/pytorch/common.sh
@@ -112,8 +112,7 @@ else
exit 1
fi

if [[ "$BUILD_ENVIRONMENT" == *pytorch-linux-xenial-cuda9-cudnn7-py3 ]] || \
[[ "$BUILD_ENVIRONMENT" == *pytorch-linux-trusty-py3.6-gcc7* ]]; then
if [[ "$BUILD_ENVIRONMENT" == *pytorch-linux-trusty-py3.6-gcc7* ]]; then
BUILD_TEST_LIBTORCH=1
else
BUILD_TEST_LIBTORCH=0
2 changes: 1 addition & 1 deletion .jenkins/pytorch/macos-test.sh
@@ -60,7 +60,7 @@ test_cpp_api() {

BUILD_LIBTORCH_PY=$PWD/tools/build_libtorch.py
pushd $CPP_BUILD/caffe2
WERROR=1 VERBOSE=1 DEBUG=1 python $BUILD_LIBTORCH_PY
VERBOSE=1 DEBUG=1 python $BUILD_LIBTORCH_PY
popd

python tools/download_mnist.py --quiet -d test/cpp/api/mnist
85 changes: 42 additions & 43 deletions CMakeLists.txt
@@ -54,7 +54,6 @@ endif()
# cmake/Summary.cmake so that the summary prints out the option values.
include(CMakeDependentOption)
option(BUILD_TORCH "Build Torch" OFF)
option(BUILD_CAFFE2 "Build Caffe2" ON)
option(ATEN_NO_TEST "Do not build ATen test binaries" OFF)
option(BUILD_ATEN_MOBILE "Build ATen for Android and iOS" OFF)
option(BUILD_BINARY "Build C++ binaries" ON)
@@ -68,9 +67,7 @@ cmake_dependent_option(
cmake_dependent_option(
CAFFE2_USE_MSVC_STATIC_RUNTIME "Using MSVC static runtime libraries" ON
"NOT BUILD_SHARED_LIBS" OFF)
cmake_dependent_option(
BUILD_TEST "Build Caffe2 C++ test binaries (need gtest and gbenchmark)" OFF
"BUILD_CAFFE2" OFF)
option(BUILD_TEST "Build C++ test binaries (need gtest and gbenchmark)" OFF)
cmake_dependent_option(
INSTALL_TEST "Install test binaries if BUILD_TEST is on" OFF
"BUILD_TEST" OFF)
@@ -83,32 +80,16 @@ cmake_dependent_option(
USE_CUDNN "Use cuDNN" ON
"USE_CUDA" OFF)
option(USE_FFMPEG "Use ffmpeg" OFF)
cmake_dependent_option(
USE_GFLAGS "Use GFLAGS" ON
"BUILD_CAFFE2" OFF)
cmake_dependent_option(
USE_GLOG "Use GLOG" ON
"BUILD_CAFFE2" OFF)
cmake_dependent_option(
USE_GLOO "Use Gloo" ON
"BUILD_CAFFE2" OFF)
option(USE_GFLAGS "Use GFLAGS" ON)
option(USE_GLOG "Use GLOG" ON)
option(USE_GLOO "Use Gloo" ON)
option(USE_GLOO_IBVERBS "Use Gloo IB verbs for distributed support" OFF)
cmake_dependent_option(
USE_LEVELDB "Use LEVELDB" ON
"BUILD_CAFFE2" OFF)
option(USE_LEVELDB "Use LEVELDB" ON)
option(USE_LITE_PROTO "Use lite protobuf instead of full." OFF)
cmake_dependent_option(
USE_LMDB "Use LMDB" ON
"BUILD_CAFFE2" OFF)
cmake_dependent_option(
USE_METAL "Use Metal for iOS build" ON
"BUILD_CAFFE2" OFF)
cmake_dependent_option(
USE_MOBILE_OPENGL "Use OpenGL for mobile code" ON
"BUILD_CAFFE2" OFF)
cmake_dependent_option(
USE_MPI "Use MPI" ON
"BUILD_CAFFE2" OFF)
option(USE_LMDB "Use LMDB" ON)
option(USE_METAL "Use Metal for iOS build" ON)
option(USE_MOBILE_OPENGL "Use OpenGL for mobile code" ON)
option(USE_MPI "Use MPI" ON)
option(USE_NATIVE_ARCH "Use -march=native" OFF)
option(USE_NCCL "Use NCCL" ON)
option(USE_SYSTEM_NCCL "Use system-wide NCCL" OFF)
@@ -121,9 +102,7 @@ cmake_dependent_option(
"USE_CUDA" OFF)
option(USE_OBSERVERS "Use observers module." OFF)
option(USE_OPENCL "Use OpenCL" OFF)
cmake_dependent_option(
USE_OPENCV "Use OpenCV" ON
"BUILD_CAFFE2" OFF)
option(USE_OPENCV "Use OpenCV" ON)
option(USE_OPENMP "Use OpenMP for parallel code" OFF)
option(USE_PROF "Use profiling" OFF)
option(USE_REDIS "Use Redis" OFF)
@@ -133,17 +112,15 @@ option(USE_TENSORRT "Using Nvidia TensorRT library" OFF)
option(USE_ZMQ "Use ZMQ" OFF)
option(USE_ZSTD "Use ZSTD" OFF)
option(USE_MKLDNN "Use MKLDNN" OFF)
cmake_dependent_option(
USE_IDEEP "Use IDEEP interface in MKL BLAS" ON
"BUILD_CAFFE2" OFF)
cmake_dependent_option(
USE_MKLML "Use MKLML interface in MKL BLAS" ON
"BUILD_CAFFE2" OFF)
option(USE_IDEEP "Use IDEEP interface in MKL BLAS" ON)
option(USE_MKLML "Use MKLML interface in MKL BLAS" ON)
option(USE_DISTRIBUTED "Use THD (distributed)" OFF)

# Used when building Caffe2 through setup.py
option(BUILDING_WITH_TORCH_LIBS "Tell cmake if Caffe2 is being built alongside torch libs" OFF)

SET(ONNX_NAMESPACE "onnx_c2" CACHE STRING "onnx namespace")

if (ANDROID OR IOS)
set(BUILD_ATEN_MOBILE ON)
endif()
@@ -216,6 +193,9 @@ if(NOT MSVC)
if (CMAKE_COMPILER_IS_GNUCXX AND NOT (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0.0))
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-stringop-overflow")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=pedantic")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=redundant-decls")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=old-style-cast")
# These flags are not available in GCC-4.8.5. Set only when using clang.
# Compared against https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Option-Summary.html
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
@@ -238,6 +218,10 @@ if(NOT MSVC)
if ($ENV{WERROR})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
endif($ENV{WERROR})
if (NOT APPLE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unused-but-set-variable")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-maybe-uninitialized")
endif()
else()
foreach(flag_var
CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
@@ -264,6 +248,17 @@ if (USE_ASAN)
set (CMAKE_LINKER_FLAGS_DEBUG "${CMAKE_STATIC_LINKER_FLAGS_DEBUG} -fsanitize=address")
endif()

if (APPLE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unused-private-field")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-braces")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-c++14-extensions")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-constexpr-not-const")
endif()

if(CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 7.0.0)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-stringop-overflow")
endif()

if(ANDROID)
if(CMAKE_COMPILER_IS_GNUCXX)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -s")
@@ -400,19 +395,23 @@ else()
endif()

# ---[ Modules
if (BUILD_CAFFE2)
add_subdirectory(modules)
# TODO(orionr): Enable all of this for Windows DLL when we
# can figure out how to get it to build
if (NOT (MSVC AND BUILD_SHARED_LIBS))
add_subdirectory(modules)
endif()

# ---[ Binaries
# Binaries will be built after the Caffe2 main libraries and the modules
# are built. For the binaries, they will be linked to the Caffe2 main
# libraries, as well as all the modules that are built with Caffe2 (the ones
# built in the previous Modules section above).
if (BUILD_CAFFE2)
if (BUILD_BINARY)
add_subdirectory(binaries)
endif()
# TODO(orionr): Enable all of this for Windows DLL when we
# can figure out how to get it to build
if (NOT (MSVC AND BUILD_SHARED_LIBS))
if (BUILD_BINARY)
add_subdirectory(binaries)
endif()
endif()

include(cmake/Summary.cmake)
6 changes: 3 additions & 3 deletions aten/README.md
@@ -12,7 +12,7 @@ does not include templates. That is, there is one `Tensor` type. It can hold a
CPU or CUDA Tensor, and the tensor may have Doubles, Float, Ints, etc. This design
makes it easy to write generic code without templating everything.

See the _generated_ [`Tensor.h` file](doc/Tensor.h) and [`Functions.h` file](doc/Functions.h) for the provided API. Excerpt:
See https://pytorch.org/cppdocs for the provided API. Excerpt:
```c++
Tensor atan2(const Tensor & other) const;
Tensor & atan2_(const Tensor & other);
@@ -88,7 +88,7 @@ for(auto i = 0; i < 100000; i++) {

Expressions like `CUDA(kFloat)` are first-class `at::Type` objects that represent
the type of a Tensor and are used to create Tensors when their type cannot be
inferred. See the _generated_ [Type header](doc/Type.h) for its API.
inferred.

See more in [sample files](src/ATen/test).

@@ -165,7 +165,7 @@ behave as normal tensors.
### Scalars and zero-dimensional tensors

In addition to the `Tensor` objects, ATen also includes `Scalar`s that represent a single number.
Like a Tensor, Scalars are dynamically typed and can hold any one of ATen's [number types](doc/Type.h).
Like a Tensor, Scalars are dynamically typed and can hold any one of ATen's number types.
Scalars can be implicitly constructed from C++ number types. Scalars are needed because some functions like `addmm` take numbers along with Tensors and expect these
numbers to be the same dynamic type as the tensor. They are also used in the API to indicate places where
a function will _always_ return a Scalar value, like `sum`.