Sync with master branch #94

Merged: 104 commits, Aug 2, 2018

Commits

9a9a732
Remove the generation of storage files
cpuhrsch Jul 30, 2018
b4f8c60
Don't use the XML reporter for Catch2. (#10012)
ezyang Jul 30, 2018
73a60ef
Fix Caffe2CTScan error (#9962)
jerryzh168 Jul 30, 2018
ce5f0d4
Enable n-dimensional empty tensors. (#9947)
gchanan Jul 30, 2018
faa96c1
Deal with spaces in einsum equation string (#9994)
t-vi Jul 30, 2018
40a8239
Fix a bug in argument spec (#9958)
zdevito Jul 30, 2018
04939a4
Match parameter names and = default (#9737)
goldsborough Jul 30, 2018
c9eab34
Fix Caffe2 with ATen conda build failure (#10020)
mingzhe09088 Jul 30, 2018
ea3c36b
NumPy Scalar to PyTorch Scalar (#9225)
vishwakftw Jul 30, 2018
6c7fb15
Introduce __array_priority__ on torch.Tensor (#9651)
t-vi Jul 30, 2018
57750bd
Enable ATen in C2 in integration builds to test ONNX ATen conversions…
bddppq Jul 30, 2018
7214754
Check and return when numel() == 0 in Loops.cuh.
gchanan Jul 30, 2018
9987282
Use Retainable as base class for StorageImpl
cpuhrsch Jul 30, 2018
db96a09
Add SIMD version to GFTRL optimizer (#9698)
xiuyanni Jul 30, 2018
e57cb4a
Add a Constant Propagation Pass to the JIT (#8808)
Jul 30, 2018
3e3f40a
Update onnx to latest master (#10024)
onnxbot Jul 30, 2018
e0a0234
Remove C++14 feature (#10022)
Jul 30, 2018
788b2e9
nomnigraph - minor cleanup of Graph.h (#9890)
duc0 Jul 30, 2018
8f0a229
Fix HPTT path for 0-sized inputs.
Jul 31, 2018
51539fa
Add pyyaml into caffe2 requirements.txt for USE_ATEN
bddppq Jul 31, 2018
aa36a5d
Add typing into caffe2 requirements.txt for USE_ATEN (#10047)
bddppq Jul 31, 2018
37a226d
When BUILD_ATEN=OFF, use ATen/core directly (#10019)
ezyang Jul 31, 2018
78b806c
Fix the onnx symbolic for upsample (#10001)
houseroad Jul 31, 2018
6fb9acf
Revert empty n-dim and ATen in C2 integration builds
gchanan Jul 31, 2018
5e5c15d
Add (constant size) TensorLists to JIT, use them in cat and stack nod…
apaszke Jul 31, 2018
bdebdd1
Merge remote-tracking branch 'upstream/master'
iotamudelta Jul 31, 2018
68cbe37
fix the reference link path
103yiran Jul 31, 2018
c2d9d28
Fix typo in tensors.rst (#10073)
mohammad7t Jul 31, 2018
0c11101
Prepare THNN/THCUNN for first class scalars. (#10023)
gchanan Jul 31, 2018
cba03e2
Handle dynamic repeats in onnx symbolic (#10052)
bddppq Jul 31, 2018
f779202
Correctly set CAFFE2_DISABLE_NUMA when USE_NUMA=OFF in cmake (#10061)
bddppq Jul 31, 2018
430e444
Delete some obsolete steps in the ROCm build. (#10005)
ezyang Jul 31, 2018
685224a
Add CTC loss (#9628)
t-vi Jul 31, 2018
1ae520c
Add AT_CHECK for null storage. (#9823)
ezyang Jul 31, 2018
371a786
Errors out when Openmpi < 2.x.x with distributed. (#10015)
Jul 31, 2018
81f78a8
Merge remote-tracking branch 'upstream/master'
iotamudelta Jul 31, 2018
11df981
Missed one removal.
iotamudelta Jul 31, 2018
56d1a82
Add shape inference when converting from onnx to caffe2 (#10037)
houseroad Jul 31, 2018
2422801
fix _pointwise_loss for target gradients (#10018)
Jul 31, 2018
ee17ed6
Add missing dependencies (#10086)
houseroad Jul 31, 2018
58fd6e1
Also add ATen/core tests to oss CI (#10029)
smessmer Jul 31, 2018
1f13453
Slightly relax the constraints on argument and return types to script…
apaszke Jul 31, 2018
d217856
Remove some unnecessary includes. (#10085)
ezyang Jul 31, 2018
e04f8bb
Add virtual dtor for ideep context (#10059)
Jul 31, 2018
ba5d33b
Re-Enable ATen in C2 in integration builds to test ONNX ATen conversions
bddppq Jul 31, 2018
34c7c56
Re-enable empty n-dimensional empty tensor and fix parallel CPU on em…
gchanan Jul 31, 2018
bf744be
Parse and register schema declarations lazily (#9801)
zdevito Aug 1, 2018
ceb0f14
Fix SpatialBN Fusion (#10044)
bwasti Aug 1, 2018
c54d71b
Upgrade old transform passes to newer APIs (#10046)
bwasti Aug 1, 2018
9c0f65f
Remove While op stuff (#10102)
bwasti Aug 1, 2018
799c947
add .gitattributes for EOL conversion. (#9813)
shkit Aug 1, 2018
f2412fb
Allow multiple ops.def and clean up code gen in general
bwasti Aug 1, 2018
aae3732
fixed a newly introduced regression in softmax (#10066)
Aug 1, 2018
294c065
Changed serialization mechanism of LambdaLR scheduler (#9927)
0phoff Aug 1, 2018
7d2bda7
Move DDP broadcast coalesced to C++ (#9729)
goldsborough Aug 1, 2018
fcd567e
Enable Optimization on mobile by default
bwasti Aug 1, 2018
ec807f2
Bail out if netdef has disable_nomnigraph argument
bwasti Aug 1, 2018
3d24704
Force sync device when ops are sampled for observation
Aug 1, 2018
5bd43a7
Refactor Seq2SeqModelCaffe2EnsembleDecoder (#10035)
pritamdamania Aug 1, 2018
6f6a1f2
fix test_load_error_msg failure (Network is unreachable) (#10021)
weiyangfb Aug 1, 2018
6fc75ea
Add CELU activation to pytorch (#8551)
zasdfgbnm Aug 1, 2018
43b1512
Move grid sampler to ATen (#9961)
ssnl Aug 1, 2018
b503109
Guard sizes/strides in THCUNN for scalars.
gchanan Aug 1, 2018
fa6b28b
Move ArrayRef, Backtrace, Error, SmallVector, optional to ATen/core; …
ezyang Aug 1, 2018
2f848ec
Use new PyTorch API to make code simpler
zuoxingdong Aug 1, 2018
ee964c5
NegativeBinomial distribution (#9345)
kashif Aug 1, 2018
a2a7b0c
Initial documentation for building libtorch (#10087)
anderspapitto Aug 1, 2018
f1964c4
Update eigen submodule to fix BUILD_ATEN issue (#10095)
mingzhe09088 Aug 1, 2018
87d57dc
Simplified Operator (#10080)
goldsborough Aug 1, 2018
4070005
Move C++17.h to ATen/core (#10107)
smessmer Aug 1, 2018
f126687
Add a dump() method to IR Node's. (#10106)
Aug 1, 2018
e8f2731
fix a couple problems with libtorch cmake file (#10091)
anderspapitto Aug 1, 2018
5a44be5
Minor nit in comment in CMakeLists.txt
ezyang Aug 1, 2018
f908b2b
Use google protobuf in pytorch onnx import/export
Aug 1, 2018
2d6738e
Fix lint in ATen/core (but not ArrayRef)
ezyang Aug 1, 2018
59af5b9
Move UniqueVoidPtr to ATen/core and apply lint
ezyang Aug 1, 2018
3a9dc0f
ROCM: Escape RPATH when linking with hipcc
mwootton Aug 1, 2018
2d56b5c
Prepare THC for first class scalars (0-dimensional tensors).
gchanan Aug 1, 2018
fb24c52
Prepare TH for first class scalars (0-dimensional tensors).
gchanan Aug 1, 2018
1b1c47d
Update onnx to onnx/onnx@32ac71b (#10126)
onnxbot Aug 1, 2018
ad6d622
Add torch.compiled_with_cxx11_abi(). (#10071)
zou3519 Aug 1, 2018
e2846c3
Improve ArrayRef (#9610)
smessmer Aug 1, 2018
080ae5e
Remove implicit ArrayRef -> vector conversion (#9740)
smessmer Aug 1, 2018
edb9038
Lint ArrayRef.h (#10129)
smessmer Aug 1, 2018
1d427fd
Delete type_ field from TensorImpl, replaced with backend_/scalar_typ…
ezyang Aug 1, 2018
1f6888b
Allow mobile exporter to export string arrays (#10017)
pushkartripathi Aug 1, 2018
4ed5b92
#8518 Support for empty tuples (#10027)
jramseyer Aug 1, 2018
59c355c
Move halfbits2float and float2halfbits conversions to ATen. (#10134)
ezyang Aug 2, 2018
806854a
Pin AMD gpu id in Caffe2 CI (#10144)
bddppq Aug 2, 2018
24bb8ce
Move ATen/Half to ATen/core, and apply lint (#10137)
ezyang Aug 2, 2018
a44d9d6
Fix tensor check logic in logging (#10138)
Aug 2, 2018
191482f
Distinguish TupleLiteral from ListLiteral (#10128)
suo Aug 2, 2018
6b338c8
Implement torch.broadcast_tensors (#10075)
zou3519 Aug 2, 2018
8cc7d33
Renumber typeid.h so that the number lines up with ScalarType (#10139)
ezyang Aug 2, 2018
5699250
Move IdWrapper to ATen/core (#10152)
ezyang Aug 2, 2018
8a25acb
Use angle brackets instead of quotes for includes.
ezyang Aug 2, 2018
57061d6
Auto-batching IR transformation for control flow (#9392)
ChunliF Aug 2, 2018
acbc274
fix bug in 3d group convolution (#9860)
stephenyan1231 Aug 2, 2018
4a5cd4f
nomnigraph - new utility for graph transformation (#10081)
duc0 Aug 2, 2018
99dda1e
Merge pull request #82 from iotamudelta/master
iotamudelta Aug 2, 2018
e220141
Merge remote-tracking branch 'upstream/master'
iotamudelta Aug 2, 2018
59a4ef4
Merge pull request #86 from mwootton/rocm_linking
iotamudelta Aug 2, 2018
8035a4b
Merge pull request #90 from iotamudelta/master
iotamudelta Aug 2, 2018
8602912
Merge branch 'master' of https://github.com/ROCmSoftwarePlatform/pyto…
Aug 2, 2018

Files changed

5 changes: 4 additions & 1 deletion .clang-tidy
@@ -2,13 +2,15 @@
# NOTE: there must be no spaces before the '-', so put the comma first.
Checks: '
*
+,clang-analyzer-*
+,modernize-*
,-cert-err58-cpp
,-cert-err60-cpp
,-clang-diagnostic-*
,-cppcoreguidelines-owning-memory
,-cppcoreguidelines-pro-bounds-array-to-pointer-decay
,-cppcoreguidelines-pro-bounds-constant-array-index
,-cppcoreguidelines-pro-type-member-init
,-cppcoreguidelines-pro-type-static-cast-downcast
,-cppcoreguidelines-pro-type-vararg
,-cppcoreguidelines-special-member-functions
@@ -23,9 +25,11 @@ Checks: '
,-hicpp-braces-around-statements
,-hicpp-explicit-conversions
,-hicpp-no-array-decay
+,-hicpp-signed-bitwise
,-hicpp-special-member-functions
,-hicpp-vararg
,-llvm-header-guard
+,-llvm-include-order
,-llvm-namespace-comment
,-misc-unused-parameters
,-modernize-make-unique
@@ -34,7 +38,6 @@ Checks: '
,-readability-braces-around-statements
,-readability-else-after-return
,-readability-named-parameter
-,clang-analyzer-*
'
WarningsAsErrors: ''
HeaderFilterRegex: 'torch/csrc/'
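
Not part of the diff, but for context: the config above now enables the `clang-analyzer-*` and `modernize-*` families wholesale while opting out of individual checks. A minimal C++ sketch of code that the newly disabled `-hicpp-signed-bitwise` check would otherwise flag:

```cpp
#include <cstdio>

int main() {
  // hicpp-signed-bitwise reports bitwise arithmetic on signed operands, so
  // ordinary flag manipulation like the line below would be flagged.
  // Disabling the check (as this PR does) is the usual alternative to
  // sprinkling casts to unsigned through the codebase.
  int flags = 0;
  flags |= 1 << 3;
  std::printf("%d\n", flags);  // prints 8
  return 0;
}
```
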
1 change: 1 addition & 0 deletions .gitattributes
@@ -0,0 +1 @@
+*.bat text eol=crlf
2 changes: 1 addition & 1 deletion .jenkins/caffe2/build.sh
@@ -124,7 +124,7 @@ CMAKE_ARGS+=("-DUSE_OBSERVERS=ON")
CMAKE_ARGS+=("-DUSE_ZSTD=ON")
CMAKE_ARGS+=("-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX}")

-if [[ $BUILD_ENVIRONMENT == *-aten-* ]]; then
+if [[ $BUILD_ENVIRONMENT == *-aten-* || -n "$INTEGRATED" ]]; then
if [[ CMAKE_ARGS != *USE_ATEN* ]] && [[ CMAKE_ARGS != *BUILD_ATEN* ]]; then
CMAKE_ARGS+=("-DBUILD_ATEN=ON")
fi
12 changes: 11 additions & 1 deletion .jenkins/caffe2/test.sh
@@ -64,7 +64,13 @@ for test in $(find "${INSTALL_PREFIX}/test" -executable -type f); do
;;
*/aten/*)
# ATen uses test framework Catch2
-"$test" -r=xml -o "${junit_reports_dir}/$(basename $test).xml"
+# NB: We do NOT use the xml test reporter, because
+# Catch doesn't support multiple reporters
+# c.f. https://github.com/catchorg/Catch2/blob/master/docs/release-notes.md#223
+# which means that enabling XML output means you lose useful stdout
+# output for Jenkins. It's more important to have useful console
+# output than it is to have XML output for Jenkins.
+"$test"
;;
*)
"$test" --gtest_output=xml:"$gtest_reports_dir/$(basename $test).xml"
@@ -109,6 +115,10 @@ if [[ $BUILD_ENVIRONMENT == *-rocm* ]]; then
# Our cuda top_k op has some asm code, the hipified version doesn't
# compile yet, so we don't have top_k operator for now
rocm_ignore_test+=("--ignore $CAFFE2_PYPATH/python/operator_test/top_k_test.py")

+# Our AMD CI boxes have 4 gpus on each
+# Remove this once we have added multi-gpu support
+export HIP_VISIBLE_DEVICES=$(($BUILD_NUMBER % 4))
fi

# Python tests
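
For reference, the ATen test binaries this script drives are Catch2 (v2) executables; a hypothetical minimal one is sketched below. The deleted `-r=xml -o …` flags were Catch2's reporter and output-file options, and since Catch2 v2 supports only a single reporter at a time, selecting XML is exactly what suppressed the console output the new comment wants to keep.

```cpp
// Minimal Catch2 v2 test binary (hypothetical, for illustration).
// CATCH_CONFIG_MAIN generates a main() that parses the CLI the script
// invokes, so running it with no flags uses the default console reporter.
#define CATCH_CONFIG_MAIN
#include <catch2/catch.hpp>

TEST_CASE("sanity") {
  REQUIRE(2 + 2 == 4);
}
```
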
9 changes: 3 additions & 6 deletions .jenkins/pytorch/build.sh
@@ -43,12 +43,9 @@ if [[ "$BUILD_ENVIRONMENT" == *rocm* ]]; then
# https://github.com/RadeonOpenCompute/hcc#hcc-with-thinlto-linking
export KMTHINLTO=1

-sudo chown -R jenkins:jenkins /usr/local
-rm -rf "$(dirname "${BASH_SOURCE[0]}")/../../../pytorch_amd/" || true
-python "$(dirname "${BASH_SOURCE[0]}")/../../tools/amd_build/build_pytorch_amd.py"
-
-USE_ROCM=1 python setup.py install
-exit
+python tools/amd_build/build_pytorch_amd.py
+USE_ROCM=1 python setup.py install --user
+exit 0
fi

# TODO: Don't install this here
6 changes: 5 additions & 1 deletion CMakeLists.txt
@@ -214,9 +214,10 @@ if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-strict-overflow")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-strict-aliasing")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=deprecated-declarations")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-stringop-overflow")
# These flags are not available in GCC-4.8.5. Set only when using clang.
# Compared against https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Option-Summary.html
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-invalid-partial-specialization")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-typedef-redefinition")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unknown-warning-option")
@@ -226,6 +227,7 @@ if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-c++14-extensions")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-constexpr-not-const")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-braces")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Qunused-arguments")
endif()
if ((APPLE AND (NOT ("${CLANG_VERSION_STRING}" VERSION_LESS "9.0")))
OR (CMAKE_COMPILER_IS_GNUCXX
@@ -284,6 +286,8 @@ include_directories(BEFORE ${PROJECT_SOURCE_DIR})
# in PROJECT_SOURCE_DIR.
include_directories(BEFORE ${PROJECT_BINARY_DIR})

+include_directories(BEFORE ${PROJECT_SOURCE_DIR}/aten/src/)
+
# ---[ Old caffe protobuf
if(BUILD_CAFFE2)
add_subdirectory(caffe/proto)
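
The new `include_directories(BEFORE ${PROJECT_SOURCE_DIR}/aten/src/)` entry is what lets sources anywhere in the tree reach the relocated ATen/core headers with angle-bracket includes, as the Allocator.h change further down does. An illustrative (hypothetical) consumer:

```cpp
// These resolve against ${PROJECT_SOURCE_DIR}/aten/src/, which the change
// above prepends to the include path (assumes an in-tree build of this
// revision).
#include <ATen/core/ArrayRef.h>
#include <ATen/core/UniqueVoidPtr.h>
```
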
1 change: 1 addition & 0 deletions aten/CMakeLists.txt
@@ -146,4 +146,5 @@ if (CAFFE2_CMAKE_BUILDING_WITH_MAIN_REPO)
set(ATen_THIRD_PARTY_INCLUDE ${ATen_THIRD_PARTY_INCLUDE} PARENT_SCOPE)
set(ATen_CPU_DEPENDENCY_LIBS ${ATen_CPU_DEPENDENCY_LIBS} PARENT_SCOPE)
set(ATen_CUDA_DEPENDENCY_LIBS ${ATen_CUDA_DEPENDENCY_LIBS} PARENT_SCOPE)
+set(ATen_CORE_TEST_SRCS ${ATen_CORE_TEST_SRCS} PARENT_SCOPE)
endif()
2 changes: 1 addition & 1 deletion aten/src/ATen/Allocator.h
@@ -6,7 +6,7 @@
#include <ATen/Error.h>
#include <ATen/Retainable.h>
#include <ATen/Device.h>
-#include <ATen/detail/UniqueVoidPtr.h>
+#include <ATen/core/UniqueVoidPtr.h>

namespace at {

1 change: 1 addition & 0 deletions aten/src/ATen/ArrayRef.cpp
@@ -0,0 +1 @@
+#include <ATen/ArrayRef.h>
192 changes: 1 addition & 191 deletions aten/src/ATen/ArrayRef.h
@@ -1,192 +1,2 @@
//===--- ArrayRef.h - Array Reference Wrapper -------------------*- C++ -*-===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//

// ATen: modified from llvm::ArrayRef.
// removed llvm-specific functionality
// removed some implicit const -> non-const conversions that rely on
// complicated std::enable_if meta-programming
// removed a bunch of slice variants for simplicity...

#pragma once

#include <ATen/Error.h>
#include <ATen/SmallVector.h>

#include <array>
#include <iterator>
#include <vector>

namespace at {
/// ArrayRef - Represent a constant reference to an array (0 or more elements
/// consecutively in memory), i.e. a start pointer and a length. It allows
/// various APIs to take consecutive elements easily and conveniently.
///
/// This class does not own the underlying data, it is expected to be used in
/// situations where the data resides in some other buffer, whose lifetime
/// extends past that of the ArrayRef. For this reason, it is not in general
/// safe to store an ArrayRef.
///
/// This is intended to be trivially copyable, so it should be passed by
/// value.
template<typename T>
class ArrayRef {
public:
typedef const T *iterator;
typedef const T *const_iterator;
typedef size_t size_type;

typedef std::reverse_iterator<iterator> reverse_iterator;

private:
/// The start of the array, in an external buffer.
const T *Data;

/// The number of elements.
size_type Length;

public:
/// @name Constructors
/// @{

/// Construct an empty ArrayRef.
/*implicit*/ ArrayRef() : Data(nullptr), Length(0) {}

/// Construct an ArrayRef from a single element.
/*implicit*/ ArrayRef(const T &OneElt)
: Data(&OneElt), Length(1) {}

/// Construct an ArrayRef from a pointer and length.
/*implicit*/ ArrayRef(const T *data, size_t length)
: Data(data), Length(length) {}

/// Construct an ArrayRef from a range.
ArrayRef(const T *begin, const T *end)
: Data(begin), Length(end - begin) {}

/// Construct an ArrayRef from a SmallVector. This is templated in order to
/// avoid instantiating SmallVectorTemplateCommon<T> whenever we
/// copy-construct an ArrayRef.
template<typename U>
/*implicit*/ ArrayRef(const SmallVectorTemplateCommon<T, U> &Vec)
: Data(Vec.data()), Length(Vec.size()) {
}

/// Construct an ArrayRef from a std::vector.
template<typename A>
/*implicit*/ ArrayRef(const std::vector<T, A> &Vec)
: Data(Vec.data()), Length(Vec.size()) {}

/// Construct an ArrayRef from a std::array
template <size_t N>
/*implicit*/ constexpr ArrayRef(const std::array<T, N> &Arr)
: Data(Arr.data()), Length(N) {}

/// Construct an ArrayRef from a C array.
template <size_t N>
/*implicit*/ constexpr ArrayRef(const T (&Arr)[N]) : Data(Arr), Length(N) {}

/// Construct an ArrayRef from a std::initializer_list.
/*implicit*/ ArrayRef(const std::initializer_list<T> &Vec)
: Data(Vec.begin() == Vec.end() ? (T*)nullptr : Vec.begin()),
Length(Vec.size()) {}

/// @}
/// @name Simple Operations
/// @{

const_iterator begin() const { return Data; }
const_iterator end() const { return Data + Length; }

reverse_iterator rbegin() const { return reverse_iterator(end()); }
reverse_iterator rend() const { return reverse_iterator(begin()); }

/// empty - Check if the array is empty.
bool empty() const { return Length == 0; }

const T *data() const { return Data; }

/// size - Get the array size.
size_t size() const { return Length; }

/// front - Get the first element.
const T &front() const {
AT_CHECK(!empty(), "ArrayRef: attempted to access front() of empty list");
return Data[0];
}

/// back - Get the last element.
const T &back() const {
AT_CHECK(!empty(), "ArrayRef: attempted to access back() of empty list");
return Data[Length-1];
}

/// equals - Check for element-wise equality.
bool equals(ArrayRef RHS) const {
if (Length != RHS.Length)
return false;
return std::equal(begin(), end(), RHS.begin());
}

/// slice(n, m) - Chop off the first N elements of the array, and keep M
/// elements in the array.
ArrayRef<T> slice(size_t N, size_t M) const {
AT_CHECK(N+M <= size(), "ArrayRef: invalid slice, ", N, " + ", M, " is not <= ", size());
return ArrayRef<T>(data()+N, M);
}

/// slice(n) - Chop off the first N elements of the array.
ArrayRef<T> slice(size_t N) const { return slice(N, size() - N); }

/// @}
/// @name Operator Overloads
/// @{
const T &operator[](size_t Index) const {
return Data[Index];
}

/// Vector compatibility
const T &at(size_t Index) const {
AT_CHECK(Index < Length, "ArrayRef: invalid index ", Index, " for length ", Length);
return Data[Index];
}

/// Disallow accidental assignment from a temporary.
///
/// The declaration here is extra complicated so that "arrayRef = {}"
/// continues to select the move assignment operator.
template <typename U>
typename std::enable_if<std::is_same<U, T>::value, ArrayRef<T>>::type &
operator=(U &&Temporary) = delete;

/// Disallow accidental assignment from a temporary.
///
/// The declaration here is extra complicated so that "arrayRef = {}"
/// continues to select the move assignment operator.
template <typename U>
typename std::enable_if<std::is_same<U, T>::value, ArrayRef<T>>::type &
operator=(std::initializer_list<U>) = delete;

/// @}
/// @name Expensive Operations
/// @{
std::vector<T> vec() const {
return std::vector<T>(Data, Data+Length);
}

/// @}
/// @name Conversion operators
/// @{
operator std::vector<T>() const {
return std::vector<T>(Data, Data+Length);
}

/// @}
};

} // end namespace at
#include <ATen/core/ArrayRef.h>
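
With the implementation moved, `ATen/ArrayRef.h` now simply forwards to `ATen/core/ArrayRef.h`, so existing call sites keep compiling. A minimal usage sketch against the API shown above (assumes an in-tree ATen build of this revision; not part of the PR):

```cpp
#include <ATen/ArrayRef.h>  // forwards to ATen/core/ArrayRef.h after this PR

#include <cstdint>
#include <iostream>
#include <vector>

// ArrayRef is a non-owning view: it is only valid while the backing
// storage (here, the vector or initializer list) is alive.
static int64_t sum(at::ArrayRef<int64_t> xs) {
  int64_t total = 0;
  for (int64_t x : xs) {
    total += x;
  }
  return total;
}

int main() {
  std::vector<int64_t> v = {1, 2, 3, 4};
  std::cout << sum(v) << "\n";          // 10, implicit from std::vector
  std::cout << sum({5, 6, 7}) << "\n";  // 18, via initializer_list
  std::cout << sum(at::ArrayRef<int64_t>(v).slice(1, 2)) << "\n";  // {2,3} -> 5
  return 0;
}
```

Note the non-owning semantics called out in the class comment: each view above is used only while its backing storage is still alive.
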
28 changes: 1 addition & 27 deletions aten/src/ATen/Backtrace.h
@@ -1,28 +1,2 @@
#pragma once

#include <cstddef>
#include <string>
#include <typeinfo>

#include <ATen/ATenGeneral.h>

namespace at {
/// Utility to demangle a C++ symbol name.
AT_API std::string demangle(const char* name);

/// Returns the printable name of the type.
template <typename T>
inline const char* demangle_type() {
#ifdef __GXX_RTTI
static const std::string name = demangle(typeid(T).name());
return name.c_str();
#else // __GXX_RTTI
return "(RTTI disabled, cannot show name)";
#endif // __GXX_RTTI
}

AT_API std::string get_backtrace(
size_t frames_to_skip = 0,
size_t maximum_number_of_frames = 64,
bool skip_python_frames = true);
} // namespace at
#include <ATen/core/Backtrace.h>
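
Same pattern as ArrayRef.h: the declarations shown above (`demangle`, `demangle_type`, `get_backtrace`) keep their old include path via the forwarding header. A small sketch of what they provide (assumes an ATen build with RTTI enabled; not part of the PR):

```cpp
#include <ATen/Backtrace.h>  // forwards to ATen/core/Backtrace.h after this PR

#include <iostream>
#include <vector>

int main() {
  // demangle_type<T>() runs the compiler's RTTI name through demangle(),
  // e.g. GCC's "St6vectorIiSaIiEE" becomes a readable type name.
  std::cout << at::demangle_type<std::vector<int>>() << "\n";

  // get_backtrace() formats the current call stack; the defaults in the
  // header are frames_to_skip = 0, maximum_number_of_frames = 64.
  std::cout << at::get_backtrace(/*frames_to_skip=*/0,
                                 /*maximum_number_of_frames=*/8)
            << "\n";
  return 0;
}
```
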
6 changes: 4 additions & 2 deletions aten/src/ATen/CMakeLists.txt
@@ -44,6 +44,7 @@ CONFIGURE_FILE(cuda/CUDAConfig.h.in "${CMAKE_CURRENT_SOURCE_DIR}/cuda/CUDAConfig
# NB: If you edit these globs, you'll have to update setup.py package_data as well
FILE(GLOB base_h "*.h" "detail/*.h")
FILE(GLOB base_cpp "*.cpp" "detail/*.cpp")
+add_subdirectory(core)
FILE(GLOB cuda_h "cuda/*.h" "cuda/detail/*.h" "cuda/*.cuh" "cuda/detail/*.cuh")
FILE(GLOB cuda_cpp "cuda/*.cpp" "cuda/detail/*.cpp")
FILE(GLOB cuda_cu "cuda/*.cu" "cuda/detail/*.cu")
@@ -62,7 +63,7 @@ FILE(GLOB native_cuda_cpp "native/cuda/*.cpp")
FILE(GLOB native_mkl_cpp "native/mkl/*.cpp")
FILE(GLOB native_mkldnn_cpp "native/mkldnn/*.cpp")

-set(all_cpu_cpp ${base_cpp} ${native_cpp} ${native_sparse_cpp} ${native_mkl_cpp} ${native_mkldnn_cpp} ${generated_cpp} ${ATen_CPU_SRCS} ${cpu_kernel_cpp})
+set(all_cpu_cpp ${base_cpp} ${ATen_CORE_SRCS} ${native_cpp} ${native_sparse_cpp} ${native_mkl_cpp} ${native_mkldnn_cpp} ${generated_cpp} ${ATen_CPU_SRCS} ${cpu_kernel_cpp})
if(AT_MKL_ENABLED)
set(all_cpu_cpp ${all_cpu_cpp} ${mkl_cpp})
endif()
@@ -393,7 +394,7 @@ INSTALL(FILES "${CMAKE_CURRENT_BINARY_DIR}/cmake-exports/ATenConfig.cmake"
DESTINATION "${AT_INSTALL_SHARE_DIR}/cmake/ATen")

# https://stackoverflow.com/questions/11096471/how-can-i-install-a-hierarchy-of-files-using-cmake
-FOREACH(HEADER ${base_h} ${cuda_h} ${cudnn_h})
+FOREACH(HEADER ${base_h} ${ATen_CORE_HEADERS} ${cuda_h} ${cudnn_h})
string(REPLACE "${CMAKE_CURRENT_SOURCE_DIR}/" "" HEADER_SUB ${HEADER})
GET_FILENAME_COMPONENT(DIR ${HEADER_SUB} DIRECTORY)
INSTALL(FILES ${HEADER} DESTINATION ${AT_INSTALL_INCLUDE_DIR}/ATen/${DIR})
@@ -444,6 +445,7 @@ if (NOT CAFFE2_CMAKE_BUILDING_WITH_MAIN_REPO)
endif()

# Pass source, includes, and libs to parent
+set(ATen_CORE_SRCS ${ATen_CORE_SRCS} PARENT_SCOPE)
set(ATen_CPU_SRCS ${ATen_CPU_SRCS} PARENT_SCOPE)
set(ATen_CUDA_SRCS ${ATen_CUDA_SRCS} PARENT_SCOPE)
set(ATen_CPU_TEST_SRCS ${ATen_CPU_TEST_SRCS} PARENT_SCOPE)