
Integrate from upstream #225

Merged: 53 commits, merged Sep 27, 2018
Changes from all commits (53 commits)
1178851
Get rid of most usages of Type.tensor. (#12002)
gchanan Sep 24, 2018
a9e6a67
Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (…
cpuhrsch Sep 24, 2018
1a1d79e
Remove TIndex typedef from core/common.h (#11993)
cpuhrsch Sep 24, 2018
a6f1ae7
set up c10 scaffolding. Move macros proper first.
Yangqing Sep 24, 2018
ffbac7d
Miscellaneous updates for CUDA 10 (#12017)
syed-ahmed Sep 24, 2018
5141482
Stop moving constants into DifferentiableSubgraphs (#11809)
apaszke Sep 24, 2018
1c09bfd
Make promoteType(half, integer) -> half (#11941)
colesbury Sep 24, 2018
e05d689
Unify C++ API with C++ extensions (#11510)
goldsborough Sep 24, 2018
70e4b3e
Revert D10006069: Remove TIndex typedef from core/common.h
dinhvh Sep 24, 2018
b7c302d
Make gen_jit_dispatch runnable (#12018)
bwasti Sep 24, 2018
3ae6ee4
Move CreateContext to global registry (#11688)
jerryzh168 Sep 24, 2018
a830964
Eliminate no-op adds and muls in peephole pass (#11801)
apaszke Sep 25, 2018
9068a46
Fix deprecated function warning in ONNX model test. (#11827)
Sep 25, 2018
5d4624a
Fix return temporary as reference in MPI backend (#11947)
pietern Sep 25, 2018
86e025f
magma-cuda should reference updated versions (#12000)
Sep 25, 2018
dfa03e9
Fix mispelling of AVAILABLE. (#12016)
ezyang Sep 25, 2018
17a65bf
Removing some dependency edges from Blob to other caffe2 (#11923)
smessmer Sep 25, 2018
3417a1e
Prepend a "const" to a for loop in printPyObject. (#11857)
xuhdev Sep 25, 2018
2cdf98a
Back out "Removing some dependency edges from Blob to other caffe2"
Sep 25, 2018
a165d92
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Sep 25, 2018
71b99f2
Give default values to members of TensorImpl. (#12033)
ezyang Sep 25, 2018
d4ce41c
Rename tensor_impl_ to impl_ in Tensor (#12035)
ezyang Sep 25, 2018
0947712
Move Factory functions from Type to TypeExtendedInterface. (#12025)
gchanan Sep 25, 2018
fcb3ccf
Don't record Git version automatically via cmake (#12046)
ezyang Sep 25, 2018
3deb479
Replace 'struct Tensor' with 'class Tensor'. (#12034)
ezyang Sep 25, 2018
d7e11e3
Revert "Move CreateContext to global registry (#11688)" (#12049)
ezyang Sep 25, 2018
7122f8b
Disable more flaky tests on CircleCI (#11399)
Sep 25, 2018
364ae10
nomnigraph - easy - add some python test helper methods (#12020)
duc0 Sep 25, 2018
94c513c
Improve pybind11 message (#11640)
orionr Sep 25, 2018
8f0db9b
Removing some dependency edges from Blob to other caffe2 (#12043)
smessmer Sep 25, 2018
a106388
Free MAGMA queues after use (#11882)
vishwakftw Sep 25, 2018
b263078
Fix CUDA division by a scalar on large arrays. (#12023)
colesbury Sep 25, 2018
ceadde2
Add some more locations to search for nccl. (#12063)
ezyang Sep 25, 2018
aa1adde
Refactor fastGet/fastSet for clarity, removing a null pointer check. …
ezyang Sep 25, 2018
e53e8df
Support TypeIdentifier::name() (#12036)
ezyang Sep 25, 2018
1e28294
Delete some unused variables. (#12059)
ezyang Sep 25, 2018
b7b9e3c
Fix "identifier following the 'template' keyword does not refer to a …
modocache Sep 25, 2018
658386a
Make USE_IDEEP work again (#12026)
Sep 25, 2018
90bcf41
Add safety asserts for methods on TensorImpl which don't work on Vari…
ezyang Sep 26, 2018
28dba2f
Unify all *_EXPORT and *_IMPORT macros across c++ backend (#12019)
Yangqing Sep 26, 2018
db2f7de
Fallback CreateMutex/AtomicIter operators for mkl-dnn
PenghuiCheng Sep 26, 2018
807de9a
fix segfault when grad to a hook fn is None (#12028)
weiyangfb Sep 26, 2018
8ff435c
Use tempfile during serialized test comparison (#12021)
ajyu Sep 26, 2018
b7ebc00
Move Blob to ATen/core (#11924)
smessmer Sep 26, 2018
65cbb82
IValue can store Blob (#11414)
smessmer Sep 26, 2018
21ed7e5
Blob doesn't allow access to destroyCall anymore (#11548)
smessmer Sep 26, 2018
c8a0b11
add autodiff expressions for common operations (#11832)
zou3519 Sep 26, 2018
02d7c88
Unify versions across setup.py, libtorch, and libcaffe2 (#12053)
orionr Sep 26, 2018
b535aec
Fix warnings emitted when testing distributions (#12038)
vishwakftw Sep 26, 2018
18f9c07
Enable tracing of tensor factories with an out argument
apaszke Sep 26, 2018
1ee79d7
Merge remote-tracking branch 'rocm_upstream/upstream' into ifu
iotamudelta Sep 26, 2018
44a17a0
I believe we need to export these API for ROCm.
iotamudelta Sep 26, 2018
83cf9eb
Do not ifdef this out either.
iotamudelta Sep 26, 2018
31 changes: 21 additions & 10 deletions CMakeLists.txt
@@ -5,11 +5,10 @@ cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
# ---[ Project and semantic versioning.
project(Caffe2 CXX C)

set(CAFFE2_VERSION_MAJOR 0)
set(CAFFE2_VERSION_MINOR 8)
set(CAFFE2_VERSION_PATCH 2)
set(CAFFE2_VERSION
"${CAFFE2_VERSION_MAJOR}.${CAFFE2_VERSION_MINOR}.${CAFFE2_VERSION_PATCH}")
set(CMAKE_CXX_STANDARD 11)
if (NOT MSVC)
set(CMAKE_C_STANDARD 11)
endif()

# One variable that determines whether the current cmake process is being run
# with the main Caffe2 library. This is useful for building modules - if
@@ -134,6 +133,22 @@ if (ANDROID OR IOS)
set(BUILD_ATEN_MOBILE ON)
endif()

# ---[ Utils
# TODO: merge the following 3 files into cmake/public/utils.cmake.
include(cmake/Utils.cmake)
include(cmake/public/utils.cmake)

# ---[ Version numbers for generated libraries
set(TORCH_DEFAULT_VERSION "1.0.0")
set(TORCH_BUILD_VERSION "${TORCH_DEFAULT_VERSION}" CACHE STRING "Torch build version")
if (NOT TORCH_BUILD_VERSION)
# An empty string was specified so force version to the default
set(TORCH_BUILD_VERSION "${TORCH_DEFAULT_VERSION}"
CACHE STRING "Torch build version" FORCE)
endif()
caffe2_parse_version_str(TORCH ${TORCH_BUILD_VERSION})
caffe2_parse_version_str(CAFFE2 ${TORCH_BUILD_VERSION})

# ---[ CMake scripts + modules
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules)

@@ -160,11 +175,6 @@ include(cmake/MiscCheck.cmake)
# External projects
include(ExternalProject)

# ---[ Utils
# TODO: merge the following 3 files into cmake/public/utils.cmake.
include(cmake/Utils.cmake)
include(cmake/public/utils.cmake)

# ---[ Dependencies
include(cmake/Dependencies.cmake)

@@ -294,6 +304,7 @@ include_directories(BEFORE ${PROJECT_BINARY_DIR})
include_directories(BEFORE ${PROJECT_SOURCE_DIR}/aten/src/)

# ---[ Main build
add_subdirectory(c10)
add_subdirectory(caffe2)

# --[ Documentation
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -262,9 +262,9 @@ than Linux, which are worth keeping in mind when fixing these problems.
1. Symbols are NOT exported by default on Windows; instead, you have to explicitly
mark a symbol as exported/imported in a header file with `__declspec(dllexport)` /
`__declspec(dllimport)`. We have codified this pattern into a set of macros
which follow the convention `*_API`, e.g., `AT_API` inside ATen. (Every separate
shared library needs a unique macro name, because symbol visibility is on a per
shared library basis.)
which follow the convention `*_API`, e.g., `CAFFE2_API` inside Caffe2 and ATen.
(Every separate shared library needs a unique macro name, because symbol visibility
is on a per shared library basis. See c10/macros/Macros.h for more details.)

The upshot is if you see an "unresolved external" error in your Windows build, this
is probably because you forgot to mark a function with `*_API`. However, there is
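
The export-macro pattern described above boils down to a small block of preprocessor logic per shared library. A minimal sketch, with hypothetical names `MYLIB_API` and `MYLIB_BUILD_MAIN_LIB` (the real macros for this codebase are defined in c10/macros/Macros.h):

```cpp
// Minimal sketch of a per-library export macro; every shared library
// needs its own uniquely named variant of this block.
#if defined(_WIN32)
#  if defined(MYLIB_BUILD_MAIN_LIB)   // defined while compiling the DLL itself
#    define MYLIB_API __declspec(dllexport)
#  else                               // defined for consumers linking against the DLL
#    define MYLIB_API __declspec(dllimport)
#  endif
#else
// Non-Windows toolchains export symbols by default; making visibility explicit
// keeps behavior consistent when -fvisibility=hidden is in effect.
#  define MYLIB_API __attribute__((visibility("default")))
#endif

// Usage: annotate every symbol that must be reachable across the DLL boundary.
MYLIB_API void set_num_threads(int);
```
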
2 changes: 1 addition & 1 deletion README.md
@@ -163,7 +163,7 @@ conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
conda install -c mingfeima mkldnn

# Add LAPACK support for the GPU
conda install -c pytorch magma-cuda80 # or magma-cuda90 if CUDA 9
conda install -c pytorch magma-cuda92 # or [magma-cuda80 | magma-cuda91] depending on your cuda version
```

On macOS
8 changes: 4 additions & 4 deletions aten/src/ATen/CPUGeneral.h
@@ -1,12 +1,12 @@
#pragma once

// Using AT_API is crucial as otherwise you'll see
// Using CAFFE2_API is crucial as otherwise you'll see
// linking errors using MSVC
// See https://msdn.microsoft.com/en-us/library/a90k134d.aspx
// This header adds this if using AT_API
// This header adds this if using CAFFE2_API
#include "ATen/core/ATenGeneral.h"

namespace at {
AT_API void set_num_threads(int);
AT_API int get_num_threads();
CAFFE2_API void set_num_threads(int);
CAFFE2_API int get_num_threads();
}
2 changes: 1 addition & 1 deletion aten/src/ATen/CPUTypeDefault.h
@@ -3,7 +3,7 @@

namespace at {

struct AT_API CPUTypeDefault : public TypeDefault {
struct CAFFE2_API CPUTypeDefault : public TypeDefault {
CPUTypeDefault(TensorTypeId type_id, bool is_variable, bool is_undefined)
: TypeDefault(type_id, is_variable, is_undefined) {}
Allocator* allocator() const override;
16 changes: 8 additions & 8 deletions aten/src/ATen/Context.h
@@ -22,10 +22,10 @@

namespace at {

struct Tensor;
class Tensor;

class AT_API Context {
public:
class CAFFE2_API Context {
public:
Context();
TypeExtendedInterface* getNonVariableTypeRaw(Backend p, ScalarType s) {
return static_cast<TypeExtendedInterface*>(globalLegacyTypeDispatch().getNonVariableTypeRaw(p, s));
@@ -133,7 +133,7 @@ class AT_API Context {
friend struct Type;
};

AT_API Context & globalContext();
CAFFE2_API Context& globalContext();

static inline void init() {
globalContext();
@@ -153,11 +153,11 @@ static inline TypeExtendedInterface& getNonVariableType(DeviceType p, ScalarType
return globalContext().getNonVariableType(deviceTypeToBackend(p), s);
}

AT_API TypeExtendedInterface& getType(TensorOptions options);
AT_API TypeExtendedInterface& getType(const TensorImpl*);
AT_API TypeExtendedInterface& getType(const Tensor&);
CAFFE2_API TypeExtendedInterface& getType(TensorOptions options);
CAFFE2_API TypeExtendedInterface& getType(const TensorImpl*);
CAFFE2_API TypeExtendedInterface& getType(const Tensor&);

AT_API Allocator* getCPUAllocator();
CAFFE2_API Allocator* getCPUAllocator();

static inline TypeExtendedInterface& CPU(ScalarType s) {
return getNonVariableType(Backend::CPU, s);
6 changes: 3 additions & 3 deletions aten/src/ATen/DLConvertor.h
@@ -10,8 +10,8 @@

namespace at {

AT_API ScalarType toScalarType(const DLDataType& dtype);
AT_API DLManagedTensor * toDLPack(const Tensor& src);
AT_API Tensor fromDLPack(const DLManagedTensor* src);
CAFFE2_API ScalarType toScalarType(const DLDataType& dtype);
CAFFE2_API DLManagedTensor* toDLPack(const Tensor& src);
CAFFE2_API Tensor fromDLPack(const DLManagedTensor* src);

} //namespace at
9 changes: 6 additions & 3 deletions aten/src/ATen/ExpandUtils.h
@@ -9,9 +9,12 @@

namespace at {

AT_API std::vector<int64_t> infer_size(IntList a, IntList b);
AT_API std::tuple<std::vector<int64_t>, std::vector<int64_t> > inferExpandGeometry(
IntList tensor_sizes, IntList tensor_strides, IntList sizes);
CAFFE2_API std::vector<int64_t> infer_size(IntList a, IntList b);
CAFFE2_API std::tuple<std::vector<int64_t>, std::vector<int64_t>>
inferExpandGeometry(
IntList tensor_sizes,
IntList tensor_strides,
IntList sizes);

// avoid copy-construction of Tensor by using a reference_wrapper.
inline void check_defined(std::initializer_list<std::reference_wrapper<const Tensor>> tensors, const char *api_name) {
6 changes: 3 additions & 3 deletions aten/src/ATen/SparseTensorImpl.h
@@ -5,7 +5,7 @@
#include "ATen/core/Error.h"

namespace at {
struct AT_API SparseTensorImpl : public TensorImpl {
struct CAFFE2_API SparseTensorImpl : public TensorImpl {
// Stored in COO format, indices + values.

// INVARIANTS:
@@ -157,11 +157,11 @@ struct AT_API SparseTensorImpl : public TensorImpl {
sparseDims_ = sparseDims;
denseDims_ = denseDims;

auto empty_indices = indices().type().tensor({sparseDims, 0});
auto empty_indices = at::empty({sparseDims, 0}, indices().options());
std::vector<int64_t> values_size = {0};
auto dense_size = sizes().slice(sparseDims);
values_size.insert(values_size.end(), dense_size.begin(), dense_size.end());
auto empty_values = values().type().tensor(values_size);
auto empty_values = at::empty(values_size, values().options());
set_indices_and_values_unsafe(empty_indices, empty_values);
refresh_numel();
}
4 changes: 0 additions & 4 deletions aten/src/ATen/TensorGeometry.cpp
@@ -12,8 +12,4 @@ bool TensorGeometry::is_contiguous() const {
return at::geometry_is_contiguous(sizes_, strides_);
}

Tensor TensorGeometry::zeros_with_stride(const Type& type) const {
return type.tensor(sizes_, strides_).zero_();
}

} // namespace at
5 changes: 1 addition & 4 deletions aten/src/ATen/TensorGeometry.h
@@ -5,7 +5,7 @@

namespace at {

struct AT_API TensorGeometry {
struct CAFFE2_API TensorGeometry {
TensorGeometry() : storage_offset_(0) {}

explicit TensorGeometry(IntList sizes)
@@ -30,9 +30,6 @@ struct AT_API TensorGeometry {
// true if the tensor is contiguous
bool is_contiguous() const;

// creates a new tensor with the sizes and strides of the source
Tensor zeros_with_stride(const Type& type) const;

int64_t dim() const { return sizes_.size(); }
int64_t size(int64_t dim) const {
dim = maybe_wrap_dim(dim, this->dim());
6 changes: 3 additions & 3 deletions aten/src/ATen/TensorOperators.h
@@ -68,9 +68,9 @@ inline Tensor Tensor::operator[](int64_t index) const {
#define AT_FORALL_BINARY_OPS(_) \
_(+,x.add(y), y.add(x)) \
_(*,x.mul(y), y.mul(x)) \
_(-,x.sub(y), y.type().tensor().resize_(y.sizes()).fill_(x).sub_(y)) \
_(/,x.div(y), y.type().tensor().resize_(y.sizes()).fill_(x).div_(y)) \
_(%,x.remainder(y), y.type().tensor().resize_(y.sizes()).fill_(x).remainder_(y)) \
_(-,x.sub(y), ::at::empty(y.sizes(), y.options()).fill_(x).sub_(y)) \
_(/,x.div(y), ::at::empty(y.sizes(), y.options()).fill_(x).div_(y)) \
_(%,x.remainder(y), ::at::empty(y.sizes(), y.options()).fill_(x).remainder_(y)) \
_(<,x.lt(y), y.gt(x)) \
_(<=,x.le(y), y.ge(x)) \
_(>,x.gt(y),y.lt(x)) \
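
The hunk above replaces the deprecated `y.type().tensor()` factory calls with `at::empty` plus `TensorOptions`. A standalone sketch of the new idiom outside the macro (the function name is made up for illustration):

```cpp
#include <ATen/ATen.h>

// Old idiom (removed by this PR):
//   y.type().tensor().resize_(y.sizes()).fill_(x).sub_(y)
// New idiom: allocate with the same dtype/device/layout via y.options(),
// fill with the scalar, then apply the in-place binary op.
at::Tensor scalar_sub(at::Scalar x, const at::Tensor& y) {
  return at::empty(y.sizes(), y.options()).fill_(x).sub_(y);
}
```

The same pattern covers the `/` and `%` cases in the macro with `div_` and `remainder_`.
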
94 changes: 67 additions & 27 deletions aten/src/ATen/TensorUtils.h
@@ -12,7 +12,7 @@ namespace at {
// make sense. These are particularly useful for native functions,
// which do NO argument checking by default.

struct AT_API TensorArg {
struct CAFFE2_API TensorArg {
Tensor tensor;
const char* name;
int pos; // 1-indexed
@@ -22,7 +22,7 @@ struct AT_API TensorArg {
const Tensor& operator*() const { return tensor; }
};

struct AT_API TensorGeometryArg {
struct CAFFE2_API TensorGeometryArg {
TensorGeometry tensor;
const char* name;
int pos; // 1-indexed
@@ -49,40 +49,80 @@ using CheckedFrom = const char*;
// not TensorGeometryArg, because the Tensor to TensorGeometry
// conversion will blow up if you have undefined tensors.

AT_API std::ostream& operator<<(std::ostream & out, TensorGeometryArg t);
AT_API void checkDim(CheckedFrom c, const TensorGeometryArg& t, int64_t dim);
CAFFE2_API std::ostream& operator<<(std::ostream& out, TensorGeometryArg t);
CAFFE2_API void checkDim(
CheckedFrom c,
const TensorGeometryArg& t,
int64_t dim);
// NB: this is an inclusive-exclusive range
AT_API void checkDimRange(CheckedFrom c, const TensorGeometryArg& t, int64_t dim_start, int64_t dim_end);
AT_API void checkSameDim(CheckedFrom c, const TensorGeometryArg& t1, const TensorGeometryArg& t2);
AT_API void checkContiguous(CheckedFrom c, const TensorGeometryArg& t);
AT_API void checkAllContiguous(CheckedFrom c, at::ArrayRef<TensorArg> ts);
AT_API void checkSize(CheckedFrom c, const TensorGeometryArg& t, IntList sizes);
AT_API void checkSize(CheckedFrom c, const TensorGeometryArg& t, int64_t dim, int64_t size);
AT_API void checkNumel(CheckedFrom c, const TensorGeometryArg& t, int64_t numel);
AT_API void checkSameNumel(CheckedFrom c, const TensorGeometryArg& t1, const TensorGeometryArg& t2);
AT_API void checkAllSameNumel(CheckedFrom c, ArrayRef<TensorArg> tensors);
AT_API void checkScalarType(CheckedFrom c, const TensorArg& t, ScalarType s);
AT_API void checkScalarTypes(CheckedFrom c, const TensorArg& t, at::ArrayRef<ScalarType> l);
AT_API void checkSameGPU(CheckedFrom c, const TensorArg& t1, const TensorArg& t2);
AT_API void checkAllSameGPU(CheckedFrom c, ArrayRef<TensorArg> tensors);
AT_API void checkSameType(CheckedFrom c, const TensorArg& t1, const TensorArg& t2);
AT_API void checkAllSameType(CheckedFrom c, ArrayRef<TensorArg> tensors);
AT_API void checkSameSize(CheckedFrom c, const TensorArg& t1, const TensorArg& t2);
AT_API void checkDefined(CheckedFrom c, const TensorArg& t);
AT_API void checkAllDefined(CheckedFrom c, at::ArrayRef<TensorArg> t);
CAFFE2_API void checkDimRange(
CheckedFrom c,
const TensorGeometryArg& t,
int64_t dim_start,
int64_t dim_end);
CAFFE2_API void checkSameDim(
CheckedFrom c,
const TensorGeometryArg& t1,
const TensorGeometryArg& t2);
CAFFE2_API void checkContiguous(CheckedFrom c, const TensorGeometryArg& t);
CAFFE2_API void checkAllContiguous(CheckedFrom c, at::ArrayRef<TensorArg> ts);
CAFFE2_API void checkSize(
CheckedFrom c,
const TensorGeometryArg& t,
IntList sizes);
CAFFE2_API void checkSize(
CheckedFrom c,
const TensorGeometryArg& t,
int64_t dim,
int64_t size);
CAFFE2_API void checkNumel(
CheckedFrom c,
const TensorGeometryArg& t,
int64_t numel);
CAFFE2_API void checkSameNumel(
CheckedFrom c,
const TensorGeometryArg& t1,
const TensorGeometryArg& t2);
CAFFE2_API void checkAllSameNumel(CheckedFrom c, ArrayRef<TensorArg> tensors);
CAFFE2_API void checkScalarType(
CheckedFrom c,
const TensorArg& t,
ScalarType s);
CAFFE2_API void checkScalarTypes(
CheckedFrom c,
const TensorArg& t,
at::ArrayRef<ScalarType> l);
CAFFE2_API void checkSameGPU(
CheckedFrom c,
const TensorArg& t1,
const TensorArg& t2);
CAFFE2_API void checkAllSameGPU(CheckedFrom c, ArrayRef<TensorArg> tensors);
CAFFE2_API void checkSameType(
CheckedFrom c,
const TensorArg& t1,
const TensorArg& t2);
CAFFE2_API void checkAllSameType(CheckedFrom c, ArrayRef<TensorArg> tensors);
CAFFE2_API void checkSameSize(
CheckedFrom c,
const TensorArg& t1,
const TensorArg& t2);
CAFFE2_API void checkDefined(CheckedFrom c, const TensorArg& t);
CAFFE2_API void checkAllDefined(CheckedFrom c, at::ArrayRef<TensorArg> t);

// FixMe: does TensorArg slow things down?
AT_API void checkBackend(CheckedFrom c, at::ArrayRef<Tensor> t, at::Backend backend);
CAFFE2_API void checkBackend(
CheckedFrom c,
at::ArrayRef<Tensor> t,
at::Backend backend);

// Methods for getting data_ptr if tensor is defined
AT_API void * maybe_data_ptr(const Tensor& tensor);
AT_API void * maybe_data_ptr(const TensorArg& tensor);
CAFFE2_API void* maybe_data_ptr(const Tensor& tensor);
CAFFE2_API void* maybe_data_ptr(const TensorArg& tensor);

// Return if the tensor geometry represented by `sizes` and `strides` is contiguous
// Although we cache is_contiguous in tensor now, this is still useful because it
// allows checking if a particular geometry is contiguous without explicitly
// constructing a tensor, e.g., when you want to choose a kernel strategy based
// on whether a subgeometry is contiguous.
AT_API bool geometry_is_contiguous(IntList sizes, IntList strides);

CAFFE2_API bool geometry_is_contiguous(IntList sizes, IntList strides);
}
2 changes: 1 addition & 1 deletion aten/src/ATen/Utils.h
@@ -24,7 +24,7 @@

namespace at {

AT_API int _crash_if_asan(int);
CAFFE2_API int _crash_if_asan(int);

static inline const Storage& checked_storage(
const Storage& expr,
2 changes: 1 addition & 1 deletion aten/src/ATen/core/ATenCoreTest.h
@@ -4,5 +4,5 @@

namespace at {

AT_CORE_API int CoreTest();
CAFFE2_API int CoreTest();
}
5 changes: 0 additions & 5 deletions aten/src/ATen/core/ATenGeneral.h
@@ -1,8 +1,3 @@
#pragma once

#include "ATen/core/Macros.h"

// TODO: Merge the *_API macros.
#define AT_API AT_CORE_API
#define AT_EXPORT AT_CORE_EXPORT
#define AT_IMPORT AT_CORE_IMPORT
2 changes: 1 addition & 1 deletion aten/src/ATen/core/Allocator.h
@@ -115,7 +115,7 @@ struct Allocator {
}
};

struct AT_CORE_API InefficientStdFunctionContext {
struct CAFFE2_API InefficientStdFunctionContext {
std::unique_ptr<void, std::function<void(void*)>> ptr_;
InefficientStdFunctionContext(
std::unique_ptr<void, std::function<void(void*)>>&& ptr)