Merge from upstream #164

Merged
43 commits merged on Sep 2, 2018

Commits
7169906
torch.digamma (#10967)
zou3519 Aug 29, 2018
b41988c
Cleanup BUILD_DOCS cmake section (#11000)
orionr Aug 29, 2018
a9469c9
Fill eigenvector with zeros if not required (#10645)
Aug 29, 2018
1b0d5e6
Get rid of some unnecessary includes of Context. (#10951)
gchanan Aug 29, 2018
562fc76
Add test cases for ONNX unsqueeze (#10924)
houseroad Aug 29, 2018
206d52d
Disable smart_tensor_printer_test without glog (#10999)
orionr Aug 29, 2018
e0dbb91
Windows raw string fix (#10998)
mingzhe09088 Aug 29, 2018
525548f
Move SparseTensorRef to core, change some includes to core.
gchanan Aug 29, 2018
396dec0
s/spaerse/sparse (#10968)
zou3519 Aug 29, 2018
4e446b8
Make profiler.build_table() O(n) rather than O(n^2) (#10969)
zou3519 Aug 29, 2018
bed9d41
Generate Type::registerCPU as we do register_cuda_types. (#10947)
gchanan Aug 29, 2018
dbce1c8
exposing net_transformer_fun before add grad (#11003)
wat3rBro Aug 29, 2018
ec519e8
Reduce number of elements within test_abs
cpuhrsch Aug 29, 2018
fa7c81c
nomnigraph - nit - code style update (#10987)
duc0 Aug 29, 2018
56539f5
PT1 Distributed Release MileStone No.1 - Completed Distributed Packag…
teng-li Aug 29, 2018
c99a143
Update blackbox predictor with new constructor (#10920)
Aug 29, 2018
cd94163
Minor copy-edit on setup.py
ezyang Aug 29, 2018
b644d5e
Delete context and get_context from Type.
ezyang Aug 29, 2018
f687ff5
Delete unnecessary includes from TensorImpl.h (#11005)
ezyang Aug 29, 2018
e9eed8e
Add doc for Tensor.digamma_? (#11008)
ssnl Aug 29, 2018
0b1de74
Documentation improvement in caffe2/core/tensor.h (#11006)
ezyang Aug 29, 2018
6a8bc38
Add flush to logging messages higher than INFO. (#10983)
Yangqing Aug 29, 2018
22e3b2c
Revert D9413150: [New Checkpoint] Kill the dummy TaskOutput when task…
Aug 29, 2018
89834df
Add GPU version of HardSigmoid Op to Caffe2 (#10955)
Aug 29, 2018
c755616
Enable Detectron model inference for CPU and MKL-DNN paths (#10157)
Aug 29, 2018
d9b74f6
Make it possible to disable JIT using env variables (#10867)
apaszke Aug 29, 2018
6b87198
Devirtualize StorageImpl deconstructor (#11018)
cpuhrsch Aug 29, 2018
ef7fc2a
Remove at::StorageImpl::finalizer_ (#11022)
cpuhrsch Aug 29, 2018
98d85b1
Debugging help + test
bwasti Aug 29, 2018
2cc98d8
Adds `dim` argument to `torch.unique` (#10423)
Aug 29, 2018
c4e1adf
Remove THHalf type
Aug 29, 2018
ae635b1
Record tensor factory functions in trace (#10935)
zdevito Aug 30, 2018
91ecbf8
Remove TensorBase (#11036)
cpuhrsch Aug 30, 2018
e550eab
Remove MetaNetDef test case in Predictor (#11052)
Aug 30, 2018
394bdcd
Fix the build of aten tests when FULL_CAFFE2=1
houseroad Aug 30, 2018
16b8e0a
at::StorageImpl: Rename size_ to numel_ and elementSize() to itemsize()
cpuhrsch Aug 30, 2018
ad1670c
Kill the dummy TaskOutput when task.get_step() (#11048)
xush6528 Aug 30, 2018
23af7de
Add has_lapack flag (#11024)
ssnl Aug 30, 2018
dbc0004
Remove use_count() == 1 in Tensor::Extend (#11046)
tullie Aug 30, 2018
a8af7fe
Support import of `nn.RNNCellBase` in `__all__`
zuoxingdong Aug 30, 2018
e719543
Add benchmarking functionality to the benchmark app (#10976)
sf-wind Aug 30, 2018
535633b
Export MPI functions (#11037)
orionr Aug 30, 2018
c69f5da
Merge remote-tracking branch 'upstream/master' into ifu
iotamudelta Aug 30, 2018
25 changes: 19 additions & 6 deletions .jenkins/pytorch/build.sh
@@ -1,15 +1,28 @@
 #!/bin/bash

+# For distributed, four environmental configs:
+# (1) build with only NCCL
+# (2) build with NCCL and MPI
+# (3) build with only MPI
+# (4) build with neither
+if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-* ]]; then
+  # TODO: move this to Docker
+  sudo apt-get update
+  sudo apt-get install libnccl-dev=2.2.13-1+cuda9.0 libnccl2=2.2.13-1+cuda9.0
+fi
+
+if [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda8-* ]] || [[ "$BUILD_ENVIRONMENT" == *-xenial-cuda9-cudnn7-py2* ]]; then
+  # TODO: move this to Docker
+  sudo apt-get update
+  sudo apt-get install openmpi-bin libopenmpi-dev
+  sudo apt-get install -y --no-install-recommends openssh-client openssh-server
+  sudo mkdir -p /var/run/sshd
+fi
+
 if [[ "$BUILD_ENVIRONMENT" == "pytorch-linux-xenial-py3-clang5-asan" ]]; then
   exec "$(dirname "${BASH_SOURCE[0]}")/build-asan.sh" $*
 fi

-# TODO: move this to Docker
-# TODO: add both NCCL and MPI in CI test by fixing these test first
-sudo apt-get update
-sudo apt-get install libnccl-dev libnccl2
-# sudo apt-get install openmpi-bin libopenmpi-dev
-
 # Required environment variable: $BUILD_ENVIRONMENT
 # (This is set by default in the Docker images we build, so you don't
 # need to set it yourself.
8 changes: 4 additions & 4 deletions CMakeLists.txt
@@ -306,7 +306,7 @@ if(BUILD_DOCS)

   if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/docs)
     file(REMOVE_RECURSE ${CMAKE_CURRENT_BINARY_DIR}/docs)
-  endif (EXISTS ${CMAKE_CURRENT_BINARY_DIR}/docs)
+  endif()

   file(MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/docs)
   configure_file(${DOXYGEN_C_IN} ${DOXYGEN_C_OUT} @ONLY)
@@ -323,10 +323,10 @@ if(BUILD_DOCS)
       WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
       COMMENT "Generating Python API documentation with Doxygen"
       VERBATIM)
-  else (DOXYGEN_FOUND)
+  else()
     message(FATAL_ERROR "Doxygen needs to be installed to generate the documentation")
-  endif (DOXYGEN_FOUND)
-endif (BUILD_DOCS)
+  endif()
+endif()

 # ---[ CMake related files
 # Uninistall option.
13 changes: 12 additions & 1 deletion aten/src/ATen/Context.cpp
@@ -9,6 +9,9 @@
 #include <stdexcept>

 #include "ATen/CPUGenerator.h"
+#include "ATen/RegisterCPU.h"
+
+#include "TH/TH.h" // for USE_LAPACK

 #ifdef USE_SSE3
 #include <pmmintrin.h>
@@ -34,7 +37,7 @@ Context::Context()

   generator_registry[static_cast<int>(DeviceType::CPU)]
     .reset(new CPUGenerator(this));
-  Type::registerCPU(this);
+  register_cpu_types(this);
 }

 // TODO: This could be bad juju if someone calls globalContext() in the
@@ -79,6 +82,14 @@ bool Context::hasMKL() const {
 #endif
 }

+bool Context::hasLAPACK() const {
+#ifdef USE_LAPACK
+  return true;
+#else
+  return false;
+#endif
+}
+
 bool Context::setFlushDenormal(bool on) {
 #ifdef USE_SSE3
   // Setting flush-to-zero (FTZ) flag
13 changes: 13 additions & 0 deletions aten/src/ATen/Context.h
@@ -50,6 +50,10 @@ class AT_API Context {
     return *generator;
   }
   bool hasMKL() const;
+  bool hasLAPACK() const;
+  bool hasMAGMA() const {
+    return detail::getCUDAHooks().hasMAGMA();
+  }
   bool hasCUDA() const {
     return detail::getCUDAHooks().hasCUDA();
   }
@@ -114,6 +118,7 @@ class AT_API Context {
   std::atomic<size_t> next_id;
   std::unique_ptr<THCState, void(*)(THCState*)> thc_state;
   friend struct Type;
+  friend void register_cpu_types(Context * context);
  friend void register_cuda_types(Context * context);
 };

@@ -157,6 +162,14 @@ static inline bool hasMKL() {
   return globalContext().hasMKL();
 }

+static inline bool hasLAPACK() {
+  return globalContext().hasLAPACK();
+}
+
+static inline bool hasMAGMA() {
+  return globalContext().hasMAGMA();
+}
+
 static inline int64_t current_device() {
   return globalContext().current_device();
 }
4 changes: 2 additions & 2 deletions aten/src/ATen/DeviceGuard.h
@@ -1,7 +1,7 @@
 #pragma once

-#include <ATen/Device.h>
-#include <ATen/ScalarType.h>
+#include <ATen/core/Device.h>
+#include <ATen/core/ScalarType.h>
 #include <ATen/Tensor.h>
 #include <ATen/core/Error.h>
 #include <ATen/detail/CUDAHooksInterface.h>
1 change: 0 additions & 1 deletion aten/src/ATen/Formatting.cpp
@@ -1,6 +1,5 @@
 #include "ATen/Formatting.h"
 #include "ATen/Tensor.h"
-#include "ATen/Context.h"
 #include "ATen/TensorMethods.h"

 #include <cmath>
4 changes: 2 additions & 2 deletions aten/src/ATen/Storage.h
@@ -26,8 +26,8 @@ struct AT_API Storage {
   template <typename T>
   T* unsafe_data() const { return storage_impl_->unsafe_data<T>(); }

-  size_t elementSize() const { return storage_impl_->elementSize(); }
-  ptrdiff_t size() const { return storage_impl_->size(); }
+  size_t elementSize() const { return storage_impl_->itemsize(); }
+  ptrdiff_t size() const { return storage_impl_->numel(); }
   bool resizable() const { return storage_impl_->resizable(); }
   // get() use here is to get const-correctness
   void* data() const { return storage_impl_.get()->data(); }
14 changes: 6 additions & 8 deletions aten/src/ATen/StorageImpl.cpp
@@ -1,31 +1,29 @@
-#include <ATen/Context.h>
 #include <ATen/StorageImpl.h>

 namespace at {

 StorageImpl::StorageImpl(
     at::DataType data_type,
-    ptrdiff_t size,
+    int64_t numel,
     at::DataPtr data_ptr,
     at::Allocator* allocator,
     bool resizable)
     : data_type_(data_type),
       data_ptr_(std::move(data_ptr)),
-      size_(size),
+      numel_(numel),
       resizable_(resizable),
-      allocator_(allocator),
-      finalizer_(nullptr) {}
+      allocator_(allocator) {}

 StorageImpl::StorageImpl(
     at::DataType data_type,
-    ptrdiff_t size,
+    int64_t numel,
     at::Allocator* allocator,
     bool resizable)
     : StorageImpl(
           data_type,
-          size,
+          numel,
           allocator->allocate(
-              at::elementSize(dataTypeToScalarType(data_type)) * size),
+              at::elementSize(dataTypeToScalarType(data_type)) * numel),
           allocator,
           resizable) {}

27 changes: 10 additions & 17 deletions aten/src/ATen/StorageImpl.h
@@ -3,7 +3,6 @@
 #include <ATen/Allocator.h>
 #include <ATen/ScalarType.h>
 #include <ATen/ScalarTypeUtils.h>
-#include <TH/THTypeConversion.hpp>

 #include <ATen/core/intrusive_ptr.h>

@@ -21,16 +20,16 @@ struct Type;
 struct AT_API StorageImpl : public c10::intrusive_ptr_target {
  public:
  StorageImpl() = delete;
-  virtual ~StorageImpl() {};
+  ~StorageImpl() {};
  StorageImpl(
      at::DataType data_type,
-      ptrdiff_t size,
+      int64_t numel,
      at::DataPtr data_ptr,
      at::Allocator* allocator,
      bool resizable);
  StorageImpl(
      at::DataType data_type,
-      ptrdiff_t size,
+      int64_t numel,
      at::Allocator* allocator,
      bool resizable);
  StorageImpl(StorageImpl&) = delete;
@@ -44,7 +43,7 @@ struct AT_API StorageImpl : public c10::intrusive_ptr_target {
  template <typename T>
  inline T* data() const {
    auto data_type_T =
-        at::scalarTypeToDataType(at::CTypeToScalarType<th::from_type<T>>::to());
+        at::scalarTypeToDataType(at::CTypeToScalarType<T>::to());
    if (dtype() != data_type_T) {
      AT_ERROR(
          "Attempt to access StorageImpl having data type ",
@@ -61,27 +60,22 @@ struct AT_API StorageImpl : public c10::intrusive_ptr_target {
  }

  void release_resources() override {
-    if (finalizer_) {
-      (*finalizer_)();
-    }
-    finalizer_ = nullptr;
    data_ptr_.clear();
  }

  void operator=(const StorageImpl&) = delete;

-  size_t elementSize() const {
+  size_t itemsize() const {
    return at::elementSize(dataTypeToScalarType(data_type_));
  }

  Type& type();

-  // TODO: Rename to size() and size to size_
-  ptrdiff_t size() const {
-    return size_;
+  int64_t numel() const {
+    return numel_;
  };
-  void set_size(ptrdiff_t size) {
-    size_ = size;
+  void set_numel(int64_t numel) {
+    numel_ = numel;
  };
  bool resizable() const {
    return resizable_;
@@ -132,9 +126,8 @@ struct AT_API StorageImpl : public c10::intrusive_ptr_target {
 private:
  at::DataType data_type_;
  at::DataPtr data_ptr_;
-  ptrdiff_t size_;
+  int64_t numel_;
  bool resizable_;
  at::Allocator* allocator_;
-  std::unique_ptr<THFinalizer> finalizer_;
 };
 } // namespace at
53 changes: 0 additions & 53 deletions aten/src/ATen/TensorBase.h

This file was deleted.

2 changes: 0 additions & 2 deletions aten/src/ATen/TensorImpl.h
@@ -3,8 +3,6 @@
 #include <atomic>
 #include <memory>

-#include "ATen/Retainable.h"
-#include "ATen/StorageImpl.h"
 #include "ATen/Storage.h"
 #include "ATen/core/optional.h"
 #include "ATen/core/TensorTypeId.h"
4 changes: 2 additions & 2 deletions aten/src/ATen/TensorOptions.h
@@ -2,10 +2,10 @@

 #include <ATen/core/Backend.h>
 #include <ATen/Context.h>
-#include <ATen/Device.h>
+#include <ATen/core/Device.h>
 #include <ATen/DeviceGuard.h>
 #include <ATen/core/Layout.h>
-#include <ATen/ScalarType.h>
+#include <ATen/core/ScalarType.h>
 #include <ATen/Tensor.h>
 #include <ATen/Type.h>
1 change: 0 additions & 1 deletion aten/src/ATen/UndefinedTensor.cpp
@@ -1,5 +1,4 @@
 #include "ATen/UndefinedTensor.h"
-#include "ATen/Context.h"
 #include "ATen/core/Error.h"

 namespace at {
4 changes: 2 additions & 2 deletions aten/src/ATen/UndefinedType.cpp
@@ -3,8 +3,8 @@

 namespace at {

-UndefinedType::UndefinedType(Context* context)
-    : Type(context, UndefinedTensorId(), /*is_variable=*/false, /*is_undefined=*/true) {}
+UndefinedType::UndefinedType()
+    : Type(UndefinedTensorId(), /*is_variable=*/false, /*is_undefined=*/true) {}
 ScalarType UndefinedType::scalarType() const {
   return ScalarType::Undefined;
 }
3 changes: 1 addition & 2 deletions aten/src/ATen/UndefinedType.h
@@ -1,7 +1,6 @@
 #pragma once

 #include "ATen/Type.h"
-#include "ATen/Context.h"
 #include "ATen/CheckGenerator.h"

 #ifdef _MSC_VER
@@ -13,7 +12,7 @@
 namespace at {

 struct UndefinedType final : public Type {
-  explicit UndefinedType(Context* context);
+  explicit UndefinedType();
   virtual ScalarType scalarType() const override;
   virtual Backend backend() const override;
   virtual bool is_cuda() const override;
File renamed without changes.