Integrate from upstream #242

Merged
merged 28 commits on Oct 4, 2018
Commits (28)
29e5ba8
Fix for LibTorch download link (#12263)
goldsborough Oct 2, 2018
7c67874
update the script to match the current build process
houseroad Oct 2, 2018
04b0774
Use caffe2::int8::Int8TensorCPU when input type is uint8_t (#12250)
jspark1105 Oct 2, 2018
035d042
Update onnx to onnx/onnx@ddf8eb6 (#12267)
bddppq Oct 2, 2018
a76216b
Back out "[aibench] Use caffe2::int8::Int8TensorCPU when input type i…
jspark1105 Oct 2, 2018
06360c3
Back out "Deduplicate canonical_axis_index_ with maybe_wrap_dim"
Oct 2, 2018
c0ed48a
Add support to the accuracy metric (#12211)
sf-wind Oct 3, 2018
1fb8925
Fix typo LMBD->LMDB in docs of setup.py (#12282)
daquexian Oct 3, 2018
080266e
Document CUDAHOSTCXX environment variable (#12265)
svenstaro Oct 3, 2018
b911ca9
docs: change links to https (#12258)
Ir1d Oct 3, 2018
a839ec8
Add move{Node,Edge,Subgraph} for Graph move-like semantics
bwasti Oct 3, 2018
c029c83
MIOpen 1.5 group conv API integration (#12273)
Oct 3, 2018
69ce472
Merge remote-tracking branch 'rocm_upstream/upstream' into ifu
iotamudelta Oct 3, 2018
d1ac1eb
Add `bool` type to IR (#11834)
Oct 3, 2018
01d835c
Revert D10128131: [nomnigraph] Add move{Node,Edge,Subgraph} for Graph…
bddppq Oct 3, 2018
3db9738
add torch factory methods (zeros/ones) to onnx symbolic
wanchaol Oct 3, 2018
2217c0b
create the onnx_root in local, and link it
houseroad Oct 3, 2018
b548f83
Reduce size of TensorImpl from 160 bytes to 128 bytes (#12266)
ezyang Oct 3, 2018
8aa2390
Make if block also take control_inputs, preserve SSA (#12224)
wanchaol Oct 3, 2018
fe10f3d
Fix up onnxwhile op (#12124)
wanchaol Oct 3, 2018
6b9afc8
pyHipify Fixes (#12292)
rohithkrn Oct 4, 2018
557015f
wipe cache with writes (#12279)
jspark1105 Oct 4, 2018
c9f9df0
Properly catch errors in PythonOps (#12243)
Oct 4, 2018
bcc2a05
Enable clang-tidy in CI (#12213)
goldsborough Oct 4, 2018
f688653
Merge remote-tracking branch 'rocm_upstream/upstream' into ifu
iotamudelta Oct 4, 2018
fa65dfc
Merge branch 'master' into ifu
iotamudelta Oct 4, 2018
37336d2
Fix mis-merge.
iotamudelta Oct 4, 2018
e9ab194
It's a flaky test for us - skip.
iotamudelta Oct 4, 2018

Files changed

48 changes: 4 additions & 44 deletions .clang-tidy
@@ -1,51 +1,11 @@
---
# NOTE: there must be no spaces before the '-', so put the comma first.
Checks: '
*
,clang-analyzer-*
,modernize-*
,-cert-dcl21-cpp
,-cert-err58-cpp
,-cert-err60-cpp
,-clang-diagnostic-*
,-cppcoreguidelines-owning-memory
,-cppcoreguidelines-pro-bounds-array-to-pointer-decay
,-cppcoreguidelines-pro-bounds-constant-array-index
,-cppcoreguidelines-pro-type-member-init
,-cppcoreguidelines-pro-type-static-cast-downcast
,-cppcoreguidelines-pro-type-union-access
,-cppcoreguidelines-pro-type-vararg
,-cppcoreguidelines-special-member-functions
,-fuchsia-*
,-google-build-using-namespace
,-google-default-arguments
,-google-explicit-constructor
,-google-readability-braces-around-statements
,-google-readability-namespace-comments
,-google-readability-todo
,-google-runtime-references
,-google-runtime-references
,-hicpp-braces-around-statements
,-hicpp-explicit-conversions
,-hicpp-member-init
,-hicpp-no-array-decay
,-hicpp-signed-bitwise
,-hicpp-special-member-functions
,-hicpp-vararg
,-llvm-header-guard
,-llvm-include-order
,-llvm-namespace-comment
,-misc-unused-parameters
,-modernize-make-unique
,-modernize-use-default-member-init
,-performance-unnecessary-value-param
,-readability-braces-around-statements
,-readability-else-after-return
,-readability-implicit-bool-conversion
,-readability-named-parameter
-*
,modernize-deprecated-headers
'
WarningsAsErrors: ''
HeaderFilterRegex: 'torch/csrc/'
WarningsAsErrors: '*'
HeaderFilterRegex: 'torch/csrc/.*'
AnalyzeTemporaryDtors: false
CheckOptions:
...
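
The trimmed configuration disables all checks (`-*`), re-enables only `modernize-deprecated-headers`, and promotes every remaining warning to an error. As a minimal sketch (not from this PR) of what that check reports:

```cpp
// Hypothetical translation unit: modernize-deprecated-headers flags the
// deprecated C headers below and suggests their C++ <c...> equivalents.
#include <stdio.h>   // clang-tidy: use <cstdio> instead
#include <stdlib.h>  // clang-tidy: use <cstdlib> instead

int main() {
  printf("checked by clang-tidy\n");
  return EXIT_SUCCESS;
}
```
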
9 changes: 9 additions & 0 deletions .travis.yml
@@ -29,3 +29,12 @@ matrix:
- env: CPP_DOC_CHECK
install: sudo apt-get install -y doxygen
script: cd docs/cpp/source && ./check-doxygen.sh
- env: CLANG_TIDY
python: "3.6"
addons:
apt:
sources:
- ubuntu-toolchain-r-test
- llvm-toolchain-trusty-6.0
packages: clang-tidy-6.0
script: tools/run-clang-tidy-in-ci.sh -e "$(which clang-tidy-6.0)"
2 changes: 2 additions & 0 deletions CMakeLists.txt
@@ -10,6 +10,8 @@ if (NOT MSVC)
set(CMAKE_C_STANDARD 11)
endif()

set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
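# Context, not part of this diff: the setting above makes CMake emit
# compile_commands.json in the build directory, the compilation database
# that clang-tidy consumes.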

# One variable that determines whether the current cmake process is being run
# with the main Caffe2 library. This is useful for building modules - if
# modules are built with the main Caffe2 library then one does not need to do
30 changes: 30 additions & 0 deletions CONTRIBUTING.md
@@ -354,6 +354,36 @@ static_assert(std::is_same(A*, decltype(A::singelton()))::value, "hmm");
are too large. Splitting such files into separate files helps.
(Example: `THTensorMath`, `THTensorMoreMath`, `THTensorEvenMoreMath`.)

### Running Clang-Tidy

[Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/index.html) is a C++
linter and static analysis tool based on the clang compiler. We run clang-tidy
in our CI to make sure that new C++ code is safe, sane and efficient. See our
[.travis.yml](https://github.com/pytorch/pytorch/blob/master/.travis.yml) file
for the simple commands we use for this.

To run clang-tidy locally, follow these steps:

1. Install clang-tidy. First, check whether you already have it by running
`clang-tidy` in your terminal. If you don't yet have clang-tidy, you should
be able to install it with your package manager, e.g. `apt-get install
clang-tidy` on Ubuntu. See https://apt.llvm.org for details on installing
the latest version. Note that newer versions of clang-tidy ship more checks
than older ones. In our CI, we run clang-tidy-6.0.

2. Use our driver script to run clang-tidy over any changes relative to some
git revision (you may want to replace `HEAD~1` with `HEAD` to pick up
uncommitted changes). Changes are picked up based on a `git diff` with the
given revision:
```sh
$ python tools/clang_tidy.py -d build -p torch/csrc -r HEAD~1
```

The command above assumes you are in the PyTorch root folder. The argument to
`-d` should be the path where you built PyTorch from source, e.g. `build` in
the PyTorch root folder if you used `setup.py build`. You can pass
`-c <clang-tidy-binary>` to change the clang-tidy binary the script uses.

## Caffe2 notes

In 2018, we merged Caffe2 into the PyTorch source repository. While the
16 changes: 8 additions & 8 deletions README.md
@@ -77,7 +77,7 @@ change the way your network behaves arbitrarily with zero lag or overhead. Our i
from several research papers on this topic, as well as current and past work such as
[torch-autograd](https://github.com/twitter/torch-autograd),
[autograd](https://github.com/HIPS/autograd),
[Chainer](http://chainer.org), etc.
[Chainer](https://chainer.org), etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
You get the best of speed and flexibility for your crazy research.
@@ -121,18 +121,18 @@ Writing new neural network modules, or interfacing with PyTorch's Tensor API was
and with minimal abstractions.

You can write new neural network layers in Python using the torch API
[or your favorite NumPy-based libraries such as SciPy](http://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).
[or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.
There is no wrapper code that needs to be written. You can see [a tutorial here](http://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).
There is no wrapper code that needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).


## Installation

### Binaries
Commands to install from binaries via Conda or pip wheels are on our website:

[http://pytorch.org](http://pytorch.org)
[https://pytorch.org](https://pytorch.org)

### From Source

@@ -239,21 +239,21 @@ You can then build the documentation by running ``make <format>`` from the
### Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found
on [our website](http://pytorch.org/previous-versions).
on [our website](https://pytorch.org/previous-versions).


## Getting Started

Three pointers to get you started:
- [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
- [Examples: easy to understand pytorch code across all domains](https://github.com/pytorch/examples)
- [The API Reference](http://pytorch.org/docs/)
- [The API Reference](https://pytorch.org/docs/)

## Communication
* forums: discuss implementations, research, etc. http://discuss.pytorch.org
* forums: discuss implementations, research, etc. https://discuss.pytorch.org
* GitHub issues: bug reports, feature requests, install issues, RFCs, thoughts, etc.
* Slack: general chat, online discussions, collaboration etc. https://pytorch.slack.com/ . Our slack channel is invite-only to promote a healthy balance between power-users and beginners. If you need a slack invite, ping us at [email protected]
* newsletter: no-noise, one-way email newsletter with important announcements about pytorch. You can sign-up here: http://eepurl.com/cbG0rv
* newsletter: no-noise, one-way email newsletter with important announcements about pytorch. You can sign-up here: https://eepurl.com/cbG0rv

## Releases and Contributing

19 changes: 10 additions & 9 deletions aten/src/ATen/core/TensorImpl.cpp
@@ -31,31 +31,32 @@ TensorImpl::TensorImpl(Storage&& storage, TensorTypeId type_id, bool is_variable

TensorImpl::TensorImpl(Storage&& storage, TensorTypeId type_id, const caffe2::TypeMeta& data_type, bool is_variable)
: storage_(std::move(storage)),
storage_offset_(0),
sizes_{0},
strides_{1},
is_contiguous_(true),
storage_offset_(0),
numel_(0),
type_id_(type_id),
data_type_(data_type),
is_variable_(is_variable) {}
type_id_(type_id),
is_variable_(is_variable) {
strides_.reset(new int64_t[1]);
strides_[0] = 1;
}

IntList TensorImpl::sizes() const {
return sizes_;
}

IntList TensorImpl::strides() const {
AT_ASSERTM(strides_.size() == sizes_.size(),
AT_ASSERTM(strides_,
"Caffe2 tensors don't (yet) have meaningful strides and cannot "
"be used in PyTorch.");
return strides_;
return IntList{strides_.get(), sizes_.size()};
}

bool TensorImpl::compute_contiguous() const {
bool is_contiguous = true;
if (is_empty())
return is_contiguous;
if (strides_.empty()) {
if (!strides_) {
// Special case for Caffe2 tensors which don't have strides set.
return true;
}
@@ -89,7 +90,7 @@ int64_t TensorImpl::size(int64_t d) const {
}

int64_t TensorImpl::stride(int64_t d) const {
AT_ASSERTM(strides_.size() == sizes_.size(),
AT_ASSERTM(strides_,
"Caffe2 tensors don't (yet) have meaningful strides and cannot "
"be used in PyTorch.");
d = at::maybe_wrap_dim(d, dim(), false);
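
Part of the reduction in commit b548f83 ("Reduce size of TensorImpl from 160 bytes to 128 bytes") comes from replacing the `std::vector<int64_t>` for `strides_` with a `std::unique_ptr<int64_t[]>`, taking the length from `sizes_.size()` instead. A minimal sketch, assuming a typical 64-bit ABI, of where those words are saved:

```cpp
#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

// Illustrative only: a std::vector stores three pointers (begin, end,
// capacity limit), while std::unique_ptr<T[]> stores one, so dropping the
// vector saves two 8-byte words per TensorImpl on common 64-bit ABIs.
int main() {
  std::printf("std::vector<int64_t>:       %zu bytes\n",
              sizeof(std::vector<int64_t>));       // typically 24
  std::printf("std::unique_ptr<int64_t[]>: %zu bytes\n",
              sizeof(std::unique_ptr<int64_t[]>)); // typically 8
}
```
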
89 changes: 58 additions & 31 deletions aten/src/ATen/core/TensorImpl.h
@@ -10,7 +10,6 @@
#include "ATen/core/LegacyTypeDispatch.h"
#include "ATen/core/Backend.h"
#include "ATen/core/context_base.h"
#include "ATen/core/WrapDimMinimal.h"

#include "caffe2/core/allocator.h"
#include "caffe2/core/common.h"
@@ -90,6 +89,16 @@ inline int64_t size_between_dim_(int k, int l, IntList dims) {
return r;
}

// Wrap around axis_index if it is negative, s.t., -1 is the last dim
inline int canonical_axis_index_(int axis_index, int ndims) {
CAFFE_ENFORCE_GE(axis_index, -ndims);
CAFFE_ENFORCE_LT(axis_index, ndims);
if (axis_index < 0) {
return axis_index + ndims;
}
return axis_index;
}
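// Hypothetical usage, not part of the diff: negative axes wrap from the end,
// so canonical_axis_index_(-1, 4) == 3 and canonical_axis_index_(2, 4) == 2,
// while values outside [-ndims, ndims) trip the CAFFE_ENFORCE checks above.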

/**
* The low-level representation of a tensor, which contains a storage
* (which contains the actual data) and metadata (e.g., sizes and strides)
@@ -218,11 +227,6 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
virtual Tensor& grad();
virtual const Tensor& grad() const;

// TODO: make these protected
// Note: storage->size() may be greater than the recorded size
// of a tensor
at::Storage storage_;

template <typename T>
inline T * data() const {
AT_ASSERT(!is_variable());
@@ -275,8 +279,17 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
virtual void resize_dim(int64_t ndim) {
// NB: This is *truly* a resize; calling code (e.g., squeeze)
// assumes that old values are preserved
auto old_dim = sizes_.size();
sizes_.resize(ndim);
strides_.resize(ndim);
auto new_strides = c10::guts::make_unique<int64_t[]>(ndim);
for (size_t i = 0; i < std::min(old_dim, static_cast<size_t>(ndim)); i++) {
new_strides[i] = strides_[i];
}
for (size_t i = old_dim; i < static_cast<size_t>(ndim); i++) {
// If ndim < old_dim, this loop never executes
new_strides[i] = 0;
}
strides_ = std::move(new_strides);
refresh_numel();
refresh_contiguous();
}
@@ -288,7 +301,9 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
}

virtual void set_stride(int64_t dim, int64_t new_stride) {
strides_.at(dim) = new_stride;
AT_ASSERTM(strides_, "Caffe2 tensors don't have meaningful strides and "
"cannot be used in PyTorch");
strides_[dim] = new_stride;
refresh_numel();
refresh_contiguous();
}
@@ -311,8 +326,14 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
") must match dimensionality of strides (",
new_stride.size(),
")");
auto old_dim = sizes_.size();
sizes_ = new_size.vec();
strides_ = new_stride.vec();
if (old_dim != sizes_.size()) {
strides_.reset(new int64_t[sizes_.size()]);
}
for (size_t i = 0; i < sizes_.size(); i++) {
strides_[i] = new_stride[i];
}
refresh_numel();
refresh_contiguous();
}
@@ -323,13 +344,6 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
bool is_variable() const { return is_variable_; };

private:
int64_t storage_offset_ = 0;
std::vector<int64_t> sizes_;
std::vector<int64_t> strides_;

bool is_contiguous_ = true;
int64_t numel_ = -1;

int64_t compute_numel() const {
int64_t n = 1;
for (auto s : sizes()) {
@@ -348,12 +362,6 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
AT_ASSERT(!is_variable());
is_contiguous_ = compute_contiguous();
}
TensorTypeId type_id_;
// INVARIANT: When storage is non-null, this type meta must
// agree with the type meta in storage
caffe2::TypeMeta data_type_;
bool is_variable_ = false;
bool is_wrapped_number_ = false;

private:
TensorImpl(Storage&& storage, TensorTypeId type_id, const caffe2::TypeMeta& data_type, bool is_variable);
@@ -399,7 +407,7 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
if (src.numel() == -1) {
sizes_.clear();
numel_ = -1;
strides_.clear();
strides_.reset();
is_contiguous_ = true;
storage_.reset();
data_type_ = caffe2::TypeMeta();
@@ -785,13 +793,6 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
return sizes_;
}

protected:
// we decide to keep reserved_ and it will
// live in Tensor after the split
// The logic is that if Extend() or ReserveSpace() were ever called,
// then subsequent Resize()s will not free up Storage.
bool reserved_ = false;

private:
template <
typename T,
@@ -864,9 +865,35 @@ struct CAFFE2_API TensorImpl : public c10::intrusive_ptr_target {
}

inline void update_to_contiguous_strides() {
strides_.resize(0);
strides_.reset();
is_contiguous_ = true;
}

public:
at::Storage storage_; // TODO: Fix visibility on me

protected:
std::vector<int64_t> sizes_;
std::unique_ptr<int64_t[]> strides_; // this saves two words

int64_t storage_offset_ = 0;
int64_t numel_ = -1;

// INVARIANT: When storage is non-null, this type meta must
// agree with the type meta in storage
caffe2::TypeMeta data_type_;

// You get to have eight byte-size fields here, before you
// should pack this into a bitfield.
TensorTypeId type_id_;
bool is_contiguous_ = true;
bool is_variable_ = false;
bool is_wrapped_number_ = false;
// we decide to keep reserved_ and it will
// live in Tensor after the split
// The logic is that if Extend() or ReserveSpace() were ever called,
// then subsequent Resize()s will not free up Storage.
bool reserved_ = false;

};
} // namespace at
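
The relocated byte-size members follow the "eight byte-size fields" note above. A minimal sketch, independent of the actual TensorImpl layout, of the alignment rule that comment relies on:

```cpp
#include <cstdio>

// Assumed layout rules of a common 64-bit ABI: after an 8-byte-aligned
// member, up to eight 1-byte fields fit inside one word of the struct;
// a ninth byte-size field grows the struct by a further aligned word.
struct EightFlags { void* p; bool flags[8]; };  // typically 16 bytes
struct NineFlags  { void* p; bool flags[9]; };  // typically 24 bytes

int main() {
  std::printf("EightFlags: %zu bytes, NineFlags: %zu bytes\n",
              sizeof(EightFlags), sizeof(NineFlags));
}
```
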