Integrate from upstream #238


Merged: 85 commits, merged Oct 3, 2018.

Commits
78fe149
Fix ONNX bug, add symbolic for full
apaszke Sep 26, 2018
c2f8f50
add narrow() support for sparse tensors re: #8853 (#11342)
realdoug Sep 26, 2018
d9c27f4
T33898723: Simple put operators for caffe2 stats (#12057)
BlueberryDS Sep 26, 2018
6ff568d
Add full namespace resolution in CAFFE_DURATION (#12065)
Sep 26, 2018
1b45f68
Use atomicAdd from cuda_fp16 header when building with CUDA 10 (#12108)
syed-ahmed Sep 26, 2018
75b1ae1
Update issue templates
JoelMarcey Sep 26, 2018
478803a
Introduce type variables to implement generic list operators (#12040)
zdevito Sep 26, 2018
db5f8d4
Remove TIndex typedef from core/common.h (#12032)
cpuhrsch Sep 26, 2018
0f81039
Better high level C++ documentation (#12079)
goldsborough Sep 27, 2018
3251012
Aten: catch2gtest (#11846)
zrphercule Sep 27, 2018
5da8a8c
Handle undefined tensor in blob correctly. (#12125)
ezyang Sep 27, 2018
383d340
Small optimization for adam (#12107)
jma127 Sep 27, 2018
9c49bb9
Move registry fully to c10 (#12077)
Yangqing Sep 27, 2018
a72603f
Fix for ppc64le jit graph difference in sigmoid backward, see #10726 …
avmgithub Sep 27, 2018
13cf392
Remove ATen/Error.h and use ATen/core/Error.h instead. (#12132)
Sep 27, 2018
6e7e63f
Implementation MomentumSGD/MomentumSGDUpdate operators for mkl-dnn (#…
PenghuiCheng Sep 27, 2018
80e3081
Add observers for mkldnn fallback operators (#9093)
wuhuikx Sep 27, 2018
c35f85a
Export symbols for pybind and other libs after caffe2 rebase (#11975)
gujinghui Sep 27, 2018
1619264
Make ATen-core and caffe2 mutually recursive / merge template data<T>…
ezyang Sep 28, 2018
f6abd16
Merge TensorImpl. (#11971)
ezyang Sep 28, 2018
e8cb6cb
Fix some symbolics for ReduceSum, GE, LE (#12123)
wanchaol Sep 28, 2018
c5fc2f1
Merge UndefinedTensorImpl.
ezyang Sep 28, 2018
6a2dbc9
Rename TensorImpl::GetDeviceType to device_type, and properly test if…
ezyang Sep 28, 2018
00c6fb1
Move ExtendTo to caffe2::Tensor from TensorImpl
ezyang Sep 28, 2018
dd73d57
Move TensorImpl::ShrinkTo to caffe2::Tensor (#12090)
ezyang Sep 28, 2018
d02478e
Move TensorImpl::ResizeLike to caffe2::Tensor
ezyang Sep 28, 2018
8c533c2
Fix bug where Reshape() trashes strides.
ezyang Sep 28, 2018
b0e48aa
Move TensorImpl::Reshape(vector<int>) to caffe2::Tensor
ezyang Sep 28, 2018
976a9e0
Move TensorImpl::DebugString() to caffe2::Tensor
ezyang Sep 28, 2018
2021b26
Move TensorImpl::ShareExternalPointer helper overloads to caffe2::Tensor
ezyang Sep 28, 2018
a86a61b
Implement caffe2::Tensor::raw_data() in terms of data()
ezyang Sep 28, 2018
a581804
Rewrite serialization to correctly handle partial reads/writes in all…
ezyang Sep 28, 2018
7f35e92
mutable lists (#10700)
suo Sep 28, 2018
149403f
Move TensorImpl ndim, size, itemsize and nbytes to caffe2::Tensor
ezyang Sep 28, 2018
3eb5940
codemod cuda_gpu_id to device_id (#12022)
bddppq Sep 28, 2018
bbae57d
Move TensorImpl size_from_dim, size_to_dim, size_between_dim, canonic…
ezyang Sep 28, 2018
f5a0c33
Move TensorImpl IsType, meta, dim32, dim, ExtractDeviceOption to caff…
ezyang Sep 28, 2018
04c0971
Special case BatchGather and BatchGatherGradient for block_size=1. (#…
nrsatish Sep 28, 2018
d291cf7
Ensuring positive definite matrix before constructing (#12102)
jeffreyksmithjr Sep 28, 2018
5be0bae
Use streams in JIT serialization, allow JIT serialization to/from buf…
lantiga Sep 28, 2018
b0248df
Docs: Change cuda(async) —> cuda(non_blocking) (#12158)
Sep 28, 2018
7ead1f1
Merge remote-tracking branch 'rocm_upstream/upstream' into ifu
iotamudelta Sep 28, 2018
0aff3cc
Fix broadcasting bug in StudentT (#12148)
fritzo Sep 28, 2018
65bf181
Add "ai.onnx.pytorch" onnx domain (#12157)
bddppq Sep 28, 2018
e7e10e6
Introduce builtin script functions (#12141)
zdevito Sep 28, 2018
8009b6c
Kill self_ty in TYPE_DERIVED_DEFINITION_NATIVE (#11903)
ssnl Sep 28, 2018
ab9a597
Disable inlinining of EnforceFailMessage (#12078)
aditya7fb Sep 28, 2018
0e779c2
Deduplicate canonical_axis_index_ with maybe_wrap_dim (#11891)
ezyang Sep 28, 2018
7b2c0a0
Adds support for NaN, +inf, -inf float scalars to CPU and CUDA fusers…
mruberry Sep 28, 2018
60061a2
Adding Declare and Export operators (#11954)
bwasti Sep 28, 2018
08e5ca1
Add filter<T>(NNModule) and explicit Declare/Export classes (#11955)
bwasti Sep 28, 2018
0a5dfa5
Add support for device annotations on blobs
bwasti Sep 28, 2018
ebc2643
Enable multiple external output (#10957)
jerryzh168 Sep 29, 2018
22ce606
Add caffe2_api to exported functions (#12184)
bwasti Sep 29, 2018
878e774
Turns optimizations off when checking trace (#12172)
mruberry Sep 29, 2018
a2ebbcc
fix unit tests on CI
iotamudelta Sep 29, 2018
40aa212
Support fp16 mkl engine in training
chocjy Sep 30, 2018
5ffc915
fix docs (#12126)
weiyangfb Sep 30, 2018
93ecf4d
Remove raise_from (#12185)
goldsborough Sep 30, 2018
572132f
copy_(Sparse, Sparse) for sparse tensor (#9005)
weiyangfb Sep 30, 2018
c3817e8
Temporary fix for LibTorch download link (#12212)
goldsborough Sep 30, 2018
9768b4d
support half float for SparseLengthsIndicesInGradientWeightedSumWithM…
Oct 1, 2018
f3c32a4
dnnlowp_16 -> dnnlowp_acc16 (#12205)
jspark1105 Oct 1, 2018
fed91f8
(Very small) allow trailing commas in assign or tuples (#11723)
Oct 1, 2018
006171f
Back out "[pytorch][PR] Revert "Move CreateContext to global registry…
jerryzh168 Oct 1, 2018
e43ffb0
nomnigraph - easy - some code cleanup for transformations_test (#12101)
duc0 Oct 1, 2018
7d7d336
Back out "codemod cuda_gpu_id to device_id"
Oct 1, 2018
ecb3835
change \gamma to \Gamma (#12214)
weiyangfb Oct 1, 2018
3010dc4
Revert D10123245: Back out "codemod cuda_gpu_id to device_id"
rratmansky Oct 1, 2018
eba1cf2
Unify style (#11949)
Oct 1, 2018
06f535d
More debug info in plan executor (#12183)
Oct 1, 2018
1b59cf8
Add support to use llvm 7 in CI
bddppq Oct 1, 2018
15d28e4
remove support for c extensions (#12122)
Oct 1, 2018
8fa7de3
Enable ROCM clang-7 build
bddppq Oct 1, 2018
35becd1
New version of PT1 model format (#12149)
houseroad Oct 1, 2018
23f86ad
Back out "[caffe2][mpscnn] Enable multiple external output"
jerryzh168 Oct 1, 2018
26df16e
Clear previous device option when keep_device is set in load op
bddppq Oct 2, 2018
ecace9e
Move crf in caffe2 from fb to oss (#12200)
seayoung1112 Oct 2, 2018
8af06d8
Use DFS scheduling only within single device (#11848)
Oct 2, 2018
2cbcaf4
Skip failing tests in test_sparse (#12229)
iotamudelta Oct 2, 2018
696498d
Delete stride updating logic from Caffe2, and make PyTorch error in t…
ezyang Oct 2, 2018
ff608a9
Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_i…
bddppq Oct 2, 2018
1d3f650
Revert D10098106: [pytorch][PR] [WIP] New version of PT1 model format
Oct 2, 2018
f0583cd
Merge remote-tracking branch 'rocm_upstream/upstream' into ifu
iotamudelta Oct 3, 2018
1edfd59
Skip failing test.
iotamudelta Oct 3, 2018
49 changes: 49 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.md
@@ -0,0 +1,49 @@
---
name: "\U0001F41B Bug Report"
about: Submit a bug report to help us improve PyTorch

---

## 🐛 Bug

<!-- A clear and concise description of what the bug is. -->

## To Reproduce

Steps to reproduce the behavior:

1.
1.
1.

<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->

## Expected behavior

<!-- A clear and concise description of what you expected to happen. -->

## Environment

Please copy and paste the output from our
[environment collection script](https://github.com/raw/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).

You can get the script and run it with:
```
wget https://github.com/raw/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```

- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:

## Additional context

<!-- Add any other context about the problem here. -->
9 changes: 9 additions & 0 deletions .github/ISSUE_TEMPLATE/documentation.md
@@ -0,0 +1,9 @@
---
name: "\U0001F4DA Documentation"
about: Report an issue related to https://pytorch.org/docs

---

## 📚 Documentation

<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->
24 changes: 24 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.md
@@ -0,0 +1,24 @@
---
name: "\U0001F680Feature Request"
about: Submit a proposal/request for a new PyTorch feature

---

## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->

## Pitch

<!-- A clear and concise description of what you want to happen. -->

## Alternatives

<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->

## Additional context

<!-- Add any other context or screenshots about the feature request here. -->
13 changes: 13 additions & 0 deletions .github/ISSUE_TEMPLATE/questions-help-support.md
@@ -0,0 +1,13 @@
---
name: "❓Questions/Help/Support"
about: Do you need support? We have resources.

---

## ❓ Questions and Help

### Please note that this issue tracker is not a help form and this issue will be closed.

We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:

- [Discussion Forum](https://discuss.pytorch.org/)
11 changes: 0 additions & 11 deletions .jenkins/pytorch/build.sh
@@ -102,17 +102,6 @@ fi
# Add the test binaries so that they won't be git clean'ed away
git add -f build/bin

# Test C FFI plugins
# cffi install doesn't work for Python 3.7
if [[ "$BUILD_ENVIRONMENT" != *pynightly* ]]; then
# TODO: Don't run this here
pip install cffi
git clone https://github.com/pytorch/extension-ffi.git
pushd extension-ffi/script
python build.py
popd
fi

# Test documentation build
if [[ "$BUILD_ENVIRONMENT" == *xenial-cuda8-cudnn6-py3* ]]; then
pushd docs
4 changes: 2 additions & 2 deletions .jenkins/pytorch/enabled-configs.txt
@@ -40,8 +40,8 @@ pytorch-macos-10.13-cuda9.2-cudnn7-py3-build
pytorch-docker-build-test
short-perf-test-cpu
short-perf-test-gpu
py2-clang3.8-rocm1.7.1-ubuntu16.04-build
py2-clang3.8-rocm1.7.1-ubuntu16.04-test
py2-clang7-rocmdeb-ubuntu16.04-build
py2-clang7-rocmdeb-ubuntu16.04-test
pytorch-ppc64le-cuda9.2-cudnn7-py3-build
pytorch-ppc64le-cuda9.2-cudnn7-py3-test
pytorch-ppc64le-cuda9.1-cudnn7-py3-build
1 change: 1 addition & 0 deletions .jenkins/pytorch/test.sh
@@ -102,6 +102,7 @@ test_aten() {
SUDO=sudo
fi

${SUDO} ln -s "$TORCH_LIB_PATH"/libc10* build/bin
${SUDO} ln -s "$TORCH_LIB_PATH"/libcaffe2* build/bin
${SUDO} ln -s "$TORCH_LIB_PATH"/libnccl* build/bin

2 changes: 0 additions & 2 deletions aten/src/ATen/Registry.h

This file was deleted.

27 changes: 27 additions & 0 deletions aten/src/ATen/core/Half-inl.h
@@ -190,6 +190,33 @@ inline AT_HOST_DEVICE Half operator/(int a, Half b) {
return static_cast<Half>(a) / b;
}

//// Arithmetic with longs
inline AT_HOST_DEVICE Half operator+(Half a, long b) {
return a + static_cast<Half>(b);
}
inline AT_HOST_DEVICE Half operator-(Half a, long b) {
return a - static_cast<Half>(b);
}
inline AT_HOST_DEVICE Half operator*(Half a, long b) {
return a * static_cast<Half>(b);
}
inline AT_HOST_DEVICE Half operator/(Half a, long b) {
return a / static_cast<Half>(b);
}

inline AT_HOST_DEVICE Half operator+(long a, Half b) {
return static_cast<Half>(a) + b;
}
inline AT_HOST_DEVICE Half operator-(long a, Half b) {
return static_cast<Half>(a) - b;
}
inline AT_HOST_DEVICE Half operator*(long a, Half b) {
return static_cast<Half>(a) * b;
}
inline AT_HOST_DEVICE Half operator/(long a, Half b) {
return static_cast<Half>(a) / b;
}

/// NOTE: we do not define comparisons directly and instead rely on the implicit
/// conversion from at::Half to float.

5 changes: 4 additions & 1 deletion aten/src/ATen/core/LegacyTypeDispatch.cpp
@@ -9,7 +9,10 @@ LegacyTypeDispatch & globalLegacyTypeDispatch() {
return singleton;
}

AT_DEFINE_REGISTRY(LegacyTypeInitRegistry, LegacyTypeInitInterface, LegacyTypeInitArgs)
C10_DEFINE_REGISTRY(
LegacyTypeInitRegistry,
LegacyTypeInitInterface,
LegacyTypeInitArgs)

const LegacyTypeInitInterface& getLegacyTypeInit() {
static std::unique_ptr<LegacyTypeInitInterface> legacy_type_init;
8 changes: 6 additions & 2 deletions aten/src/ATen/core/LegacyTypeDispatch.h
@@ -43,8 +43,12 @@ struct CAFFE2_API LegacyTypeInitInterface {
}
};
struct CAFFE2_API LegacyTypeInitArgs {};
AT_DECLARE_REGISTRY(LegacyTypeInitRegistry, LegacyTypeInitInterface, LegacyTypeInitArgs);
#define REGISTER_LEGACY_TYPE_INIT(clsname) AT_REGISTER_CLASS(LegacyTypeInitRegistry, clsname, clsname)
C10_DECLARE_REGISTRY(
LegacyTypeInitRegistry,
LegacyTypeInitInterface,
LegacyTypeInitArgs);
#define REGISTER_LEGACY_TYPE_INIT(clsname) \
C10_REGISTER_CLASS(LegacyTypeInitRegistry, clsname, clsname)

CAFFE2_API const LegacyTypeInitInterface& getLegacyTypeInit();
