forked from pytorch/pytorch
Merge from upstream #189
Merged
Conversation
Summary: Pull Request resolved: pytorch#11205 Our short-term plan for supporting out-of-tree complex development requires an external library to add a custom subclass of Type without access to the code-generation facilities in ATen. This commit reorganizes Type so as to minimize the amount of boilerplate you have to write when making a subclass of Type. In particular, it:
- Creates new CPUTypeDefault/CUDATypeDefault classes, which you are intended to inherit from, providing default CPU/CUDA implementations that are layout/dtype agnostic.
- Adds new getCPUAllocator() and getCUDAAllocator() functions as a more public API to get your hands on an Allocator.
- Adds allocator() and getDeviceFromPtr(), abstracting the device-specific parts of the storage() methods; these methods are now implemented in the base TypeDefault.
- Deletes the static typeString() method, which is now dead.
- Moves is_cuda/is_sparse/is_distributed to TypeDefault.
Reviewed By: SsnL Differential Revision: D9631619 fbshipit-source-id: 40b600d99691230e36e03eb56434c351cbc2aa3a
Summary: Pull Request resolved: pytorch#11215 I found these by deleting the implicit conversion of Type to TensorOptions and then fixing sites. This isn't a complete refactor, because I ran out of steam after fixing this many and decided to keep the implicit conversion. Still, why waste a perfectly good refactor? Reviewed By: gchanan, cpuhrsch Differential Revision: D9634750 fbshipit-source-id: 4d8fb778e13e6e24b888b1314a02709b2cb00b62
Summary: keep net type info when generating model complete net. This will keep the performance optimization option Pull Request resolved: pytorch#11032 Reviewed By: wat3rBro Differential Revision: D9564125 Pulled By: harouwu fbshipit-source-id: c6546af9b1d4ff5eddf6124e24a5da1b8baf47df
Summary: Fixes: pytorch#8988 Pull Request resolved: pytorch#10025 Reviewed By: ezyang Differential Revision: D9540967 Pulled By: yf225 fbshipit-source-id: 6ba2a7777162983977db884b693e6f4543b31aeb
Summary: Pull Request resolved: pytorch#11123 This adds an operator that fills a tensor with uniform(min, max) samples. The implementation uses the fp32 generator and converts to fp16; if performance becomes an issue we could resort to intrinsics. Reviewed By: jspark1105, chocjy Differential Revision: D9598142 fbshipit-source-id: 5aeab99acf7c3596fa6c33611d9d2c484f7c1145
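A minimal sketch of the approach described above (sample in fp32, then convert to half), written as illustrative PyTorch code rather than the Caffe2 operator the PR adds; the helper name is made up:

```python
import torch

# Sketch of the fp32-then-convert strategy described above; not the Caffe2 op itself.
def uniform_fill_fp16(shape, low, high):
    x = torch.empty(shape, dtype=torch.float32).uniform_(low, high)  # fp32 sampling
    return x.to(torch.float16)                                       # convert to half

print(uniform_fill_fp16((2, 3), -1.0, 1.0))
```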
pytorch#11246) Summary: Also, make `torch.isclose` work with integral tensors and refactor `_check_trace` a bit. zdevito Pull Request resolved: pytorch#11246 Differential Revision: D9652701 Pulled By: apaszke fbshipit-source-id: fb0bdbfd1952e45e153541e4d471b423a5659f25
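A quick illustration of `torch.isclose` on integral tensors (with a recent PyTorch; with the default tolerances this effectively reduces to exact equality for integers):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 2, 4])
# Element-wise closeness check on integer tensors.
print(torch.isclose(a, b))  # tensor([ True,  True, False])
```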
Summary: zdevito Pull Request resolved: pytorch#11224 Differential Revision: D9652703 Pulled By: apaszke fbshipit-source-id: 558e39457e590cad07516e5bb2ecb12789564950
Summary: Pull Request resolved: pytorch#11127 it's invalid to capture `predicate` by reference as it's a local variable. capture it by value instead. Differential Revision: D9600115 fbshipit-source-id: 92e0130d0a74908380b75ade5c3492df49e25941
Summary: Also enables debug for non-MSVC for kernel codegen Pull Request resolved: pytorch#11227 Differential Revision: D9656506 Pulled By: cpuhrsch fbshipit-source-id: 667195cb55de1a1a9042b6b1c4436e9c6c743333
Summary: This PR adds a hooks interface for registering Types for complex scalar types, and a sample implementation of the hook in test_cpp_extensions. The hook registration is patterned off of the existing CUDA hooks. Signed-off-by: Edward Z. Yang <[email protected]> Pull Request resolved: pytorch#11216 Differential Revision: D9654840 Pulled By: ezyang fbshipit-source-id: 7b97646280d584f8ed6e14ee10a4abcd04cf2987
Summary: Pull Request resolved: pytorch#10717 Differential Revision: D9562888 Pulled By: li-roy fbshipit-source-id: 8f5d62fd0a44aca0a41dc10438e7bb91cc2a972a
Summary: `__repr__` currently fails for distributions with lazy attributes in PyTorch master, throwing a `KeyError`. This fixes the issue. **Additionally:**
- Added `logits` to `arg_constraints` for distributions that accept either `probs` or `logits`. This is both to have `__repr__` display the `logits` param when available, and to be able to do validation checks (e.g. NaN checks) when the logit parametrization is used. fritzo, alicanb - I think there were reasons why we had not done so in the first place, but I am unable to recall now. It passes all the tests, but let me know if there is something that I am missing at the moment.
- There are certain distributions, e.g. `OneHotCategorical`, which won't show any parameters because they use a `categorical` instance under the hood and neither `logits` nor `probs` from `arg_constraints` is present in the instance's `__dict__`. This isn't addressed in this PR.
cc vishwakftw, fritzo, nadavbh12, apaszke Pull Request resolved: pytorch#11263 Differential Revision: D9654959 Pulled By: apaszke fbshipit-source-id: 16f5b20243fe8e2c13e9c528050d4df0b8ea6e45
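A short example of the behavior being fixed, using the current `torch.distributions` API (the exact repr string shown is indicative, not guaranteed):

```python
import torch
from torch.distributions import Bernoulli

# Constructing with logits: repr should display the lazily-initialized
# parametrization instead of raising a KeyError.
d = Bernoulli(logits=torch.tensor([-1.0, 0.0, 2.0]))
print(d)                  # e.g. Bernoulli(logits: tensor([-1., 0., 2.]))
print(d.arg_constraints)  # 'logits' now appears alongside 'probs'
```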
Summary: Pull Request resolved: pytorch#10874 Fixes the log message "WARNING:data_workers:Warning, data loading lagging behind: name=0" where instead of source name the size of a queue is reported Reviewed By: panshen1, Novitial Differential Revision: D9506606 fbshipit-source-id: 03717cfa9b991afb335ef877378afa3b52fd8f22
Summary: Allows multiplication of e.g. numpy.float32 with tensors. This came up with pytorch#9468. If you want this, then after the other patch is done I'll add tests (but that would be conflicting, so I prefer to wait). Pull Request resolved: pytorch#9659 Differential Revision: D8948078 Pulled By: weiyangfb fbshipit-source-id: c7dcc57b63e2f100df837f70e1299395692f1a1b
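A quick usage example of the interaction in question (numpy scalar types multiplied with tensors, on either side):

```python
import numpy as np
import torch

t = torch.ones(3)
# numpy scalar types on either side of the multiplication:
print(np.float32(2.0) * t)
print(t * np.float32(2.0))
```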
Summary: This PR adds a .travis.yml check for our C++ documentation. The goal is to avoid any documentation/comments in our C++ code that would break the doxygen output and possibly ruin the C++ documentation site (currently https://pytorch.org/cppdocs). For this, we:
1. Run doxygen and record any warnings,
2. Filter out some known bogus warnings,
3. Count the remaining warnings,
4. Fail the check if (3) is non-zero.
soumith Pull Request resolved: pytorch#11124 Differential Revision: D9651011 Pulled By: goldsborough fbshipit-source-id: 30f776d23bb6d6c482c54db32828b4b99547e87b
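A rough sketch of that four-step check in Python (the real check lives in the Travis job; the Doxyfile path and the ignore pattern below are placeholders, not the actual ones used in PyTorch's CI):

```python
import re
import subprocess
import sys

# Run doxygen, drop known-bogus warnings, and fail if any real warnings remain.
KNOWN_BOGUS = [re.compile(r"documented symbol .* was not declared")]  # placeholder pattern

result = subprocess.run(["doxygen", "docs/cpp/Doxyfile"],  # placeholder Doxyfile path
                        capture_output=True, text=True)
warnings = [
    line
    for line in result.stderr.splitlines()
    if "warning:" in line and not any(p.search(line) for p in KNOWN_BOGUS)
]
print(f"{len(warnings)} doxygen warning(s) remain")
sys.exit(1 if warnings else 0)
```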
Summary: Fixes pytorch#11057. Pull Request resolved: pytorch#11245 Differential Revision: D9652698 Pulled By: apaszke fbshipit-source-id: 4c5006e32e599c35367aa5acfae45de3ab8ac176
Summary: Pull Request resolved: pytorch#11249 Reviewed By: Ac2zoom Differential Revision: D9652526 Pulled By: houseroad fbshipit-source-id: 12a9038beddd227a2f9e2178edf4e8d623488c3e
Summary: Found these when compiling the new master with gcc 7.3 Pull Request resolved: pytorch#11257 Differential Revision: D9656612 Pulled By: SsnL fbshipit-source-id: 7acb19e13204c010238dab7bc6973cc97b96f9a4
Summary: Deduplicates implementations and reduces sources of failure Pull Request resolved: pytorch#11272 Differential Revision: D9659167 Pulled By: cpuhrsch fbshipit-source-id: 759bfba4fd90795038afe684d9829f5f41f98109
…#10744) Summary: Pull Request resolved: pytorch#10744 As title Reviewed By: jspark1105 Differential Revision: D9436387 fbshipit-source-id: 578b7a6d98843d57e3f8f4c564727e9cadbedd78
Summary: The existing tests had every rank run send to every other rank and only then switch to recv mode. This only works if the send operations are non-blocking and the passed tensors are immediately copied to some kind of send buffer. Instead, every send must be matched with a recv on the other side, because from the API perspective they may block. E.g. imagine a 1GB tensor being sent to every other rank. It can only go through if there is a recv on the other side, or it will deadlock. This change reflects this in the send/recv unit tests. Pull Request resolved: pytorch#11275 Differential Revision: D9658197 Pulled By: pietern fbshipit-source-id: fb6a3fc03b42343a9dfeed0def30d94914e76974
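A minimal sketch of the matched send/recv pattern the test change enforces, assuming a process group has already been initialized; the helper name is made up:

```python
import torch
import torch.distributed as dist

# Each pair is ordered so that one side sends while the other receives,
# so a potentially blocking send always has a matching recv on the peer.
def exchange(tensor, peer):
    rank = dist.get_rank()
    out = torch.empty_like(tensor)
    if rank < peer:
        dist.send(tensor, dst=peer)
        dist.recv(out, src=peer)
    else:
        dist.recv(out, src=peer)
        dist.send(tensor, dst=peer)
    return out
```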
Summary:
- In Python 2, use of `/` (regardless of int/float/Tensor) causes a compiler error if `from __future__ import division` is not imported in the file.
- The `/` operator is universally set to do "true" division for integers.
- Added a `prim::FloorDiv` operator because it is used in loop unrolling.
The error if users use `/` in Python 2 without importing from `__future__` occurs when building the JIT AST. cc apaszke zdevito Pull Request resolved: pytorch#11016 Differential Revision: D9613527 Pulled By: zou3519 fbshipit-source-id: 0cebf44d5b8c92e203167733692ad33c4ec9dac6
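A small example of the division semantics described above, written against the current `torch.jit.script` API: `/` does true division even for integers, while `//` floors (the behavior the new floor-division operator backs inside the JIT).

```python
import torch

@torch.jit.script
def div_example(x: int, y: int):
    # '/' always performs true division; '//' is floor division.
    return x / y, x // y

print(div_example(7, 2))  # (3.5, 3)
```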
Summary: Persistent rnns provide much better performance on V100 with half input data for a variety of cases. Pull Request resolved: pytorch#11248 Differential Revision: D9665687 Pulled By: ezyang fbshipit-source-id: 2bd09a7eb1f5190aadb580977b0ba956e21a7dd5
Summary: Pull Request resolved: pytorch#11131 Reviewed By: xianjiec Differential Revision: D9358415 fbshipit-source-id: 38bf0e597e22d540d9e985ac8da730f80971d745
Summary: Pull Request resolved: pytorch#11254 Previously we used DeviceType from caffe2.proto directly, but it is an `enum` with implicit conversion to int, which offers no type safety; e.g. we have to explicitly check that a device type is valid in event.h:
```
template <int d>
struct EventCreateFunctionRegisterer {
  explicit EventCreateFunctionRegisterer(EventCreateFunction f) {
    static_assert(d < MaxDeviceTypes, "");
    Event::event_creator_[d] = f;
  }
};
```
at::DeviceType is an `enum class`; it has no implicit conversion to int and provides better type-safety guarantees. In this diff we have done the following refactor (taking CPU as an example):
1. caffe2::DeviceType → caffe2::DeviceTypeProto
2. caffe2::CPU → caffe2::PROTO_CPU
3. caffe2::DeviceType = at::DeviceType
4. caffe2::CPU = at::DeviceType::CPU
codemod -d caffe2/caffe2 --extensions h,cc,cpp 'device_type\(\), ' 'device_type(), PROTO_' + some manual changes
In short, after this diff, in C++, caffe2::CPU refers to at::DeviceType::CPU and the old proto caffe2::CPU becomes caffe2::PROTO_CPU. On the Python side, we have a temporary workaround that aliases `caffe2_pb2.CPU = caffe2_pb2.PROTO_CPU` to make the change easier to review; this will be removed later. Reviewed By: ezyang Differential Revision: D9545704 fbshipit-source-id: 461a28a4ca74e616d3ee183a607078a717fd38a7
Summary: This PR adds all PyTorch and Caffe2 job configs to CircleCI. Steps for the CircleCI mini-trial:
- [ ] Make sure this PR passes Jenkins CI and fbcode internal tests
- [x] Approve this PR
- [ ] Ask CircleCI to turn up the number of build machines
- [ ] Land this PR so that the new `.circleci/config.yml` will take effect
Several Caffe2 tests are flaky on CircleCI machines and hence skipped when running on CircleCI. A proper fix for them will be worked on after a successful mini-trial. Pull Request resolved: pytorch#11264 Differential Revision: D9656793 Pulled By: yf225 fbshipit-source-id: 7832e90018f3dff7651489c04a179d6742168fe1
Summary: I'm setting up an automatic sync job for cppdocs and need two fixes to the cpp docs config:
1. Right now the cppdocs use the `torch` package to figure out the version. For C++ docs, all I really need from the built package are the generated Tensor.h and Functions.h files. I can actually generate those directly via `aten/src/ATen/gen.py`, so I can skip building PyTorch altogether and save 10 minutes in the sync job! For this I need to avoid using the torch package in the docs.
2. Internal proxy issues prevent using the git link for sphinx_rtd_theme. We can just use the pip package for the cppdocs (not for the normal PyTorch docs).
soumith ezyang Pull Request resolved: pytorch#11300 Differential Revision: D9667193 Pulled By: goldsborough fbshipit-source-id: 5567e0b3d3bdce03f5856babdb4ff76bcee91846
Summary: Pull Request resolved: pytorch#11256
- In the deleteNode method, remove the optional deleteEdge flag as it's not used
- In the deleteEdge method, remove the optional removeRef flag as it's not used
- In the replaceNode method, remove the optional newHead_ parameter as it's not used; also simplify the implementation by just calling replaceInEdges and replaceOutEdges
- Remove importNode and importEdge as they're not used
- Add getEdgeIfExists, which is like getEdge() but returns nullptr instead of throwing when the edge does not exist
- Reduce verbosity in the basic graph unit test and add more test cases for ReplaceEdges
Differential Revision: D9650913 fbshipit-source-id: 6c18b37bef0d2abe1b57fb4fc47bfdbcee387694
Summary: Needed for FULL_CAFFE2=1 with statically linked CUDA libraries. Waiting on advice from Nvidia Pull Request resolved: pytorch#10911 Reviewed By: pjh5 Differential Revision: D9636256 Pulled By: orionr fbshipit-source-id: fcad7945910b6c8fb5f52e81cc87dad5fcfb3c65
Summary: Pull Request resolved: pytorch#11291 In S163230, we've found that the CuDNN 7 upgrade causes an accuracy drop when training convolutional networks such as ResNeXt-101 (~0% accuracy) and video R(2+1)D (65% --> 63%). Our current theory for this accuracy loss is the new "CUDNN_BATCHNORM_SPATIAL_PERSISTENT" mode in the spatialBN operator, which we made the default in Caffe2. According to the CuDNN manual (https://fburl.com/z996mr13), this mode may introduce some limitations on the input data range and cause overflow (which outputs NaN). NaN is probably not the case here, because we're seeing a few percent of accuracy drop but not gradient explosion or failure. However, this "performance-optimized" code path may introduce accuracy loss (which is not caught by our unit test case because its input data range is [-0.5, 0.5]). Reviewed By: kuttas, stephenyan1231 Differential Revision: D9601217 fbshipit-source-id: 73c2690c19cb1f02ea4e5e2200f50128df4f377b
Summary: Turns out that '' net.type is not acceptable to CreateNet. But empty net.type is acceptable. Fix that in this diff. Also this is related to T33613083 Pull Request resolved: pytorch#11286 Reviewed By: Maratyszcza, wat3rBro Differential Revision: D9659920 Pulled By: harouwu fbshipit-source-id: d68f24b754e18e1121f029656d885c48ab101946
Summary: Not a lot changed Pull Request resolved: pytorch#11332 Differential Revision: D9683680 Pulled By: zou3519 fbshipit-source-id: 95f444e54049dd268fc10effe425ef2df79c6467
Summary: Pull Request resolved: pytorch#11028 Reviewed By: salexspb Differential Revision: D7715107 Pulled By: costin-eseanu fbshipit-source-id: a4f73d53c0192b9826451b4bba4ab0992abbb1a2
Summary:
1. Add documentation to Linear and improve documentation for RNNs
2. Fix preprocessing in C++ docs by adding the correct include path
3. Make myself and ebetica codeowners of docs/cpp to improve development speed
ebetica ezyang soumith Pull Request resolved: pytorch#11313 Differential Revision: D9683615 Pulled By: goldsborough fbshipit-source-id: 84ea32f9ea6b4060744aabbf5db368776a30f0b5
Summary: Pull Request resolved: pytorch#11315 Rename unit tests file to make it consistent with fb cpp style guideline "The unittest for MyFoo.cpp should be named MyFooTest.cpp." Reviewed By: yinghai Differential Revision: D9671519 fbshipit-source-id: 44ed6794f6e479d190916db8064eee692e3ad876
Summary: This lets you compile builtin functions from C++ without having a dependence on Python
```cpp
auto module = torch::jit::compile(R"JIT(
def my_script_method(x, y):
    return torch.relu(x) + y
)JIT");
IValue result = module->run_method("my_script_method", 1, 2);
```
goldsborough zdevito apaszke Pull Request resolved: pytorch#10847 Differential Revision: D9543461 Pulled By: driazati fbshipit-source-id: 6160dae094030ca144a0df93cb9f26aa78c8cf27
Summary: This will allow users to set customized timeout option for the store. Tested by my own debug print to make sure that C++ actually used the timeout Pull Request resolved: pytorch#11265 Differential Revision: D9666164 Pulled By: teng-li fbshipit-source-id: 4eb6441783da106a3fd59b95457e503e83e4640f
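The timeout is a store constructor option; a minimal sketch using today's `torch.distributed.TCPStore` API (the host/port values are placeholders, and the exact surface at the time of this PR may have differed):

```python
from datetime import timedelta

import torch.distributed as dist

# Stand up a single-node key-value store with a customized timeout.
# "127.0.0.1" / 29500 are placeholder values.
store = dist.TCPStore("127.0.0.1", 29500, 1, True, timedelta(seconds=30))
store.set("key", "value")
print(store.get("key"))  # b'value'
```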
Summary: Pull Request resolved: pytorch#11338 The `min_` and `max_` values of the filler are in `double` format, but when we are filling a specific type of tensor their values can exceed the type limits, resulting in a crash. This diff checks the type limits first and, if `min_`/`max_` is out of the limits, clips it. Reviewed By: highker Differential Revision: D9684455 fbshipit-source-id: 6da98a03c57f3296abaddc7c5cfc1c836c611eb0
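An illustrative sketch of that clipping logic (not the Caffe2 implementation), using numpy's type-range helpers; the function name is made up:

```python
import numpy as np

def clip_to_dtype_limits(min_, max_, dtype):
    """Clip double-valued filler bounds to the representable range of `dtype`."""
    info = np.iinfo(dtype) if np.issubdtype(dtype, np.integer) else np.finfo(dtype)
    lo = max(min_, float(info.min))
    hi = min(max_, float(info.max))
    return lo, hi

print(clip_to_dtype_limits(-1e12, 1e12, np.int32))   # clipped to the int32 range
print(clip_to_dtype_limits(-0.5, 0.5, np.float16))   # unchanged
```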
Summary: Pull Request resolved: pytorch#11098 Added a test for testing CPU version across multiple devices. Reviewed By: enosair, BIT-silence Differential Revision: D9584520 fbshipit-source-id: 0d8c85e6d402bc7b34d5f8f16ef655ff9b61b49e
Summary: We shouldn't use system Eigen in any cases when building with setup.py. If people want to use system Eigen (not from third_party) they can build with CMake for now. Pull Request resolved: pytorch#11334 Reviewed By: pjh5 Differential Revision: D9689450 Pulled By: orionr fbshipit-source-id: baf616b9f195692942151ad201611dcfe7d927ba
Summary: This is an experimental build on top of what orionr and mingzhe09088 built. Essentially, the idea is that we will need separate *_API versions for different shared libraries. If this theory is right, I'll try to clean up the design a bit and document it properly. Pull Request resolved: pytorch#11266 Reviewed By: orionr Differential Revision: D9682942 Pulled By: Yangqing fbshipit-source-id: c79653199e67a1500c9174f39f8b0357324763f3
Summary: Pull Request resolved: pytorch#11190 As discussed with Alexander Sidorov, params_bytes refer to the number of bytes we're reading for parameters, not the size of parameters. They only differ in sparse operators. Reviewed By: mdschatz Differential Revision: D9628635 fbshipit-source-id: 9e2aed0cf59388928dc69b8534cf254f0347c9c8
Summary: In pytorch#9466 I got rid of storage views and eliminated all places where they were used... OR SO I THOUGHT. In actuality, under certain conditions (specifically, if you trained a CUDA multiprocessing model shared over CUDA IPC and then serialized your parameters), you could also serialize storage slices to the saved model format. In pytorch#9466, I "fixed" the case when you loaded the legacy model format (really, just unshared the storages--not strictly kosher but if you aren't updating the parameters, shouldn't matter), but NOT the modern model format, so such models would fail. So, I could have applied the legacy model format fix too, but hyperfraise remarked that he had applied a fix that was effectively the same as unsharing the storages, but it had caused his model to behave differently. So I looked into it again, and realized that using a custom deleter, I could simulate the same behavior as old storage slices. So back they come. In principle, I could also reimplement storage views entirely using our allocators, but I'm not going to do that unless someone really really wants it. Fixes pytorch#10120. Signed-off-by: Edward Z. Yang <[email protected]> Pull Request resolved: pytorch#11314 Reviewed By: ailzhang Differential Revision: D9671966 Pulled By: ezyang fbshipit-source-id: fd863783d03b6a6421d6b9ae21ce2f0e44a0dcce
…torch#11343) Summary: Pull Request resolved: pytorch#11343 make the generated classes (OpClasses.h...) consistent with fb cpp code style Reviewed By: yinghai Differential Revision: D9689487 fbshipit-source-id: 450e742d2462115d1bf41b9ea88d20df0a842b2b
Summary: On the way to pytorch#10774 This PR adds advanced indexing with tensors. The approach is to desugar advanced indexing into an at::index op. This is exactly how normal pytorch does it. [(I used this code as reference)](https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/python_variable_indexing.cpp) Supporting sequences is a little tricky because JIT script doesn't have an easy way to turn arbitrary n-dimensional python lists into a tensor (it would be easy if we supported `torch.tensor`), so that'll come in a future PR. cc jamesr66a zdevito Pull Request resolved: pytorch#10862 Differential Revision: D9659449 Pulled By: zou3519 fbshipit-source-id: 56d293720d44c0fd27909e18327ab3985ddfced6
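A quick hedged example of tensor (advanced) indexing inside `torch.jit.script` matching eager behavior; the function name is made up:

```python
import torch

@torch.jit.script
def select_rows(x, idx):
    # Advanced indexing with a tensor; in the JIT this desugars to at::index.
    return x[idx]

x = torch.arange(12).reshape(4, 3)
idx = torch.tensor([0, 2])
print(select_rows(x, idx))
print(torch.equal(select_rows(x, idx), x[idx]))  # matches eager indexing
```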
…8ffb52 (pytorch#11346) Summary: Pull Request resolved: pytorch#11346 Previous import was 1b09eb14c2c781fae078fa6b1c0390ba6fc0898c Included changes: - **[bff0b88](onnx/onnx@bff0b88)**: Add DynamicSlice experimental op (ROCm#1377) <James Reed> - **[91a7b8e](onnx/onnx@91a7b8e)**: statCoverage(model) (ROCm#1246) <Akshay Chalana> - **[36643c6](onnx/onnx@36643c6)**: fix the doc for softmax (ROCm#1374) <Lu Fang> - **[8c64acd](onnx/onnx@8c64acd)**: Silence usused result warning in ONNXIFI wrapper cleanup. Fix ROCm#1344 (ROCm#1371) <Marat Dukhan> - **[53b20f6](onnx/onnx@53b20f6)**: Add the ability to deprecate an OpSchema (ROCm#1317) <Ryan Hill> - **[8aec4e2](onnx/onnx@8aec4e2)**: [Anderspapitto patch] fix the shape inference for broadcasting (ROCm#1368) <Lu Fang> Reviewed By: jamesr66a Differential Revision: D9691533 fbshipit-source-id: 6aff6ce04ade37182e2ffe9bc83eb86846bc722d
Summary: Pull Request resolved: pytorch#10776 as title Reviewed By: chocjy Differential Revision: D9458099 fbshipit-source-id: f840d4f1542e8180f41cc0732c8468fa43805ab8
…#11318) Summary: Fixed a few bugs that were not tested in the c10d frontend APIs, including get_rank, get_world_size, and destroy_process_group of a given group. These APIs are added to the CI tests. Also added all the group related tests, including full-group, and partial groups (existing ones), since both will hit different code paths. Also removed experimental APIs for c10d initially used in DDP, now we don't use it anyway. Pull Request resolved: pytorch#11318 Reviewed By: pietern Differential Revision: D9675896 Pulled By: teng-li fbshipit-source-id: a2eac2c57933effa2d139855f786e64919a95bfc
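A sketch of the per-group frontend APIs exercised by the new tests, assuming `dist.init_process_group(...)` has already been called on every process and the job runs with multiple ranks; the function name is made up:

```python
import torch.distributed as dist

def subgroup_demo():
    group = dist.new_group(ranks=[0, 1])  # every rank must call new_group
    if dist.get_rank() in (0, 1):         # only members query the subgroup
        print(dist.get_rank(group=group))        # rank within the subgroup
        print(dist.get_world_size(group=group))  # subgroup size (here, 2)
        dist.destroy_process_group(group)        # tear down just this group
```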
…ch#10888) Summary: Pull Request resolved: pytorch#10888 Add cuda version of SpatialBNOp also optimize SpatialBN on CPU Reviewed By: houseroad Differential Revision: D9512435 fbshipit-source-id: 6f828c88d56d30dc9a2f98a297a161c35cc511b1
Summary:
* Purge hcSPARSE now that rocSPARSE is available
* Integrate a custom hcc and HIP
* hcc brings two important compiler fixes (fixes hundreds of unit tests)
* HIP brings a smart dispatcher that allows us to avoid a lot of static_casts (we haven't yet removed the automatic static_casts, but this catches some occurrences the script did not catch)
* Mark 5 unit tests as skipping that have regressed with the new hcc (we don't know yet what is at fault)
* Optimize bitonic sort: the comparator is always an empty struct, therefore passing it by value saves at least 3 bytes. It also removes an ambiguity around passing references to `__global__` functions
Pull Request resolved: pytorch#11198 Differential Revision: D9652340 Pulled By: ezyang fbshipit-source-id: f5af1d891189da820e3d13b7bed91a7a43154690
Summary: We need to remove nomnigraph from the list of public libraries in order to support libtorch extensions. Easiest way to do this is to include it into the Caffe2 source like all other caffe2/core/ code. However, because the headers are in a different place, we need to include them for linked libraries (pybind, tests, etc). On an upside, this means that nomnigraph is now default hidden visibility too. FYI peterjc123 xkszltl goldsborough bwasti Yangqing Pull Request resolved: pytorch#11303 Reviewed By: pjh5 Differential Revision: D9694932 Pulled By: orionr fbshipit-source-id: 5db3eb20bc5ddc873ce9151236b74663fbb33ed8
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request on May 15, 2023
When tensor is resized, reference array to it's sizes may become invalid. Make a copy in advance. <details> <summary>ASAN report</summary> ``` ================================================================= ==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390 READ of size 8 at 0x61000013d790 thread T0 #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154 ROCm#1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215 ROCm#2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69 ROCm#3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177 ROCm#4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++- v11/bits/stl_algobase.h:1162 ROCm#5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/ stl_algobase.h:1211 ROCm#6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s tl_algobase.h:1219 ROCm#7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg obase.h:1556 ROCm#8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188 ROCm#9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341 ROCm#10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab leTypeManual.cpp:408 ROCm#11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1 0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 ROCm#12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10: :ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) 
/home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 ROCm#13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 ROCm#14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 ROCm#15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 ROCm#16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c 10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 ROCm#17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144 ROCm#18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847 ROCm#19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto rch/torch/csrc/autograd/VariableTypeManual.cpp:243 ROCm#20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1 0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10 ::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu nctionIntoFunctor.h:13 ROCm#21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10: :ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, 
c10::optional<c10::MemoryFormat>)>::call(c 10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor .h:480 ROCm#22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 ROCm#23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 ROCm#24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at en/src/ATen/core/dispatch/Dispatcher.h:639 ROCm#25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487 ROCm#26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137 ROCm#27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452 ROCm#28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417 ROCm#29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419 ROCm#30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344 ROCm#31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#33 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#44 0x3ffa2e02b67 in _PyEval_Vector 
Python/ceval.c:5065 ROCm#45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#56 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#65 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#72 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#81 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#90 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#95 0x3ffa2c8e941 in 
_PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267 ROCm#98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#111 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#118 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#133 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#142 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#146 0x3ffa2c8aec1 in 
_PyFunction_Vectorcall Objects/call.c:342 ROCm#147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305 ROCm#151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#159 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#168 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#183 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#190 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#197 0x3ffa2c8ada9 in PyObject_Call 
Objects/call.c:317 ROCm#198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#205 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#214 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#225 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290 ROCm#232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317 ROCm#233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943 ROCm#234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#240 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53 ROCm#247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#248 
0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#249 0x3ffa2e05447 in call_function Python/ceval.c:5891 ROCm#250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215 0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800) freed by thread T0 here: #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 ROCm#1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75 previously allocated by thread T0 here: #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 ROCm#1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498 ROCm#2 0x3ff76f79e17 (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17) SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const Shadow bytes around the buggy address: 0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd =>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd 0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa 0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==1115867==ABORTING ``` </details> <details> <summary>Additional backtraces (not full)</summary> Memory deallocation: ``` #0 operator delete (ptr=0x61000013d740) at 
/var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 ROCm#1 0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75 ROCm#2 0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291 ROCm#3 0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370 ROCm#4 0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80 ROCm#5 0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90 ROCm#6 0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173 ROCm#7 0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) ( this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 ROCm#8 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 ROCm#9 0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > ( unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 ROCm#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=..., dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96 ROCm#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 ROCm#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 ROCm#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144 ROCm#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847 ROCm#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401 ``` Memory access: ``` #0 c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215 ROCm#1 0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69 ROCm#2 0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) 
at /home/user/pytorch/c10/core/SymInt.h:177 ROCm#3 0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162 ROCm#4 0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211 ROCm#5 0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219 ROCm#6 0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556 ROCm#7 0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188 ROCm#8 0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341 ROCm#9 0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408 ROCm#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c 10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 ROCm#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt >, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional< c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...) 
at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480 ROCm#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > ( unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50 ROCm#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=..., dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90 ROCm#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656 ROCm#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const ( this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492 ROCm#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144 ROCm#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847 ROCm#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243 ``` </details> Pull Request resolved: pytorch#101064 Approved by: https://github.com/Skylion007, https://github.com/albanD
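The trace above boils down to a dangling-view bug: `torch::ADInplaceOrView::resize_` holds a non-owning `ArrayRef<SymInt>` view of the tensor's size metadata across the redispatch, the redispatched functionalization kernel frees the `TensorImpl` that owns that metadata, and the later size comparison then reads freed memory. The sketch below is a minimal, self-contained illustration of that pattern using made-up stand-in types (`FakeTensor`, `FakeImpl`, `fake_resize`); it is not the real c10/ATen API and not the actual patch from pytorch#101064.

```
// Hypothetical stand-ins for at::Tensor / c10::TensorImpl; not the real types.
#include <cstdio>
#include <memory>
#include <utility>
#include <vector>

struct FakeImpl {                       // plays the role of c10::TensorImpl
  std::vector<long> sizes;              // size metadata owned by the impl
};

struct FakeTensor {                     // plays the role of at::Tensor
  std::shared_ptr<FakeImpl> impl;
  // Non-owning view of the impl's metadata, like sizes()/sym_sizes().
  const std::vector<long>& sizes() const { return impl->sizes; }
};

// Plays the role of the redispatched resize_ kernel: it may swap in a new
// impl, destroying the old one together with the metadata any outstanding
// view still points at.
void fake_resize(FakeTensor& t, std::vector<long> new_sizes) {
  t.impl = std::make_shared<FakeImpl>(FakeImpl{std::move(new_sizes)});
}

int main() {
  FakeTensor t{std::make_shared<FakeImpl>(FakeImpl{{2, 3}})};

  // BUGGY shape: keep a reference to the old metadata across the call that
  // can free it, then compare through the stale reference afterwards. If the
  // old impl was the last owner, that is the heap-use-after-free ASAN flags
  // in the report above (left commented out to keep this program well defined).
  // const std::vector<long>& stale = t.sizes();
  // fake_resize(t, {4, 5});
  // bool changed = (stale != t.sizes());          // undefined behaviour

  // SAFER shape: take an owning copy of the metadata first, then let the
  // call replace the impl, then compare against the copy.
  std::vector<long> before = t.sizes();            // deep copy
  fake_resize(t, {4, 5});
  bool changed = (before != t.sizes());
  std::printf("sizes changed: %s\n", changed ? "yes" : "no");
  return 0;
}
```

The usual remedy for this class of bug is exactly the safer shape above: copy the metadata into an owning container (or re-read it from the tensor) instead of holding a non-owning view across a call that can replace the underlying storage.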
alugorey pushed a commit to alugorey/pytorch that referenced this pull request on May 17, 2023
arguments() returns vector member of object returned by schema() call. When object returned by schema() call is destroyed, the vector is deallocated as well, it's lifetime isn't extended. This issue detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN. <details> <summary>ASAN output</summary> ``` ==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8 READ of size 8 at 0x60d0005a5790 thread T0 #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-i bm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821 #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617 ROCm#3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604 ROCm#4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138 ROCm#5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is _method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybin d11/include/pybind11/pybind11.h:249 ROCm#6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is _method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/incl ude/pybind11/pybind11.h:224 ROCm#7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929 ROCm#8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543 ROCm#9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#14 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#17 0x3ffab102b67 in 
_PyEval_Vector Python/ceval.c:5065 ROCm#18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142 ROCm#20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#25 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#34 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#41 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#50 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#59 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267 ROCm#67 0x3ffaaf8ac65 in _PyObject_Call 
Objects/call.c:290 ROCm#68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#80 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#87 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198 ROCm#89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#102 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#111 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#119 
0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305 ROCm#120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#128 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#137 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#152 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#159 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#170 0x3ffab102b67 in _PyEval_Vector 
Python/ceval.c:5065 ROCm#171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#174 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#183 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#194 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 ROCm#201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317 ROCm#202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943 ROCm#203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#209 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181 ROCm#211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53 ROCm#216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#218 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#219 0x3ffab0ff779 in _PyEval_EvalFrameDefault 
Python/ceval.c:4181 ROCm#220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153 ROCm#224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431 ROCm#225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494 ROCm#226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215 ROCm#227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 ROCm#228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#229 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231 ROCm#231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#236 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114 ROCm#242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123 ROCm#243 0x3ffab105447 in call_function Python/ceval.c:5891 ROCm#244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213 ROCm#245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065 ROCm#247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342 ROCm#248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255 ROCm#249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290 0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8) freed by thread T0 here: #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145 previously allocated by thread T0 here: #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127 #2 0x3fff5849ecf ([stack]+0xb2ecf) SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > 
>::__normal_iterator(c10::Argument const* const&) Shadow bytes around the buggy address: 0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa 0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd 0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd 0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa 0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd =>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa 0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==1134126==ABORTING ``` Additional backtraces (not full): Allocation: ``` #0 __memset_z196 () at ../sysdeps/s390/memset-z900.S:144 #1 0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>, stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599 #2 0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039 ROCm#3 0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99 ROCm#4 0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127 ROCm#5 0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=..., __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464 ROCm#6 0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...) 
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98 ROCm#7 0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648 ROCm#8 0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342 ROCm#9 0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > ( this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409 ROCm#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862 ROCm#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878 ROCm#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769 ROCm#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...}) at /home/user/pytorch/aten/src/ATen/core/type.cpp:725 ROCm#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) 
at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383 ROCm#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781 ROCm#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613 ROCm#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604 ROCm#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138 ROCm#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249 ROCm#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...) 
at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224 ROCm#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>, args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929 ``` Deallocation: ``` #0 operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160 #1 0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020, __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145 #2 0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate ( __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496 ROCm#3 0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr ( this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74 ROCm#4 0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538 ROCm#5 0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184 ROCm#6 0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705 ROCm#7 0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154 ROCm#8 0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122 ROCm#9 0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142 ROCm#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142 ROCm#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348 ROCm#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168 ROCm#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705 ROCm#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360) at 
/usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154 ROCm#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122 ROCm#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613 ROCm#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604 ``` </details> Pull Request resolved: pytorch#101400 Approved by: https://github.com/zou3519
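The root cause here is a classic temporary-lifetime pitfall in C++: `toPyObject` ends up with a reference to the `arguments()` vector of a schema whose owner is a temporary produced by the `type()`/`schema()` calls, so the vector is freed while it is still being iterated; lifetime extension does not apply to a member reached through a function call on a temporary. Below is a minimal sketch of the pattern with hypothetical stand-in types (`Schema`, `Argument`, `make_schema`); it is not the real `c10::FunctionSchema` API nor the exact change made in pytorch#101400.

```
// Hypothetical stand-ins; not the real c10::FunctionSchema/Argument types.
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

struct Argument { std::string name; };

struct Schema {
  std::vector<Argument> args;
  // Like FunctionSchema::arguments(): returns a reference to a member.
  const std::vector<Argument>& arguments() const { return args; }
};

// Models an accessor that hands back a freshly created owner on every call,
// so the caller's temporary holds the only reference count.
std::shared_ptr<Schema> make_schema() {
  return std::make_shared<Schema>(Schema{{{"self"}, {"other"}}});
}

int main() {
  // BUGGY shape: the temporary shared_ptr<Schema> is destroyed at the end of
  // the full expression, and `args` then refers into freed storage; iterating
  // it later is the heap-use-after-free ASAN reports above.
  // const std::vector<Argument>& args = make_schema()->arguments();  // dangling

  // FIX: bind the owning handle to a named local so it outlives every use.
  std::shared_ptr<Schema> schema = make_schema();
  const std::vector<Argument>& args = schema->arguments();
  for (const Argument& a : args) {
    std::printf("arg: %s\n", a.name.c_str());
  }
  return 0;
}
```

Binding the owner to a named local, as in the fixed form, keeps the schema alive for the whole loop, which is the standard remedy for this pattern.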
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request on May 29, 2023
3 disabled functions are attempting out of bounds reads. Disable them until sleef library is fixed. <details> <summary>ASAN report</summary> ``` ================================================================= ==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930 READ of size 4 at 0x03ff70f54570 thread T0 #0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 ROCm#1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550 ROCm#2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021 ROCm#3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182 ROCm#4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __ vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991 ROCm#5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074 ROCm#6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)ROCm#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/ user/pytorch/aten/src/ATen/cpu/vml.h:71 ROCm#7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)ROCm#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)ROCm#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239 ROCm#8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71 ROCm#9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 ROCm#10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406 ROCm#11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(c onst at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/ c10/util/FunctionRef.h:43 ROCm#12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64 ROCm#13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pyt orch/aten/src/ATen/TensorIteratorInternal.h:52 ROCm#14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777 ROCm#15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) 
/home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749 ROCm#16 0x3ff5648e851 in for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorItera tor.h:421 ROCm#17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 ROCm#18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 ROCm#19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770 ROCm#20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out &) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158 ROCm#21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330 ROCm#22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307 ROCm#23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13 ROCm#24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463 ROCm#25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core /boxing/KernelFunction_impl.h:50 ROCm#26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core /boxing/KernelFunction_impl.h:103 ROCm#27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/s rc/ATen/core/dispatch/Dispatcher.h:639 ROCm#28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487 ROCm#29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215 ROCm#30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107 ROCm#31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953 ROCm#32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955 ROCm#33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543 ROCm#34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305 ROCm#35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317 ROCm#36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915 ROCm#37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277 ROCm#38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46 ROCm#39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065 ROCm#40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342 ROCm#41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255 ROCm#42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290 ROCm#43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317 ROCm#44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle 
const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/ torch/csrc/utils/python_dispatch.cpp:175 ROCm#45 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:: PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)ROCm#1}::operator()(c10::OperatorKernel*, c10::Op eratorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87 ROCm#46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:: PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)ROCm#1}::_FUN(c10::OperatorKernel*, c10::Operator Handle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86 ROCm#47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/b oxing/BoxedKernel_impl.h:41 ROCm#48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/cor e/boxing/KernelFunction_impl.h:43 ROCm#49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:6 91 ROCm#50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417 ROCm#51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421 ROCm#52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15 ROCm#53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c1 0::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61 ROCm#54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10:: IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111 ROCm#55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290 ROCm#56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-lin ux-gnu/11/include/g++-v11/bits/std_function.h:590 ROCm#57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) 
/home/user/pytorch/aten/src/ATen/core/stack.h:41
#58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
#59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
#60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#86 0x3ffa5feb289 in call_function Python/ceval.c:5891
#87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
#88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
#103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
#104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
#105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
#107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
#108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
#109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
#110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
#111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
#112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
#113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
#114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
#115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
#116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
#117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
#118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
#144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
#146 0x3ffa5e84f2d in callmethod Objects/call.c:557
#147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
#148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/pytorch/torch/csrc/utils/python_arg_parser.cpp:338
#149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
#150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
#151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
#152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
#153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
#154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
#155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
#156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
#157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
#160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
#168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
#169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#170 0x3ffa5feb289 in call_function Python/ceval.c:5891
#171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
#172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#177 0x3ffa5feb289 in call_function Python/ceval.c:5891
#178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
#179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
#207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
#215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
#216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
#217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
#218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
#226 0x3ffa5feb289 in call_function Python/ceval.c:5891
#227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
#228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
#248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
#249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
#250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
#251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
#252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#253 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
#254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
#255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267

0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648
SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2
Shadow bytes around the buggy address:
  0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9
  0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==2030580==ABORTING
```
</details>

It reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under address sanitizer on s390x.
See also: shibatch/sleef#464
Pull Request resolved: pytorch#102266
Approved by: https://github.com/malfet
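For readers unfamiliar with this class of ASan report: the trace above points at a vector gather (`vgather_vf_p_vi2`) reading just past the end of the global lookup table `Sleef_rempitabsp`. The snippet below is a minimal, hypothetical sketch of that failure mode, not sleef's actual code; the table name `kTable`, the size `kTableSize`, and the `gather` helper are all made up for illustration. It only shows how an unchecked gather index one element past a fixed-size global produces a global-buffer-overflow report of the same shape.

```cpp
// Hypothetical sketch (NOT sleef's code): an unchecked "gather" index that
// lands one element past the end of a fixed-size global lookup table.
#include <cstdio>

constexpr int kTableSize = 412;              // made-up table length
static const float kTable[kTableSize] = {};  // stand-in for a global table

// Scalar stand-in for a SIMD gather: reads kTable[idx[i]] for every lane,
// with no bounds check, just like a raw vector gather instruction.
static void gather(float* out, const int* idx, int lanes) {
  for (int i = 0; i < lanes; ++i) {
    out[i] = kTable[idx[i]];
  }
}

int main() {
  int idx[4] = {0, 1, 2, kTableSize};  // last lane indexes one past the end
  float out[4] = {};
  gather(out, idx, 4);                 // ASan flags the lane-3 read here
  std::printf("%f\n", out[3]);
  return 0;
}
```

Building this sketch with `g++ -fsanitize=address -g oob_gather.cpp && ./a.out` (file name is arbitrary) should abort with a global-buffer-overflow report analogous to the one above.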
Contains our latest PR.