Merge from upstream #196


Merged
merged 56 commits into from Sep 10, 2018

Conversation

iotamudelta

No description provided.

ezyang and others added 30 commits September 6, 2018 20:11
…11323)

Summary:
Pull Request resolved: pytorch#11323

If you do pass it this, you'll get a pointer to
UndefinedTensor; probably not what you want!

Reviewed By: Yangqing

Differential Revision: D9676205

fbshipit-source-id: 0bd3c22c2c40ac2958f95fc7a73b908af291cf22
Summary:
This actually ended up being a lot more involved than I thought. The basic
problem is that in some of our build environments, thread local state is not
supported. The correct way to test if this is the case is using the
(undocumented) CAFFE2_FB_LIMITED_MOBILE_CAPABILITY macro.

On mobile, OptionGuard is not available, and you have to do everything
by hand. There's a static_assert to check if you accidentally use
OptionGuard in this case and give you a better error message.

Pull Request resolved: pytorch#11244

Reviewed By: gchanan

Differential Revision: D9646190

fbshipit-source-id: cf4016f79b47705a96ee9b6142eb34c95abb2bd4
Summary:
Signed-off-by: Edward Z. Yang <[email protected]>
Pull Request resolved: pytorch#11361

Reviewed By: yf225

Differential Revision: D9696524

Pulled By: ezyang

fbshipit-source-id: f6801d6f4f34090d467b16810db9cf576d5d519b
Summary:
Pull Request resolved: pytorch#11308

Pull Request resolved: pytorch#11299

Reviewed By: xianjiec

Differential Revision: D9652844

fbshipit-source-id: 650d550317bfbed0c1f25ae7d74286cfc7c3ac70
…#11231)

Summary:
This adds an optional `expand=True` kwarg to the `distribution.enumerate_support()` method, to get a distribution's support without expanding the values over the distribution's `batch_shape`.
 - The default `expand=True` preserves the current behavior, whereas `expand=False` collapses the batch dimensions.

e.g.
```python
In [47]: d = dist.OneHotCategorical(torch.ones(3, 5) * 0.5)

In [48]: d.batch_shape
Out[48]: torch.Size([3])

In [49]: d.enumerate_support()
Out[49]:
tensor([[[1., 0., 0., 0., 0.],
         [1., 0., 0., 0., 0.],
         [1., 0., 0., 0., 0.]],

        [[0., 1., 0., 0., 0.],
         [0., 1., 0., 0., 0.],
         [0., 1., 0., 0., 0.]],

        [[0., 0., 1., 0., 0.],
         [0., 0., 1., 0., 0.],
         [0., 0., 1., 0., 0.]],

        [[0., 0., 0., 1., 0.],
         [0., 0., 0., 1., 0.],
         [0., 0., 0., 1., 0.]],

        [[0., 0., 0., 0., 1.],
         [0., 0., 0., 0., 1.],
         [0., 0., 0., 0., 1.]]])

In [50]: d.enumerate_support().shape
Out[50]: torch.Size([5, 3, 5])

In [51]: d.enumerate_support(expand=False)
Out[51]:
tensor([[[1., 0., 0., 0., 0.]],

        [[0., 1., 0., 0., 0.]],

        [[0., 0., 1., 0., 0.]],

        [[0., 0., 0., 1., 0.]],

        [[0., 0., 0., 0., 1.]]])

In [52]: d.enumerate_support(expand=False).shape
Out[52]: torch.Size([5, 1, 5])
```

**Motivation:**
 - Currently `enumerate_support` builds up tensors of size `support + batch_shape + event_shape`, but the values are *repeated* over the `batch_shape` (adding little in the way of information). This can lead to expensive matrix operations over large tensors when `batch_shape` is large (see the example above), often leading to OOM issues. We use `expand=False` in Pyro for message passing inference, e.g. when enumerating over the state space in a Hidden Markov Model. This creates sparse tensors that capture the Markov dependence and allows for the possibility of using optimized matrix operations over these sparse tensors. `expand=True`, on the other hand, will create tensors that scale exponentially in size with the length of the Markov chain.
 - We have been using this in our [patch](https://github.com/uber/pyro/blob/dev/pyro/distributions/torch.py) of `torch.distributions` in Pyro. The interface has been stable, and it is already being used in a few Pyro algorithms. We think that this is more broadly applicable and will be of interest to the larger distributions community.
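The shape difference between the two modes can be sketched without torch. A minimal pure-Python illustration (the helper name `enumerate_support_shape` is hypothetical, not part of `torch.distributions`):

```python
# Sketch of the output shapes produced by enumerate_support in the two modes.
# support_size: number of enumerated values; batch_shape/event_shape: tuples.
def enumerate_support_shape(support_size, batch_shape, event_shape, expand=True):
    if expand:
        # values repeated over every batch dimension (current default)
        return (support_size,) + batch_shape + event_shape
    # batch dimensions collapsed to 1 so the values can broadcast later
    return (support_size,) + (1,) * len(batch_shape) + event_shape

# Matches the OneHotCategorical example above: batch_shape (3,), event_shape (5,)
print(enumerate_support_shape(5, (3,), (5,)))                # (5, 3, 5)
print(enumerate_support_shape(5, (3,), (5,), expand=False))  # (5, 1, 5)
```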

cc. apaszke, fritzo, alicanb
Pull Request resolved: pytorch#11231

Differential Revision: D9696290

Pulled By: soumith

fbshipit-source-id: c556f8ff374092e8366897ebe3f3b349538d9318
Summary:
After submitting PR pytorch#9726, PR pytorch#10581 created a different CUDAEvent class. The CUDAEvent proposed in pytorch#9726 was similar to the c10d::CUDAEvent class with additional testing and functionality. In particular, it was movable but not copyable. The CUDAEvent created by pytorch#10581 is refcounted and copyable. This PR retains the refcounting of the latter PR while fixing several bugs, adding tests, and extending the functionality to support testing and usage like in PR pytorch#8354. In particular, this PR:

- Adds set_device() to CUDAContext
- Adds three CUDAEvent tests to stream_test.cpp
- Fixes three bugs:
- Refcounting was broken. Destroying any of the RAIIs holding a particular CUDAEvent would destroy the event UNLESS it was the last RAII (the check was backwards).
- Moving an event would cause a segfault.
- Events were not destroyed on the device they were created on. See PR pytorch#9415 (pietern)
- Adds the happened() and recordOnce() functions
- Changes the record() functions to not be const
- Adds additional assertions to verify correctness

This PR does not:

- Make c10d use the ATen CUDAEvent (this is appropriate for a separate PR)

Whether events should be refcounted is an interesting question. It adds some atomic operations and makes event creation eager. Making events movable but not copyable (like the c10d events) avoids these costs and allows events to be lazily constructed. Lazy construction is preferable when working with containers (like std::array or std::vector) and because the event's device can be set automatically to the first stream it's recorded on. With eager construction the user is required to understand that events have a device and acquire the device of the stream the event will be recorded on upfront. This can be seen here:

https://github.com/pytorch/pytorch/blob/542aadd9a7609892e207c1e15de08a975b697752/aten/src/ATen/native/cudnn/RNN.cpp#L1130-L1132

and that file is the only one which currently uses the ATen CUDAEvent.

Refcounting does allow single writer multi-reader scenarios, although these scenarios can also be supported by providing indirect access to the underlying CUDAEvent. I believe all current and planned usage scenarios do not require refcounting, and if desired I can update this PR to remove refcounting and make the ATen event movable but not copyable like the c10d event. I think not refcounting is preferable because it can improve performance, ease usability, and simplify the code (as seen with two of the above bugs).
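The refcounting bug described above is easy to illustrate: the underlying event must be destroyed only when the LAST holder releases it, and the broken check had this backwards. A pure-Python sketch of the corrected semantics (names are illustrative, not the actual ATen code):

```python
# Toy refcounted handle: destruction happens only when the count hits zero.
class SharedEvent:
    def __init__(self, log):
        self.refcount = 1
        self.log = log  # records side effects so we can observe destruction

    def retain(self):
        self.refcount += 1

    def release(self):
        self.refcount -= 1
        if self.refcount == 0:  # correct: free only for the last holder
            self.log.append("destroyed")

log = []
event = SharedEvent(log)
event.retain()   # a second RAII owner appears
event.release()  # first owner gone: the event must survive
assert log == []
event.release()  # last owner gone: now it is destroyed
assert log == ["destroyed"]
```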

I have decided to separate this from PR pytorch#8354 since while it's required for PR pytorch#8354 the changes are, clearly, of independent interest. PR pytorch#8354 has a new dependency on this one, however. I am closing PR pytorch#9726 in favor of this PR.

apaszke ezyang pietern
Pull Request resolved: pytorch#11293

Differential Revision: D9665836

Pulled By: soumith

fbshipit-source-id: a1513fa4f9761e2f304d126e402f6b6950e1c1d2
Summary:
Fixes some minor grammar issues in the code base.

PS: I was actually looking for the following one but couldn't find it via grepping in this repo:

![screen shot 2018-09-06 at 3 27 39 pm](https://user-images.githubusercontent.com/5618407/45184280-1e16a980-b1ec-11e8-9cb1-87a96738bdd1.png)

Any idea in which file this issue is raised?
Pull Request resolved: pytorch#11344

Differential Revision: D9696454

Pulled By: soumith

fbshipit-source-id: 8ffe494b1bf1efb0e35563381d9da2e1e8032a3c
Summary:
- Added a note to the doc string for `svd`.
Pull Request resolved: pytorch#11194

Differential Revision: D9683250

Pulled By: soumith

fbshipit-source-id: 2d2c120be346122afa333629c0516a5c9dbb406f
… copy (pytorch#11351)

Summary:
Pull Request resolved: pytorch#11351

When partitions == 1 (InputSize() == OutputSize()), LengthsPartition becomes just a copy.

Reviewed By: aazzolini

Differential Revision: D9693409

fbshipit-source-id: a9ea034d227af357b661477ab779a71600f58f58
Summary:
Pull Request resolved: pytorch#11250

```
codemod -d . --extensions cc,cpp,cu,cuh,h getMaybeVariableType getType
```

Reviewed By: gchanan

Differential Revision: D9648830

fbshipit-source-id: 6b2ac2b1c265ae47722390e6e7f106653077d851
Summary:
Pull Request resolved: pytorch#11270

Still need to deduplicate this with caffe2/core/registry.h,
but this will be a bit tricky because the current formulation
of the macro is namespace sensitive (i.e., the macro for classes
defined in at:: namespace won't work if you call from caffe2::
namespace).

Reviewed By: gchanan

Differential Revision: D9654871

fbshipit-source-id: 2207d1f2cc6d50bd41bf64ce0eb0b8523b05d9d9
Summary:
Pull Request resolved: pytorch#11273

This one might strike you as a bit surprising, but it's necessary
to expose this interface in ATen/core, because we need to be
able to get a true Variable type from Variable tensors, and
to do that we need to go through the hooks interface.

Reviewed By: gchanan

Differential Revision: D9656548

fbshipit-source-id: 28bb5aee6ac304e8cd5fa1e4c65452c336647161
Summary:
This is so that TensorImpl does not have to depend on Tensor.
Pull Request resolved: pytorch#11337

Differential Revision: D9684421

Pulled By: gchanan

fbshipit-source-id: d2af93420ca6d493429c251cfe5a34e9289c4484
Summary:
Add the gpu kernel version.

The parallelism I went with performs poorly when there is a large number of vectors that are all short, as I don't allocate the thread pool to wrap in that case.

Test Plan
---------
```
python -m unittest test_torch.TestTorch.test_pdist_{empty,scipy} test_nn.TestNN.test_pdist{,_zeros,_empty_row,_empty_col,_cpu_gradgrad_unimplemented,_cuda_gradgrad_unimplemented} test_jit.TestJitGenerated.test_nn_pdist
```

Current performance specs are a little underwhelming, I'm in the process of debugging.

size | torch | torch cuda | scipy
-----|-------|------------|------
16 x 16 | 9.13 µs ± 3.55 µs | 9.86 µs ± 81.5 ns | 15.8 µs ± 1.2 µs
16 x 1024 | 15 µs ± 224 ns | 9.48 µs ± 88.7 ns | 88.7 µs ± 8.83 µs
1024 x 16 | 852 µs ± 6.03 µs | 7.84 ms ± 6.22 µs | 4.7 ms ± 166 µs
1024 x 1024 | 34.1 ms ± 803 µs | 11.5 ms ± 6.24 µs | 273 ms ± 6.7 ms
2048 x 2048 | 261 ms ± 3.5 ms | 77.5 ms ± 41.5 µs | 2.5 s ± 97.6 ms
4096 x 4096 | 2.37 s ± 154 ms | 636 ms ± 2.97 µs | 25.9 s ± 394 ms
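For reference, the semantics being benchmarked can be stated in a few lines of pure Python: `pdist` over an n x d matrix returns the n*(n-1)/2 condensed pairwise p-norm distances between rows. A sketch (reference semantics only, not the kernel):

```python
# Condensed pairwise p-norm distances over the rows of a matrix,
# in the same row-major pair order (i < j) that pdist uses.
def pdist_ref(rows, p=2.0):
    out = []
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            out.append(sum(abs(a - b) ** p
                           for a, b in zip(rows[i], rows[j])) ** (1.0 / p))
    return out

print(pdist_ref([[0.0, 0.0], [3.0, 4.0], [0.0, 0.0]]))  # [5.0, 0.0, 5.0]
```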
Pull Request resolved: pytorch#11102

Differential Revision: D9697305

Pulled By: erikbrinkman

fbshipit-source-id: 2b4f4b816c02b3715a85d8db3f4e77479d19bb99
Summary:
also a missing space in fft error message
Pull Request resolved: pytorch#11320

Differential Revision: D9676012

Pulled By: SsnL

fbshipit-source-id: a636e5fce042198510c8e456fa51fde714da8348
…#11152)

Summary:
This PR cleans up the `at::Tensor` class by removing all methods that start with an underscore in favor of functions in the `at::` namespace. This greatly cleans up the `Tensor` class and makes it clearer what is the public and non-public API.

For this I changed `native_functions.yaml` and `Declarations.cwrap` to make all underscore methods `variant: function` (or add such a statement to begin with), and then fixed all code locations using the underscore methods.

ezyang colesbury gchanan
Pull Request resolved: pytorch#11152

Differential Revision: D9683607

Pulled By: goldsborough

fbshipit-source-id: 97f869f788fa56639c05a439e2a33be49f10f543
Summary:
Pull Request resolved: pytorch#11336

Move `context_base.h` header to `ATen/core` and the implementations are in `caffe2/core/context_base.cc`

Reviewed By: ezyang

Differential Revision: D9670493

fbshipit-source-id: ce5bf2b3b4c80e9b62819f4332ce68af82720055
…eation (pytorch#11377)

Summary:
Closes pytorch#9963
Pull Request resolved: pytorch#11377

Differential Revision: D9701824

Pulled By: soumith

fbshipit-source-id: 89c5448fd90ece1b365dc42f775b6b0c73ce790c
Summary:
fix typo
Pull Request resolved: pytorch#11370

Differential Revision: D9701777

Pulled By: soumith

fbshipit-source-id: 9f3986cf30ae0491e79ca4933c675a99d6078982
Summary:
The next function I'm moving to C++ is `sync_params`. It is stacked on top of pytorch#9729, so some changes will go away when it lands and I rebase.

I also split code into a `.h` and `.cpp` file for better code organization.

pietern apaszke
Pull Request resolved: pytorch#9805

Differential Revision: D9688604

Pulled By: goldsborough

fbshipit-source-id: 4467104d3f9e2354425503b9e4edbd59603e20a8
Summary:
Pull Request resolved: pytorch#11382

We found this cudnn bug in S163230 that causes accuracy loss. We fix this in D9601217, but due to the reimplementation of spatialBN it's overwritten. Let's land this fix again.

Reviewed By: kuttas

Differential Revision: D9702347

fbshipit-source-id: 11547e9edaf7b2ba7f4aa7263ffb4f0281bbf078
Summary:
Add a barrier() to wait for all PG created before destroy
Pull Request resolved: pytorch#11391

Differential Revision: D9727383

Pulled By: teng-li

fbshipit-source-id: 689d62c978e642b68f4949dcf29982e34869ada4
Summary: Pull Request resolved: pytorch#11366

Differential Revision: D9723305

Pulled By: wanchaol

fbshipit-source-id: 9e7e2e7e68cb4919610bccfbf76fa33b647f6eb7
Summary:
In addition to documentation, this cleans up a few error message formats.
It also adds infra to find which operators are supported by the JIT automatically, which is then used in the generation of the docs.

The wording and formatting of the docs is not yet polished, but having this will allow our document writers to make faster progress.

Followup PRs will polish the docs and fix formatting issues.
Pull Request resolved: pytorch#11357

Differential Revision: D9721277

Pulled By: zdevito

fbshipit-source-id: 153a0d5be1efb314511bcfc0cec48643d78ea48b
Summary:
1. Remove cudnn* symbols from C++ docs
2. Fix code examples for `nn::Module` and `jit::compile`
3. Document Dropout
Pull Request resolved: pytorch#11347

Differential Revision: D9716751

Pulled By: goldsborough

fbshipit-source-id: e0566cec35848335cac3eb9196cb244bb0c8fa45
Summary: Pull Request resolved: pytorch#11393

Differential Revision: D9725444

Pulled By: SsnL

fbshipit-source-id: b1607d986ab93e64b0b0ff9e8f10d9e3f6e2160e
Summary:
Continuing pjh5's work to remove FULL_CAFFE2 flag completely.

With these changes you'll be able to also do something like

```
NO_TEST=1 python setup.py build_deps
```
and this will skip building tests in caffe2, aten, and c10d. By default the tests are built.

cc mingzhe09088 Yangqing
Pull Request resolved: pytorch#11321

Reviewed By: mingzhe09088

Differential Revision: D9694950

Pulled By: orionr

fbshipit-source-id: ff5c4937a23d1a263378a196a5eda0cba98af0a8
Summary:
This seems to be causing different versions of OpenMPI being picked up
by different parts of the build. Not a good practice to include absolute
paths anyway, so let's try removing it.
Pull Request resolved: pytorch#11386

Reviewed By: teng-li

Differential Revision: D9724349

Pulled By: pietern

fbshipit-source-id: 3dfef91c81f2e97e5125284aff9e7e98f8761917
Summary:
~~This PR fixes pytorch#8525 by renaming `split_with_sizes` to `split` so that 2 `aten::split` ops are
generated (previously `aten::split(self, int, int)` and `aten::split_with_sizes(self, int[], int)` were generated)~~

~~`split_with_sizes` was made in PR pytorch#5443, but I don't see a reason for it to have
a different name than `split` rather than just overload `split`.~~

This PR fixes pytorch#8525 by adding `register_special_ops.cpp` to mirror Python dispatching from `split` to `split` and `split_with_sizes` in [tensor.py](https://github.com/pytorch/pytorch/blob/master/torch/tensor.py#L279).

It also fixes pytorch#8520 by adding an `int[]` wherever it sees `torch.Size`

In a follow up PR this could also be used to fix some of the other `unknown builtin op` test errors.
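The dispatch rule being mirrored is simple: Python's `Tensor.split` forwards to one of two ops depending on whether the argument is an int or a list of sizes, and `register_special_ops.cpp` teaches the JIT the same rule. A pure-Python sketch with toy stand-ins for the two ops (list slicing instead of tensors):

```python
# Stand-in for aten::split(self, int, int): equal chunks of a given size.
def split_by_size(seq, size):
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Stand-in for aten::split_with_sizes(self, int[], int): explicit sizes.
def split_with_sizes(seq, sizes):
    out, start = [], 0
    for s in sizes:
        out.append(seq[start:start + s])
        start += s
    return out

# The dispatch mirrored from tensor.py: pick the overload by argument type.
def split(seq, split_size_or_sections):
    if isinstance(split_size_or_sections, int):
        return split_by_size(seq, split_size_or_sections)
    return split_with_sizes(seq, split_size_or_sections)

assert split([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert split([1, 2, 3, 4, 5], [1, 4]) == [[1], [2, 3, 4, 5]]
```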
Pull Request resolved: pytorch#11051

Differential Revision: D9582443

Pulled By: driazati

fbshipit-source-id: d27201f85937d72e45e851eaa1460dd3dd1b61a9
Summary:
Pull Request resolved: pytorch#11392

Fix igios build

Reviewed By: houseroad

Differential Revision: D9720833

fbshipit-source-id: 33acc3c658c22addd4bad142433824076233e901
goldsborough and others added 25 commits September 7, 2018 16:56
Summary:
Moves the code for the complex registration code into an out-of-line C++ extension to de-noise the test_cpp_extensions.py file. Let's keep it nice and tidy so we can point our users at it for usage examples.

ezyang
Pull Request resolved: pytorch#11397

Differential Revision: D9725335

Pulled By: goldsborough

fbshipit-source-id: 290618f2ee711b1895cdb8f05276034dfe315c6d
Summary:
This is mainly to pick up the change google/googletest@20074be to avoid polluting the CMAKE_DEBUG_POSTFIX variable. cc orionr .
Pull Request resolved: pytorch#11388

Reviewed By: orionr

Differential Revision: D9720931

Pulled By: Yangqing

fbshipit-source-id: 18a60d0409e74316f74d364f4fe16bf0d0198413
…ease-1.8.1

Differential Revision:
D9720931

Original commit changeset: 18a60d0409e7

fbshipit-source-id: a05dcba71277eb4f8ac38886f307d6cf6e6955a9
Summary:
Pull Request resolved: pytorch#11247

Previously, the default for a declaration in native_functions.yaml
was ['function', 'method'], i.e., generate both a method and
function for every binding.  We now believe this is inappropriate:
the majority of new kernels added to PyTorch should live as
free functions, NOT methods.  Thus, we change the default accordingly.

I also took the opportunity to de-method some "internal" functions
that had a leading underscore.  While, strictly speaking, this is a
BC breaking change, I believe it is highly unlikely anyone was using
these directly.

Reviewed By: yf225

Differential Revision: D9648570

fbshipit-source-id: 8b94647b824e0899d6d18aa5585aaedc9d9957d2
Summary:
Currently the gradient is copied into .grad if it is None. This PR aims to remove the copy when it is not absolutely needed.

It is generally an improvement of speed and memory usage. And here is a case it may help a lot:
Normally, people do optimizer.zero_grad() every minibatch before backward. It will translate into a memset, and later a point-wise add.
When there is some large weight in the network, one optimization people can always do is set parameter.grad to None instead of zero_grad. This will remove memset and change point-wise add to a memcpy.
Here is result running following script on V100 GPU. It is 100 iterations of forward/backward/zero_grad on single 1-billion word benchmark size embedding.
`Zero grad: 2.123847723007202`
`None grad: 1.3342866897583008`

With the backend change of this PR, the unnecessary memcpy is removed, thus further speed up is achieved.
`Zero grad: 2.124978542327881`
`None grad: 0.4396955966949463`

[benchmark.txt](https://github.com/pytorch/pytorch/files/2341800/benchmark.txt)
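The idea behind the speedup can be sketched in pure Python: with `.grad` set to `None`, the first backward pass can hand the freshly computed gradient buffer to the parameter directly, instead of a memset-to-zero followed by a pointwise add (and, with this PR, without the extra copy). Toy classes below, not the actual autograd machinery:

```python
# Toy accumulator: grad=None lets the first gradient "steal" the buffer.
class Param:
    def __init__(self):
        self.grad = None
        self.adds = 0  # counts pointwise-add work, for illustration

    def accumulate(self, new_grad):
        if self.grad is None:
            self.grad = new_grad  # take the buffer: no memset, no copy
        else:
            self.adds += 1        # stand-in for the pointwise add
            self.grad = [a + b for a, b in zip(self.grad, new_grad)]

p = Param()
p.accumulate([1.0, 2.0])  # first pass after grad=None: zero extra work
p.accumulate([0.5, 0.5])  # later passes still accumulate as usual
assert p.grad == [1.5, 2.5] and p.adds == 1
```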

Some details on the code change:
.detach() is used because we need to get rid of new_grad being a view without copy data. This should be safe in first-order only mode.
The data needs to be contiguous, otherwise `grad_variable.data() += new_grad.data();` below will fail.
Only the last variable that has reference to the temp gradient will grab its buffer.

ngimel, mcarilli  and mruberry helped on finalizing this PR.
Pull Request resolved: pytorch#11165

Differential Revision: D9728874

Pulled By: soumith

fbshipit-source-id: b8fb822a2dff6e812bbddd215d8e384534b2fd78
Summary: Pull Request resolved: pytorch#11126

Differential Revision: D9727689

Pulled By: jamesr66a

fbshipit-source-id: f142257a2fba27d86844bf33084174f1f68a8ca5
Summary:
This change removes the skips for the existing send/recv tests in the backwards compatibility layer.
Pull Request resolved: pytorch#11387

Reviewed By: teng-li

Differential Revision: D9729330

Pulled By: pietern

fbshipit-source-id: f8899219a94d806386d03e9ef53bff622d8658a3
Summary:
to 300 seconds to be safe. There used to be no timeout in THD.
Pull Request resolved: pytorch#11409

Differential Revision: D9731709

Pulled By: teng-li

fbshipit-source-id: 0ce011dcca507cbf063176ad4995405c77dd0cdd
Summary:
cc gchanan apaszke
Pull Request resolved: pytorch#11040

Differential Revision: D9565728

Pulled By: SsnL

fbshipit-source-id: eb5be9609f30c88f52746fa7e13ad71e2856648e
…rch#11274)

Summary:
Pull Request resolved: pytorch#11274

We don't want to put all of Context into ATen/core, but one
particular part cannot be avoided: the type registry, because
implementations of TensorMethods will need to get a Type,
and then do a virtual call on it.

I needed to do a little bit of (temporary) footwork to get this
in without also moving Type, because unique_ptr<Type> expects
to be able to see the destructor of Type (but it's forward declared
right now).  So instead I put the destructor as an explicit functor.  We
can get rid of this once Type actually moves in ATen/core

Reviewed By: cpuhrsch

Differential Revision: D9657449

fbshipit-source-id: 940931493bf4f1f6a8dad03f34633cacdd63dd0b
…ch#11331)

Summary:
Pull Request resolved: pytorch#11331

In the previous commit, we added a bare-bones LegacyTypeDispatch in ATen/core.
This is not sufficient for the use cases we need: we not only need to be able to
get a Type, but we also need to be able to *initialize* the Types if it's the first time
we have retrieved a CPU/CUDA/Complex type. I hemmed and hawed about how
to do this; the strategy this PR takes is to introduce a new "hooks" interface
specifically for initializing CPU/CUDA/Complex (which still lives in Context). We then
move all "user-friendly" functions to LegacyTypeDispatch.

Here were some other options which I considered, but don't work:
- Assume that Type is already initialized, because we only intend to call Type
  from Tensor methods, where we already have a Tensor. This does not work
  because Caffe2 created tensors will not have gone through the standard
  Type codepath, and will have skipped initialization.
- Move CUDAHooks and ComplexHooks to ATen/core. Besides being sucky,
  this isn't even a complete fix, because I still need to initialize CPU hooks
  (so you *still* need another hooks interface).
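The hooks strategy described above amounts to a registry that runs a backend-specific initialization hook the first time a Type is fetched, so tensors created outside the standard codepath cannot skip initialization. A pure-Python sketch (names are illustrative, not the actual ATen interfaces):

```python
# Toy registry with lazy, once-only per-backend initialization hooks.
class LazyTypeRegistry:
    def __init__(self):
        self._types = {}
        self._init_hooks = {}
        self._initialized = set()

    def register_hook(self, backend, hook):
        self._init_hooks[backend] = hook

    def get_type(self, backend):
        if backend not in self._initialized:
            self._init_hooks[backend](self._types)  # lazy: first use only
            self._initialized.add(backend)
        return self._types[backend]

registry = LazyTypeRegistry()
calls = []

def cpu_hook(types):
    calls.append("init")
    types["CPU"] = "CPUType"

registry.register_hook("CPU", cpu_hook)
assert registry.get_type("CPU") == "CPUType"
assert registry.get_type("CPU") == "CPUType"
assert calls == ["init"]  # the hook ran exactly once
```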

Reviewed By: cpuhrsch

Differential Revision: D9666612

fbshipit-source-id: ac7004b230044b67d13caa81fdfaf3c6ab915e3f
Summary:
vishwakftw Your patch needed some updates because the default native function dispatches changed from `[function, method]` to `[function]`. The CI was run before that change happened so it still shows green, but the internal test caught it.

I did some changes when rebasing and updating so I didn't just force push to your branch. Let's see if this passes CI and internal test. If it does, let me know if you want me to force push to your branch or use this PR instead.

Note to reviewers: patch was already approved at pytorch#10068 .

cc yf225
Pull Request resolved: pytorch#11421

Differential Revision: D9733407

Pulled By: SsnL

fbshipit-source-id: cf2ed293bb9942dcc5158934ff4def2f63252599
Summary:
Pull Request resolved: pytorch#11420

Surprisingly tricky!  Here are the major pieces:

- We grow an even yet more ludicrous macro
  AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_EXCEPT_COMPLEX_HALF
  which does what it says on the tin.  This is because I was
  too lazy to figure out how to define the necessary conversions
  in and out of ComplexHalf without triggering ambiguity problems.
  It doesn't seem to be as simple as just Half.  Leave it for
  when someone actually wants this.

- Scalar now can hold std::complex<double>.  Internally, it is
  stored as double[2] because nvcc chokes on a non-POD type
  inside a union.

- overflow() checking is generalized to work with complex.
  When converting *to* std::complex<T>, all we need to do is check
  for overflow against T.  When converting *from* complex, we
  must check (1) if To is not complex, that imag() == 0
  and (2) for overflow componentwise.

- convert() is generalized to work with complex<->real conversions.
  Complex to real drops the imaginary component; we rely on
  overflow checking to tell if this actually loses fidelity. To get
  the specializations and overloads to work out, we introduce
  a new Converter class that actually is specializable.

- Complex scalars convert into Python complex numbers

- This probably fixes complex tensor printing, but there is no way
  to test this right now.
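The complex-to-real conversion rules above (check `imag() == 0`, then check overflow componentwise) can be illustrated in pure Python. The constant and function name below are assumptions for the sketch, not the actual C++ helpers:

```python
# Converting FROM a complex value to a real type must (1) reject a nonzero
# imaginary part and (2) check the real component against the target range.
FLOAT32_MAX = 3.4028234663852886e38  # largest finite float32

def complex_to_real(z, max_val=FLOAT32_MAX):
    if z.imag != 0:
        raise OverflowError("nonzero imaginary part loses fidelity")
    if abs(z.real) > max_val:
        raise OverflowError("real component overflows target type")
    return z.real

assert complex_to_real(complex(2.0, 0.0)) == 2.0
for bad in (complex(1.0, 1.0), complex(1e39, 0.0)):
    try:
        complex_to_real(bad)
        raise AssertionError("expected OverflowError")
    except OverflowError:
        pass  # rejected as expected
```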

Signed-off-by: Edward Z. Yang <[email protected]>

Reviewed By: cpuhrsch

Differential Revision: D9697878

Pulled By: ezyang

fbshipit-source-id: 181519e56bbab67ed1e5b49c691b873e124d7946
Summary:
Fixes the issue discussed in pytorch#10838. `hidden_size` should be the last dimension regardless if we're in ONNX or PyTorch.
Pull Request resolved: pytorch#11368

Differential Revision: D9734814

Pulled By: soumith

fbshipit-source-id: 7f69947a029964e092c7b88d1d79b188a417bf5f
Summary:
If pybind11 is built with CMake and installed, we should use its config file instead of the Findpybind11 shipped with caffe2.
Pull Request resolved: pytorch#11423

Differential Revision: D9735557

Pulled By: ezyang

fbshipit-source-id: 28a39e579fa045060aa1a716e5fd7dbcf7b89569
Summary:
Added AVX optimizations for pdist using Vec256. This brings single threaded performance up to speed with scipy, but the current implementation greatly hurts performance without AVX enabled. Is there a way to special case out AVX on dispatch and call the non Vec256 code? Or is the way I used Vec256 completely wrong?

Single threaded comparison to scipy
============================

This is the time to compute the pdist of a 2048 x 2048 float matrix with only one thread for various values of p between torch and scipy. p = 3 is the code path for arbitrary p, and so is much slower than the other values.

p | torch | scipy
-----|-----------|------
0 | 6.27 s ± 393 ms | 7.23 s ± 498 ms
1 | 5.49 s ± 201 ms | 43.4 s ± 1.09 s
2 | 5.74 s ± 474 ms | 53.8 s ± 3.52 s
∞ | 5.59 s ± 292 ms | 47.4 s ± 2.03 s
3 | really slow | gave up

Result by AVX support
================

This is the time to compute the distance and gradient of a 2048 x 2048 float matrix with all threads by AVX support. `before` is the old code, `default` is no AVX support, etc. Interestingly the AVX optimizations provided a great benefit over the old unoptimized code, but drastically hurt performance when compiled without AVX optimizations. p = 3 is the code path for arbitrary p, and so is much slower than the other values.

Results for p = 0
----------------

avx | dist | grad
----|------|-----
before | 514 ms ± 87.5 ms | 191 µs ± 35 µs
default | 3.47 s ± 183 ms | 201 µs ± 24.6 µs
avx | 123 ms ± 18.2 ms | 281 µs ± 130 µs
avx2 | 103 ms ± 11.4 ms | 216 µs ± 74.4 µs

Results for p = 1
----------------

avx | dist | grad
----|------|-----
before | 426 ms ± 35 ms | 6.21 s ± 187 ms
default | 2.6 s ± 123 ms | 5.62 s ± 273 ms
avx | 104 ms ± 6.37 ms | 833 ms ± 44.3 ms
avx2 | 106 ms ± 3.59 ms | 924 ms ± 86.2 ms

Results for p = 2
-----------------

avx | dist | grad
----|------|-----
before | 425 ms ± 45.4 ms | 6.31 s ± 125 ms
default | 3.04 s ± 187 ms | 3.55 s ± 242 ms
avx | 110 ms ± 3.66 ms | 896 ms ± 21.8 ms
avx2 | 113 ms ± 4.68 ms | 934 ms ± 25.2 ms

Results for p = ∞
------------------

avx | dist | grad
----|------|-----
before | 501 ms ± 39.5 ms | 6.64 s ± 321 ms
default | 2.15 s ± 92.9 ms | 8.43 s ± 355 ms
avx | 104 ms ± 5.52 ms | 835 ms ± 36.7 ms
avx2 | 100 ms ± 3.41 ms | 864 ms ± 67 ms

Results for p = 3
-----------------

avx | dist | grad
----|------|-----
before | 22.6 s ± 413 ms | 11.1 s ± 242 ms
default | 24.9 s ± 1 s | 11.2 s ± 293 ms
avx | 2.69 s ± 148 ms | 5.63 s ± 88.4 ms
avx2 | 2.48 s ± 31.8 ms | 5.61 s ± 114 ms
Pull Request resolved: pytorch#11230

Differential Revision: D9735503

Pulled By: erikbrinkman

fbshipit-source-id: a9da619249e4ca2625b39ca1ca7f5543c3086bfb
Summary:
This PR is just a copy-paste of the upstream FindCUDA.cmake. Since cublas_device is deprecated in CUDA >= 9.2, this change is necessary for build.

Related: https://gitlab.kitware.com/cmake/cmake/merge_requests/2298
Pull Request resolved: pytorch#11406

Differential Revision: D9735563

Pulled By: ezyang

fbshipit-source-id: c74d86ced7cc485cb2233f9066ce23e921832c30
Summary:
This PR parallelizes `masked_fill` on CPU; currently it runs sequentially.

the following script is used to benchmark and verify this PR. On Xeon skylake 8180 (2 sockets * 28 cores),
 it runs `4.20` sec without the PR and `0.11` sec with the PR.

```python
import torch
import random
from time import time

size = 10 * 1000 * 1000
count = 100

def test_masked_fill():
    dst = torch.randn(size)
    dst_ = dst.clone()
    mask = torch.rand(size).mul(2).floor().byte()
    val = random.random()

    tstart = time()
    for i in range(count):
        dst.masked_fill_(mask, val)
    tend = time()
    print("masked_fill_: %f" % (tend-tstart))

    for i in range(size):
        if mask[i]:
            if dst[i] != val:
                print("fail")
        else:
            if dst[i] != dst_[i]:
                print("fail1")
    print("test_masked_fill: PASS")

test_masked_fill()
```
Pull Request resolved: pytorch#11359

Differential Revision: D9735578

Pulled By: ezyang

fbshipit-source-id: d437ad7c6dace1910d0c18d6d9ede80efb44fae4
Summary:
A recent build regression is that we need a system GoogleTest for builds to pass.

This was because, when building with Gloo, Gloo tries to build its own tests, which look for a system gtest [here](https://github.com/facebookincubator/gloo/blob/master/cmake/Dependencies.cmake#L72-L80) (because we're not using the full CMake build and making it aware of third_party/GoogleTest, but instead building it isolated using tools/build_pytorch_libs.sh).

Traditionally, we didn't ask Gloo to build its tests, but because we added `-DBUILD_TEST=1` by default to all builds (in refactoring variable names), we accidentally started asking Gloo to build its tests.

This PR overrides the Gloo flags and asks it to not build tests (like it used to)
Pull Request resolved: pytorch#11431

Differential Revision: D9736387

Pulled By: soumith

fbshipit-source-id: 59e84edae780123b793bdaea5fd9ac46156cd0af
Summary:
as discussed with ezyang and slayton58 , this might be a nice convenience to be able to use code in extensions just as in ATen.

also split off `tracing_state.h` from `torch/jit/tracer.h` (fixes pytorch#11204) to be able to use the utility functions

pytorchbot  it's not a jit patch per se.
Pull Request resolved: pytorch#11425

Differential Revision: D9735556

Pulled By: ezyang

fbshipit-source-id: 466c92bbdb1d7d7a970eba1c26b7583fe9756139
…torch#11435)

Summary:
Same issue as pytorch#10379, just in a different place (adding this resolves it)
Pull Request resolved: pytorch#11435

Differential Revision: D9736396

Pulled By: soumith

fbshipit-source-id: 220a52b8009fc2bee9313c5a091443c68f85f62f
Summary:
`Process.start()` actually takes some time as it needs to start a
process and pass the arguments over via a pipe. Therefore, we
only add a worker to the self.workers list after it has started, so
that we do not call `.join()` if the program dies before it starts,
and `__del__` tries to join it but will get:
    AssertionError: can only join a started process.

Example trace when such error happens:
```py
[unrelated]
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 500, in __iter__
    return _DataLoaderIter(self)
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 292, in __init__
    w.start()
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/popen_fork.py", line 70, in _launch
    self.pid = os.fork()
KeyboardInterrupt
Exception ignored in: <function _DataLoaderIter.__del__ at 0x7fa704d5aa60>
Traceback (most recent call last):
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 398, in __del__
    self._shutdown_workers()
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 392, in _shutdown_workers
    w.join()
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/process.py", line 139, in join
    assert self._popen is not None, 'can only join a started process'
AssertionError: can only join a started process
```

No test, because this is hard to trigger reliably.
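The pattern described above can be sketched as follows. This is a minimal illustration, not the actual DataLoader code; `_worker_loop`, `start_workers`, and `shutdown_workers` are hypothetical names, and the key point is that a process is appended to the worker list only after `start()` returns, so that cleanup never joins an unstarted process:

```python
import multiprocessing as mp


def _worker_loop(i):
    # Placeholder for the real per-worker loop.
    pass


def start_workers(num_workers):
    workers = []
    for i in range(num_workers):
        w = mp.Process(target=_worker_loop, args=(i,))
        w.start()
        # Append only after start() has returned: if the program dies
        # before this point, shutdown code never sees (and never tries
        # to .join()) a process that was never started.
        workers.append(w)
    return workers


def shutdown_workers(workers):
    for w in workers:
        w.join()  # safe: every process in the list was started


if __name__ == "__main__":
    ws = start_workers(2)
    shutdown_workers(ws)
```

If the append happened before `start()`, an exception raised inside `start()` (or a `KeyboardInterrupt` during the fork, as in the trace above) would leave an unstarted process in the list, and joining it raises the `AssertionError` shown.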
Pull Request resolved: pytorch#11432

Reviewed By: ezyang

Differential Revision: D9735430

Pulled By: SsnL

fbshipit-source-id: a8912d9bb4063f210d6236267b178173810e2351
Summary: Pull Request resolved: pytorch#11444

Differential Revision: D9736992

Pulled By: SsnL

fbshipit-source-id: bf5320e878c6ef71468f3e2aa12ce304b92d45ca
Summary: Pull Request resolved: pytorch#11440

Differential Revision: D9736565

Pulled By: gchanan

fbshipit-source-id: 1e66f54f1c87084f37c0b014030f0d6d2f8dfaee
@iotamudelta iotamudelta requested a review from ezyang as a code owner September 10, 2018 16:23
@iotamudelta iotamudelta merged commit 6868c98 into ROCm:master Sep 10, 2018
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request May 15, 2023
When a tensor is resized, a reference array to its sizes may become invalid. Make a copy in advance.

<details>
<summary>ASAN report</summary>

```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
    #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
    ROCm#1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
    ROCm#2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
    ROCm#3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
    ROCm#4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-
v11/bits/stl_algobase.h:1162
    ROCm#5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/
stl_algobase.h:1211
    ROCm#6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s
tl_algobase.h:1219
    ROCm#7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg
obase.h:1556
    ROCm#8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
    ROCm#9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
    ROCm#10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab
leTypeManual.cpp:408
    ROCm#11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    ROCm#12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy
mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp
atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    ROCm#13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    ROCm#14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    ROCm#15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten
sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
    ROCm#16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c
10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
    ROCm#17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
    ROCm#18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
    ROCm#19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto
rch/torch/csrc/autograd/VariableTypeManual.cpp:243
    ROCm#20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10
::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu
nctionIntoFunctor.h:13
    ROCm#21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor
 const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c
10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor
.h:480
    ROCm#22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    ROCm#23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    ROCm#24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co
nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at
en/src/ATen/core/dispatch/Dispatcher.h:639
    ROCm#25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>,
c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    ROCm#26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137
    ROCm#27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452
    ROCm#28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us
er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417
    ROCm#29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419
    ROCm#30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344
    ROCm#31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#33 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    ROCm#35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    ROCm#39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    ROCm#47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#56 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    ROCm#58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#65 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    ROCm#67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#72 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    ROCm#74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#81 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    ROCm#83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#90 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    ROCm#92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267
    ROCm#98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    ROCm#106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    ROCm#108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    ROCm#109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    ROCm#110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#111 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    ROCm#113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#118 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    ROCm#120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    ROCm#124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#133 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#142 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    ROCm#148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    ROCm#150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305
    ROCm#151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#159 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    ROCm#161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#168 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    ROCm#170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    ROCm#174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#183 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    ROCm#185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#190 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    ROCm#192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    ROCm#196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#205 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#214 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    ROCm#220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    ROCm#222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    ROCm#223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    ROCm#224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#225 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    ROCm#227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    ROCm#231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    ROCm#232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    ROCm#233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    ROCm#234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#240 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    ROCm#247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#249 0x3ffa2e05447 in call_function Python/ceval.c:5891
    ROCm#250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    ROCm#253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    ROCm#255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    ROCm#257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215

0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800)
freed by thread T0 here:
    #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    ROCm#1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75

previously allocated by thread T0 here:
    #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    ROCm#1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul
l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S
torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498
    ROCm#2 0x3ff76f79e17  (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17)

SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const
Shadow bytes around the buggy address:
  0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa
  0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1115867==ABORTING
```
</details>

<details>
<summary>Additional backtraces (not full)</summary>

Memory deallocation:
```
#0  operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75
#2  0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291
#3  0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370
#4  0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80
#5  0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90
#6  0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173
#7  0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (
    this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#8  c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=...,
    args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
#9  0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96
#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401
```

Memory access:
```
#0  c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215
#1  0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69
#2  0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177
#3  0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
#4  0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
#5  0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
#6  0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
#7  0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188
#8  0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341
#9  0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c
10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
 > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt
>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<
c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar
rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern
el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr
ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&,
c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...)
    at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
```
</details>
Pull Request resolved: pytorch#101064
Approved by: https://github.com/Skylion007, https://github.com/albanD
alugorey pushed a commit to alugorey/pytorch that referenced this pull request May 17, 2023
arguments() returns a reference to a vector member of the object returned by the schema() call.
When the object returned by schema() is destroyed, the vector is deallocated along with it;
its lifetime is not extended.

This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN.

<details>
<summary>ASAN output</summary>

```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
    #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-i
bm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
    #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
    #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
    #3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
    #4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
    #5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybin
d11/include/pybind11/pybind11.h:249
    #6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/incl
ude/pybind11/pybind11.h:224
    #7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
    #8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
    #9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #14 0x3ffab105447 in call_function Python/ceval.c:5891
    #15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
    #20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #25 0x3ffab105447 in call_function Python/ceval.c:5891
    #26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #34 0x3ffab105447 in call_function Python/ceval.c:5891
    #35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #41 0x3ffab105447 in call_function Python/ceval.c:5891
    #42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #50 0x3ffab105447 in call_function Python/ceval.c:5891
    #51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #59 0x3ffab105447 in call_function Python/ceval.c:5891
    #60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
    #67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #80 0x3ffab105447 in call_function Python/ceval.c:5891
    #81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #87 0x3ffab105447 in call_function Python/ceval.c:5891
    #88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #102 0x3ffab105447 in call_function Python/ceval.c:5891
    #103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #111 0x3ffab105447 in call_function Python/ceval.c:5891
    #112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
    #120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #128 0x3ffab105447 in call_function Python/ceval.c:5891
    #129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #137 0x3ffab105447 in call_function Python/ceval.c:5891
    #138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #152 0x3ffab105447 in call_function Python/ceval.c:5891
    #153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #159 0x3ffab105447 in call_function Python/ceval.c:5891
    #160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #174 0x3ffab105447 in call_function Python/ceval.c:5891
    #175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #183 0x3ffab105447 in call_function Python/ceval.c:5891
    #184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #194 0x3ffab105447 in call_function Python/ceval.c:5891
    #195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #209 0x3ffab105447 in call_function Python/ceval.c:5891
    #210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #218 0x3ffab105447 in call_function Python/ceval.c:5891
    #219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #229 0x3ffab105447 in call_function Python/ceval.c:5891
    #230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #236 0x3ffab105447 in call_function Python/ceval.c:5891
    #237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #243 0x3ffab105447 in call_function Python/ceval.c:5891
    #244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290

0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
    #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145

previously allocated by thread T0 here:
    #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
    #2 0x3fff5849ecf  ([stack]+0xb2ecf)

SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
  0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
  0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1134126==ABORTING
```

Additional backtraces (not full):
Allocation:
```
#0  __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1  0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
    stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2  0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
    at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
#3  0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
#4 0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
#5 0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
#6 0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
#7 0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
#8 0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
#9 0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
    this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
    __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
    field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
    at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
    args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:
```
#0  operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
    __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2  0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
    __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3  0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
    this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
#4 0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
#5 0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
#6 0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#7 0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#8 0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#9 0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
Pull Request resolved: pytorch#101400
Approved by: https://github.com/zou3519
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request May 29, 2023
Three disabled functions attempt out-of-bounds reads. Disable them until the sleef library is fixed.

<details>
<summary>ASAN report</summary>

```
=================================================================
==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930
READ of size 4 at 0x03ff70f54570 thread T0
    #0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129
    #1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550
    #2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021
    #3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182
    #4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991
    #5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074
    #6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
    #7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239
    #8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
    #9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406
    #11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(const at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/c10/util/FunctionRef.h:43
    #12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64
    #13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pytorch/aten/src/ATen/TensorIteratorInternal.h:52
    #14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777
    #15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749
    #16 0x3ff5648e851 in for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorIterator.h:421
    #17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out&) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158
    #21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330
    #22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307
    #23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    #24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463
    #25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:103
    #27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:639
    #28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    #29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215
    #30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107
    #31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953
    #32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955
    #33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    #34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    #37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
    #45 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
    #46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
    #47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
    #48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
    #49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
    #50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
    #51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
    #52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
    #53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
    #54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
    #55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
    #56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
    #57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
    #58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
    #59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
    #60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    #61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    #62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    #63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    #64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    #65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    #66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    #67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    #70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    #75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    #76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    #77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    #86 0x3ffa5feb289 in call_function Python/ceval.c:5891
    #87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
    #103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
    #104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
    #105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
    #106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
    #107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
    #108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
    #109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
    #110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
    #111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
    #112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
    #113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
    #114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
    #115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
    #116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11:
:kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
    ROCm#117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
 pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
    ROCm#118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    ROCm#119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::v
oid_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    ROCm#120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /
home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    ROCm#121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    ROCm#122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    ROCm#123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    ROCm#124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    ROCm#125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    ROCm#126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    ROCm#128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    ROCm#133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    ROCm#135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    ROCm#136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
    ROCm#144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
    ROCm#146 0x3ffa5e84f2d in callmethod Objects/call.c:557
    ROCm#147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
    ROCm#148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/py
torch/torch/csrc/utils/python_arg_parser.cpp:338
    ROCm#149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
 pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
    ROCm#150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    ROCm#151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::v
oid_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    ROCm#152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /
home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    ROCm#153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    ROCm#154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    ROCm#155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    ROCm#156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    ROCm#157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    ROCm#158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    ROCm#160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    ROCm#165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    ROCm#167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
    ROCm#168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    ROCm#169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#170 0x3ffa5feb289 in call_function Python/ceval.c:5891
    ROCm#171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    ROCm#172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#177 0x3ffa5feb289 in call_function Python/ceval.c:5891
    ROCm#178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
    ROCm#179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    ROCm#183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    ROCm#191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    ROCm#199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    ROCm#207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    ROCm#215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    ROCm#216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    ROCm#217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    ROCm#218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    ROCm#225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    ROCm#226 0x3ffa5feb289 in call_function Python/ceval.c:5891
    ROCm#227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    ROCm#228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    ROCm#232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    ROCm#240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    ROCm#248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    ROCm#249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    ROCm#250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    ROCm#251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    ROCm#252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    ROCm#253 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    ROCm#254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    ROCm#255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267

0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648
SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2
Shadow bytes around the buggy address:
  0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9
  0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==2030580==ABORTING
```
</details>

The issue reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under AddressSanitizer on s390x.

See also: shibatch/sleef#464

Pull Request resolved: pytorch#102266
Approved by: https://github.com/malfet