
Conversation

csarofeen
Owner

Combination of:
#12
#14
rebased on master.

ezyang and others added 30 commits April 9, 2020 14:56
…pytorch#36222)

Summary:
Pull Request resolved: pytorch#36222

Reland of pytorch#35706, with fixes to code analyzer.

It is extremely common to define implementations of operators at a
specific dispatch key, so we add an overload to impl specifically for
this case.  I then delete most uses of torch::dispatch.

dispatch_autograd call sites can't make use of this overload.  So
instead the new preferred way to specify something as autograd is to
pass kAutograd as the dispatch key (short form, analogous to kCPU/kCUDA
which we support today).

I flip flopped about whether or not kAutograd should have the type
DispatchKey or some other type (to help better encapsulate the
DispatchKey enum); this is more direct and I can't think of any
BC problems from this usage.
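A rough sketch of what the new overload looks like at a call site (the op and kernel names here are made up, and the spellings follow the torch/library.h-style API, which may differ slightly from the exact entry points at the time of this commit):

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

namespace {

at::Tensor myop_cpu(const at::Tensor& self) { return self.clone(); }
at::Tensor myop_autograd(const at::Tensor& self) { return myop_cpu(self); }

TORCH_LIBRARY(myops, m) {
  m.def("myop(Tensor self) -> Tensor");
  // Old style: wrap the kernel in torch::dispatch to pick the key.
  //   m.impl("myop", torch::dispatch(c10::DispatchKey::CPU, &myop_cpu));
  // New overload: pass the dispatch key directly.
  m.impl("myop", c10::DispatchKey::CPU, &myop_cpu);
  // kAutograd is the short form for the autograd dispatch key,
  // analogous to kCPU/kCUDA.
  m.impl("myop", c10::kAutograd, &myop_autograd);
}

} // namespace
```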

Some other reorganization I did:
- I renamed all of the worker functions in op_registration to have
  a leading underscore and made them private, just to make it more
  clear what the public versus private API were (the private API
  shouldn't be used by users because it doesn't come with && overloads)
  Note that this means I needed to adjust the regex in the
  code analyzer, because
- In a few places where I was touching lines already, I replaced
  full DispatchKey typed out enums with shorter kFoo names, similar
  to kAutograd but I didn't publish these globally.
- Code analyzer now prints a unified diff, and in the other order
  (because I tend to think of the diff as reporting how the /new/ result
  is different)

Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS

Differential Revision: D20929256

Pulled By: ezyang

fbshipit-source-id: c69b803d2b3a1a8aff70e14da33d3adec5239f13
…s. (pytorch#36223)

Summary:
Pull Request resolved: pytorch#36223

Previously pytorch#35714

There are a lot of unboxed-only defs.  We're committed to removing
them at the end of the half, but as I am about to do a lot of porting
to the new API, let's get them into a form where they're easy to
remove.  This is a new overload, impl_UNBOXED, that passes
the function pointer straight to CppFunction::makeUnboxedOnly.

I don't attempt to make the _UNBOXED API complete; in particular,
catchall declarations don't get this sugar (as there are very few
of them).
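A hedged sketch of the sugar (hypothetical op and kernel names; impl_UNBOXED and CppFunction::makeUnboxedOnly were later removed from the API, so treat this purely as an illustration of the shape being introduced here):

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

namespace {

at::Tensor legacy_kernel(const at::Tensor& self) { return self; }

TORCH_LIBRARY(myops, m) {
  m.def("legacy_op(Tensor self) -> Tensor");
}

TORCH_LIBRARY_IMPL(myops, CPU, m) {
  // Spelled out by hand: wrap the unboxed-only function pointer explicitly.
  //   m.impl("legacy_op", torch::CppFunction::makeUnboxedOnly(&legacy_kernel));
  // With the sugar: the pointer is forwarded straight to makeUnboxedOnly.
  m.impl_UNBOXED("legacy_op", &legacy_kernel);
}

} // namespace
```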

To get some coverage of _UNBOXED API for code analysis, I switched
one of our unboxed tests to be an impl rather than a def.  This
shouldn't materially affect coverage.

Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS

Differential Revision: D20929259

Pulled By: ezyang

fbshipit-source-id: 72d2061b6c8a6afbcd392b47f53ade18de2f9184
Summary:
Pull Request resolved: pytorch#36319

On the way to resolving pytorch#35216.
This is a fix for just the master branch but once this goes in,
I'll send a cherry-pick to release/1.5

The problem is that we were not calling `format` on a string that had
templates (e.g., '{input}', '{dim}'). This change makes it so that we
call format on the entire docstring for `torch.min`.

Test Plan:
- The `torch.max` docs are OK:
https://pytorch.org/docs/master/torch.html#torch.max and don't need
changing.
- `torch.min` docs, before this change: see second screenshot in pytorch#35216.
- after this change: <Insert link here on github>

![image](https://user-images.githubusercontent.com/5652049/78921702-4e2acc00-7a63-11ea-9ea0-89636ff6fb0a.png)

Differential Revision: D20946702

Pulled By: zou3519

fbshipit-source-id: a1a28707e41136a9bb170c8a4191786cf037a0c2
Summary:
Pull Request resolved: pytorch#36019

Once the autograd engine is finished with a GraphTask it would call
`markCompleted` on the Future. This could trigger callbacks on the Future that
could throw exceptions.

If one of the callbacks did throw an exception, we would call setErrorIfNeeded,
which would be no-op since the Future is already marked as completed. This
would effectively mean we would be swallowing exceptions. To avoid this, we do
the following:

1) Rethrow the exception in `mark_graph_task_completed`.
2) In `setErrorIfNeeded`, log the error if we are ignoring it (a sketch of this pattern follows below).
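A minimal sketch of that pattern with illustrative stand-in types (not the actual engine code):

```cpp
#include <functional>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// Stand-in for a future that runs user callbacks on completion.
struct ToyFuture {
  bool completed = false;
  std::vector<std::function<void()>> callbacks;

  void markCompleted() {
    completed = true;
    for (auto& cb : callbacks) {
      cb();  // a callback may throw
    }
  }

  void setErrorIfNeeded(const std::string& msg) {
    if (completed) {
      // The future already completed, so the error cannot be recorded;
      // log it instead of silently dropping it.
      std::cerr << "ignoring error on completed future: " << msg << "\n";
      return;
    }
    // ... otherwise record the error on the future ...
  }
};

void mark_graph_task_completed(ToyFuture& fut) {
  try {
    fut.markCompleted();
  } catch (const std::exception& e) {
    fut.setErrorIfNeeded(e.what());
    throw;  // rethrow so the caller still sees the callback failure
  }
}

int main() {
  ToyFuture fut;
  fut.callbacks.push_back([] { throw std::runtime_error("callback failed"); });
  try {
    mark_graph_task_completed(fut);
  } catch (const std::exception& e) {
    std::cout << "caught: " << e.what() << "\n";
  }
}
```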
ghstack-source-id: 101607329

Test Plan: Verified appropriate logging.

Differential Revision: D20854806

fbshipit-source-id: 76bdf403cfd6d92f730ca1483ad5dba355f83e58
…pytorch#36324)

Summary: Pull Request resolved: pytorch#36324

Test Plan:
.

Imported from OSS

Differential Revision: D20948046

fbshipit-source-id: 2dd8f0c6fbe8fd84293420b97592dc586d25def9
…rch#36280)

Summary: Pull Request resolved: pytorch#36280

Test Plan:
.

Imported from OSS

Differential Revision: D20948352

fbshipit-source-id: 92188806b9c129458ebb2cdc47599427e3b6e216
…h#35279)

Summary:
Pull Request resolved: pytorch#35279

Support benchmarking self-contained PyTorch JIT models.
* By specifying flag `--no_inputs=True`, the binary supports benchmarking self-contained torchscript model (model runs without inputs, `model.forward()`)
* This allows moving data preparation part outside of this binary.

Reviewed By: kimishpatel

Differential Revision: D20585639

fbshipit-source-id: c28e50503534c90023c1430479d26f1c1ce740b1
Summary: Pull Request resolved: pytorch#35370

Test Plan: Imported from OSS

Differential Revision: D20641812

Pulled By: z-a-f

fbshipit-source-id: 42bb1ed96d6b6e0a5da6e693d02ff616c33d9ef6
Summary:
Reviving this PR pytorch#35401 eellison. I believe after the profiled graph executor fix the test failures are handled.
Pull Request resolved: pytorch#36243

Differential Revision: D20950623

Pulled By: eellison

fbshipit-source-id: 5fbee426d1a098d84d5938540d45ce00828299be
Summary:
AnyType wasn't listed as a mutable type, so the assertion triggered (yay!). Also update the `isMutableTypeInternal(from) != isMutableTypeInternal` logic to be more encompassing.
Pull Request resolved: pytorch#36178

Differential Revision: D20922356

Pulled By: eellison

fbshipit-source-id: 7060a62b18e98dc24b6004a66225c196aadb566e
Summary:
Pull Request resolved: pytorch#36354

TestQuantizeScript is split into
- TestQuantizeScriptJitPasses
- TestQuantizeScriptPTSQOps (post training static quantization ops)

Test Plan:
.

Imported from OSS

Differential Revision: D20956731

fbshipit-source-id: 860cd24ea3f49450126ce2d872894492bdc822d8
Summary:
This PR addresses Issue pytorch#36279.
Previously, printing of complex tensors would sometimes yield extra spaces before the elements as shown below:
```
print(torch.tensor([[1 + 1.340j, 3 + 4j], [1.2 + 1.340j, 6.5 + 7j]], dtype=torch.complex64))
```
would yield
```
tensor([[(1.0000 + 1.3400j),
         (3.0000 + 4.0000j)],
        [(1.2000 + 1.3400j),
         (6.5000 + 7.0000j)]], dtype=torch.complex64)
```
This occurs primarily because when the max width for the element is being assigned, the formatter's max_width is calculated prior to truncating the float values. As a result, ```self.max_width``` would end up being much longer than the final length of the element string to be printed.

I address this by adding a boolean variable that checks if a complex tensor contains only ints and change the control flow for calculating ```self.max_width``` accordingly.

Here are some sample outputs of both float and complex tensors:

```
tensor([[0., 0.],
        [0., 0.]], dtype=torch.float64)

tensor([[(0.+0.j), (0.+0.j)],
        [(0.+0.j), (0.+0.j)]], dtype=torch.complex64)

tensor([1.2000, 1.3400], dtype=torch.float64)

tensor([(1.2000+1.3400j)], dtype=torch.complex64)

tensor([[(1.0000+1.3400j), (3.0000+4.0000j)],
        [(1.2000+1.3400j), (6.5000+7.0000j)]], dtype=torch.complex64)

tensor([1.0000, 2.0000, 3.0000, 4.5000])

tensor([(1.+2.j)], dtype=torch.complex64)
```

cc ezyang anjali411 dylanbespalko
Pull Request resolved: pytorch#36331

Differential Revision: D20955663

Pulled By: anjali411

fbshipit-source-id: c26a651eb5c9db6fcc315ad8d5c1bd9f4b4708f7
…pytorch#35997)

Summary:
Pull Request resolved: pytorch#35997

When the number of blocks is large enough, we already achieve
balanced SM allocation. But we should still keep the number of inputs
per thread large, because thread reduction is cheap.

Benchmark for Half on V100:
https://github.com/zasdfgbnm/things/blob/master/2020Q2/reduction-benchmark.ipynb

On large tensor, it is: 1.37ms vs 1.25ms

Test Plan: Imported from OSS

Differential Revision: D20927533

Pulled By: ngimel

fbshipit-source-id: 40df52e439cc1c01cda66c6195b600f301c5e984
…future (pytorch#36014)

Summary:
Pull Request resolved: pytorch#36014

Benchmark on RTX2080Ti: 2.13ms vs 1.88ms
https://github.com/zasdfgbnm/things/blob/master/2020Q2/reduction-benchmark-refactor.ipynb

Test Plan: Imported from OSS

Differential Revision: D20927535

Pulled By: ngimel

fbshipit-source-id: b65b749b58cebe0751e4ec7e1cf359543c401580
Summary: Pull Request resolved: pytorch#36352

Test Plan: Imported from OSS

Differential Revision: D20955781

Pulled By: z-a-f

fbshipit-source-id: 37fbcf329a6abcd9a367a73ad65ce543ed9ffe47
Summary:
GitHub commits:

facebook/fbthrift@54dc44b
facebook/litho@726a62c
facebook/proxygen@9557561
facebook/rocksdb@66a95f0
pytorch/FBGEMM@f1d3330

Test Plan: n/a

Reviewed By: zpao

fbshipit-source-id: 3585db200a5d84ba01b8cf90a3bd7bf003314345
Summary:
Pull Request resolved: pytorch#36093

Unwrap any tuples (including NamedTuples) in the module forward
function input list to be arglist.
1. Supports multiple tuple inputs, and traces their use through CallMethods and
TupleIndex
2. Does not unwrap inner use of other tuples that did not show up in the
original toplevel graph inputs

We work from the ScriptModule level instead of the Graph level because:
1. If the ScriptModule was previously called with the original set of inputs, the GraphExecutor caches the ExecutionPlan (specifically, ArgumentSpecCreator is derived from the Graph and type check the inputs passed in)
2. Since we are changing this graph's inputs, we clone the module and clear the GraphExecutor.

Since we work from ScriptModule level, we cannot take advantage of jit level syntactic sugar like run_pass(), so I jit exposed this as a cpp extension. Let me know if there are other ideas about this.

Test Plan:
buck test caffe2/torch/fb/model_transform:signature_translation_test
Todo: Verify use in bento

Untranslated graph:
```
> graph(%self : __torch__.test_jit.SparseNNWrapper,
>       %inputs.1 : NamedTuple(dense : Tensor, sparse : Dict(int, Tensor))):
>   %2 : __torch__.test_jit.SparseNN = prim::GetAttr[name="main_module"](%self)
>   %4 : Tensor = prim::CallMethod[name="forward"](%2, %inputs.1) # /data/users/ansha/fbsource/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py:12141:23
>   return (%4)
```

Translated graph:
```
> graph(%self : __torch__.test_jit.___torch_mangle_1.SparseNNWrapper,
>       %inputs.1_0 : Tensor,
>       %inputs.1_1 : Dict(int, Tensor)):
>   %2 : __torch__.test_jit.___torch_mangle_2.SparseNN = prim::GetAttr[name="main_module"](%self)
>   %3 : Tensor = prim::CallMethod[name="forward"](%2, %inputs.1_0, %inputs.1_1) # /data/users/ansha/fbsource/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py:12141:23
>   return (%3)
```

Reviewed By: houseroad

Differential Revision: D20313673

fbshipit-source-id: fddd07c9537dc8b6f480a14d697bea10ecc74470
Summary:
This was left off of pytorch#35741, but the max supported file format change
has been landed for several weeks, so this should be fine to land.
Pull Request resolved: pytorch#36085

Pulled By: driazati

Reviewed By: eellison

Differential Revision: D20875051

fbshipit-source-id: c3b84c85d791cb6f286a2ed38ca5cd1219b332b2
…6297)

Summary:
Pull Request resolved: pytorch#36297

Pull Request resolved: pytorch/FBGEMM#343

We moved FakeFP16 back to close source and kept `RoundToFloat16` function in "fbgemm/FbgemmConvert.h".

This is because FakeFP16 introduced dependency on MKL in the FBGEMM core. Also it doesn't seem to be needed for open source, as it is not used anywhere.

Test Plan: CI

Reviewed By: jspark1105

Differential Revision: D20937962

fbshipit-source-id: 9487a9fd2282b6df2f754c22bea36f2255a5c791
Summary:
GitHub commits:

pytorch/FBGEMM@20b6cf1

Test Plan: n/a

Reviewed By: zpao

fbshipit-source-id: f5a64d847512d25a138f666d25e03c58277327f0
…36366)

Summary:
Pull Request resolved: pytorch#36366

fix issues introduced in D20528758

Reviewed By: linbinyu

Differential Revision: D20959367

fbshipit-source-id: 920055a6782b9c6729177f7101f2f9eb3e40ebf8
…tive build (pytorch#35426)

Summary:
Pull Request resolved: pytorch#35426

Use selective build with the full set of operators (vs. manually registering each used op with a "_" prefix).

The lite interpreter relies on JIT operator dispatch. In the future we still need JIT operator dispatch to dispatch ops that are not registered in c10.
Currently the selective build covers the c10/aten dispatch in BUCK. There is JIT selective code-gen in OSS, but it has not been ported to BUCK yet.
This diff also ports that selective code-gen to BUCK.
* The selected op list is passed to gen_jit_dispatch.py.
* The list passed to gen_jit_dispatch is the top-level ops (USED_PT_OPS) only, because the selective c10/aten dispatch already registered other ops that are called from the top-level ops.

ghstack-source-id: 101885215

(Note: this ignores all push blocking failures!)

Test Plan:
1. In Python, run torch.jit.export_opnames(scripted_M_mod)
2. Append the operator names into fbcode/caffe2/pt_ops.bzl and the BUCK target.
3. Run
```
buck run xplat/caffe2/fb/lite_predictor:lite_predictor_bi -- --model=/home/myuan/temp/bi_pytext_0315.bc --input_dims "1,4" --input_type int64 --pytext_len=4
```
Should provide expected results.
In addition, the size of the generated code for JIT registration, for example ```register_aten_ops_0.cpp```, should be significantly reduced (from ~250 KB to ~80 KB). The non-selected op registration schemas are still kept, but the registration functor is replaced by ```DUMMY_OPERATION```.

Reviewed By: ljk53

Differential Revision: D20408831

fbshipit-source-id: ec75dd762c4613aeda3b2094f5dad11804dc9492
Summary:
Benchmark: (Debian 10, Release build, gcc 8.3, no turbo, Intel(R)
Xeon(R) E-2136 CPU @ 3.30GHz)

```python
import timeit
for op in ('gt', 'lt', 'ge', 'le', 'eq', 'ne'):
    for dtype in ('torch.float', 'torch.double', 'torch.int16',
                  'torch.int32', 'torch.int64'):
        for n, t in [(10_000, 100000),
                     (100_000, 10000)]:
            print(f'a.{op}_(b), numel() == {n} for {t} times, dtype={dtype}')
            print(timeit.timeit(
                f'a.{op}_(b)',
                setup=(f'import torch; '
                       f'a = torch.arange(1, {n}, dtype={dtype}); '
                       f'b = torch.arange({n}, 1, -1, dtype={dtype})'),
                number=t))
```

Before:

```
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.778998922000028
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.6359690249992127
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.double
1.0801493119997758
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.9360321379990637
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.7341018620008981
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.6345281440007966
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.7396387640001194
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.6429641230006382
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.7759611700003006
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.6672059659995284
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.7724312530008319
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.6392585769990546
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.7917451840003196
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.6455550159989798
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.739991647998977
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.6572993859990675
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.7627949479992822
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.6476544910001394
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.7965036850000615
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.6780715599998075
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.7653547080008138
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.6383065829995758
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.7895260240002244
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.6508346030004759
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.7409299750015634
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.6383492870008922
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.7620547579990671
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.6474270239996258
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.8070051169997896
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.6712598600006459
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.7627660060006747
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.6406353189995571
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.double
1.0826010620003217
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.9391552950000914
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.7427801039993938
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.6365172640016681
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.7679271510005492
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.6453389289999905
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.788032889000533
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.6708840760002204
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.float
1.078837263999958
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.9397531720005645
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.double
1.1031508050000411
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.9412319389994082
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.7509566959997755
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.638570957000411
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.7592877549996047
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.6458840529994632
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.7984061539991671
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.6776346309998189
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.7724407899986545
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.6581534130000364
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.8303323249983805
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.6954390920000151
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.745512373998281
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.6360954970004968
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.7569978400006221
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.6450422030011396
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.7889118379989668
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.6693385389989999
```

After:

```
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.2444220920006046
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.2031730359994981
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.35491806199934217
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.3905606850003096
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.16665379499863775
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.10095906300011848
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.21650469999985944
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.18737469400002738
a.gt_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.35481256200000644
a.gt_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.36696120199849247
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.21976138800164335
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.20275393200063263
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.3695997209997586
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.39441510399956314
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.15657078300137073
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.0992998069996247
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.20425128799979575
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.20352934599941364
a.lt_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.35883567900054913
a.lt_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.39059587599876977
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.21457727400047588
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.18836135499986995
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.35971907199927955
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.3688875009993353
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.1576009280015569
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.09524034199966991
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.2064543649994448
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.18726435600001423
a.ge_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.35351785300008487
a.ge_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.3680737989998306
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.2132134399998904
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.2140274829998816
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.36539215199991304
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.39128020300086064
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.15712150600120367
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.10149904400168452
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.2103407699996751
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.2134442910009966
a.le_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.35387034300038067
a.le_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.38917528399906587
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.2190484450002259
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.2030815980015177
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.3710030169986567
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.36419657899932645
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.15986497499943653
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.10145393699895067
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.21011781599918322
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.20121852699958254
a.eq_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.36681504499938455
a.eq_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.364472848999867
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.float
0.2290963309988001
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.float
0.21674784300012107
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.double
0.3829616689999966
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.double
0.39437660300063726
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.int16
0.1661020749997988
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.int16
0.10052955100036343
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.int32
0.21827425599985872
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.int32
0.21522501399886096
a.ne_(b), numel() == 10000 for 100000 times, dtype=torch.int64
0.37058242300008715
a.ne_(b), numel() == 100000 for 10000 times, dtype=torch.int64
0.39304063900090114
```
Pull Request resolved: pytorch#35117

Differential Revision: D20721181

Pulled By: pbelevich

fbshipit-source-id: 4e38cc0f42393483db91b2dc53ffc507f91ed904
Hao Lu and others added 21 commits April 13, 2020 21:41
Summary:
Pull Request resolved: pytorch#36468

Since `caffe2::Tensor` is now refcounted, enabling the copy constructor and the copy assignment operator should be fine.
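Roughly why that is safe (a generic refcounted-handle sketch, not the actual caffe2::Tensor internals): copying the handle only bumps a refcount, so copies are cheap and both handles alias the same storage.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Generic sketch: a tensor-like handle whose copies share one impl.
struct ToyImpl {
  std::vector<float> data;
};

struct ToyTensor {
  std::shared_ptr<ToyImpl> impl;
  // The compiler-generated copy constructor / copy assignment just copy
  // the shared_ptr, i.e. bump a refcount; no data is duplicated.
};

int main() {
  ToyTensor a{std::make_shared<ToyImpl>(ToyImpl{{1.f, 2.f, 3.f}})};
  ToyTensor b = a;            // cheap copy: refcount bump only
  b.impl->data[0] = 42.f;     // visible through both handles
  assert(a.impl->data[0] == 42.f);
  assert(a.impl.use_count() == 2);
  return 0;
}
```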

Test Plan:
```
buck test mode/dev //caffe2/caffe2:caffe2_test_cpu -- TensorTest
```

AI/AF canaries with changes up to D20959214:

https://our.intern.facebook.com/intern/experiment_store/experiment/3298538636995/#commit1-commit2
https://our.intern.facebook.com/intern/experiment_store/experiment/2199027015376/#commit1-commit2

AI/AF canaries on this diff:
https://our.intern.facebook.com/intern/ads/canary/425960191574068914/
https://our.intern.facebook.com/intern/ads/canary/425960179835413033/

Reviewed By: yinghai

Differential Revision: D20985924

fbshipit-source-id: ead5f5ceff23d0adc06d598128de16a5533d767b
pytorch#36526)

Summary: Pull Request resolved: pytorch#36526

Test Plan: Imported from OSS

Differential Revision: D21004740

Pulled By: ZolotukhinM

fbshipit-source-id: 8ac8db0d4e31065e4fbd3e0cc27f15a15dcb141c
Summary:
Pull Request resolved: pytorch#36277

This PR introduces a flag to the tracer that guards risky behaviors
like adding a list/dict as the output of the tracer. Currently, to avoid
breaking BC for users, we throw a warning if the tracer output is a list, and
will throw an error when the tracer output is a dict to enforce using this
flag (next PR).

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
Test Plan: revert-hammer

Differential Revision:
D20936595

Original commit changeset: c2f3cc567761

fbshipit-source-id: 1fcaa0484377e1580c08cd89fd0fcbdeb3f73f11
Summary:
Pull Request resolved: pytorch#36529

DistAutogradContainer is a singleton for the entire process and has a
single lock that protects access to a map keyed by context_id. Performance
profiling showed that this lock is a potential bottleneck for training. As a
result, in this PR we make the following optimizations:

1) Shard the map into 256 buckets, each with its own lock, so that we hold
much finer-grained locks (see the sketch after this list).
2) sendReleaseContextRpc was being called under a lock; we moved this call
outside the lock.
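A minimal sketch of the sharding idea from item 1 (generic code, not the actual DistAutogradContainer): hash the context_id into one of N buckets, each guarded by its own mutex, so accesses to different contexts rarely contend.

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>

// Generic sharded map: one lock per bucket instead of one global lock.
template <typename Value, size_t kNumShards = 256>
class ShardedMap {
 public:
  void insert(int64_t key, Value value) {
    Shard& shard = getShard(key);
    std::lock_guard<std::mutex> guard(shard.lock);
    shard.map.emplace(key, std::move(value));
  }

  bool erase(int64_t key) {
    Shard& shard = getShard(key);
    std::lock_guard<std::mutex> guard(shard.lock);
    return shard.map.erase(key) > 0;
  }

 private:
  struct Shard {
    std::mutex lock;
    std::unordered_map<int64_t, Value> map;
  };

  // Only the lock of the bucket owning `key` is ever taken.
  Shard& getShard(int64_t key) {
    return shards_[static_cast<uint64_t>(key) % kNumShards];
  }

  Shard shards_[kNumShards];
};

int main() {
  ShardedMap<int> contexts;
  contexts.insert(/*context_id=*/42, /*value=*/1);
  contexts.erase(42);
  return 0;
}
```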
ghstack-source-id: 102085139

Test Plan: waitforbuildbot

Differential Revision: D21003934

fbshipit-source-id: 55f80dd317311bce0efd3ca8ca617d071297b5dc
Summary:
This kernel debug flag should help locate the issues we are observing on
some of the CI nodes.
Pull Request resolved: pytorch#36521

Differential Revision: D21010612

Pulled By: ezyang

fbshipit-source-id: d746e4eb0af832e770d2231bfee4154b6e703c19
…ch#36388)

Summary:
Pull Request resolved: pytorch#36388

(This makes me want to barf.)

Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS

Differential Revision: D20964195

Pulled By: ezyang

fbshipit-source-id: 3699a02b16060d79dae9890bafeaafad9ad9ae60
Summary:
Pull Request resolved: pytorch#36240

It's annoying, historical, and unnecessary (enum class is already
namespaced).  I did this codemod with:

```
git grep -l 'CPUTensorId' | xargs sed -i 's/CPUTensorId/CPU/g'
git grep -l 'CUDATensorId' | xargs sed -i 's/CUDATensorId/CUDA/g'
git grep -l 'VariableTensorId' | xargs sed -i 's/VariableTensorId/Autograd/g'
git grep -l 'HIPTensorId' | xargs sed -i 's/HIPTensorId/HIP/g'
git grep -l 'MSNPUTensorId' | xargs sed -i 's/MSNPUTensorId/MSNPU/g'
git grep -l 'XLATensorId' | xargs sed -i 's/XLATensorId/XLA/g'
git grep -l 'PrivateUse1_TensorId' | xargs sed -i 's/PrivateUse1_TensorId/PrivateUse1/g'
git grep -l 'PrivateUse2_TensorId' | xargs sed -i 's/PrivateUse2_TensorId/PrivateUse2/g'
git grep -l 'PrivateUse3_TensorId' | xargs sed -i 's/PrivateUse3_TensorId/PrivateUse3/g'
git grep -l 'AutocastTensorId' | xargs sed -i 's/AutocastTensorId/Autocast/g'
git grep -l '_PreAutogradTensorId' | xargs sed -i 's/_PreAutogradTensorId/_PreAutograd/g'
git grep -l 'TESTING_ONLY_GenericWrapperTensorId' | xargs sed -i 's/TESTING_ONLY_GenericWrapperTensorId/TESTING_ONLY_GenericWrapper/g'
git grep -l 'TESTING_ONLY_GenericModeTensorId' | xargs sed -i 's/TESTING_ONLY_GenericModeTensorId/TESTING_ONLY_GenericMode/g'
```

Then I did a git grep for remaining TensorId occurrences, and manually
killed those (mostly in codegen, and some docs that needed updating).

Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS

Differential Revision: D20929255

Pulled By: ezyang

fbshipit-source-id: dc371b6aa6e6ea7c0a5660137c14debde806a09d
Summary:
Pull Request resolved: pytorch#36551

Before, those ops were special-cased in the JIT codegen, but that blocks our unboxing refactoring.
Instead, make them regular prim ops.
ghstack-source-id: 102081858

Test Plan: waitforsandcastle

Differential Revision: D21009196

fbshipit-source-id: b90320fce589fc0553f17582b66a5a05d0fd32d1
…PackRecordsOp and UnPackRecordsOp (pytorch#36550)

Summary:
Pull Request resolved: pytorch#36550

Separate dataset_ops changes into a separate diff.

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:dataset_ops_test
```

AI/AF canary (tested with D20959214):
https://our.intern.facebook.com/intern/experiment_store/experiment/3298538636995/#commit1-commit2
https://our.intern.facebook.com/intern/experiment_store/experiment/2199027015376/#commit1-commit2

Reviewed By: yinghai

Differential Revision: D20988910

fbshipit-source-id: b37a7bfd131813e9472a5e2fa24d681d1ef19018
…the two possible states

Test Plan: revert-hammer

Differential Revision:
D20147487

Original commit changeset: 50ce10b56f2b

fbshipit-source-id: e69432dc3d03002516cd248c84cfea08531c81be
…h#34768)

Summary:
Fixes pytorch#973

Common failure scenario:
* DataLoader creates workers and communicates with them through SHMs
* Workers send back through an AF_UNIX socket file descriptors to SHMs containing data
* The limit of open files gets fully used
* An FD gets stripped from a socket message coming back from a worker, without the worker knowing this.
* This causes a `RuntimeError: received 0 items of ancdata` in the standard `multiprocessing` package
* The exception is not handled by PyTorch and so is presented to the users.

After this change the user will see

```
Traceback (most recent call last):
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 761, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/wbaranowski/miniconda3/envs/pytorch-cuda-dev/lib/python3.6/multiprocessing/queues.py", line 113, in get
    return _ForkingPickler.loads(res)
  File "/home/wbaranowski/git/Quansight/pytorch/torch/multiprocessing/reductions.py", line 294, in rebuild_storage_fd
    fd = df.detach()
  File "/home/wbaranowski/miniconda3/envs/pytorch-cuda-dev/lib/python3.6/multiprocessing/resource_sharer.py", line 58, in detach
    return reduction.recv_handle(conn)
  File "/home/wbaranowski/miniconda3/envs/pytorch-cuda-dev/lib/python3.6/multiprocessing/reduction.py", line 184, in recv_handle
    return recvfds(s, 1)[0]
  File "/home/wbaranowski/miniconda3/envs/pytorch-cuda-dev/lib/python3.6/multiprocessing/reduction.py", line 162, in recvfds
    len(ancdata))
RuntimeError: received 0 items of ancdata

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 787, in _try_get_data
    fs = [tempfile.NamedTemporaryFile() for i in range(10)]
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 787, in <listcomp>
    fs = [tempfile.NamedTemporaryFile() for i in range(10)]
  File "/home/wbaranowski/miniconda3/envs/pytorch-cuda-dev/lib/python3.6/tempfile.py", line 551, in NamedTemporaryFile
    (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
  File "/home/wbaranowski/miniconda3/envs/pytorch-cuda-dev/lib/python3.6/tempfile.py", line 262, in _mkstemp_inner
    fd = _os.open(file, flags, 0o600)
OSError: [Errno 24] Too many open files: '/tmp/tmpnx_f6v_f'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_shm_leak.py", line 56, in <module>
    worker_init_fn=worker_init_fn
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 861, in _next_data
    idx, data = self._get_data()
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 828, in _get_data
    success, data = self._try_get_data()
  File "/home/wbaranowski/git/Quansight/pytorch/torch/utils/data/dataloader.py", line 791, in _try_get_data
    "Too many open files. Communication with the"
RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using `ulimit -n` in the shell or change the sharing strategy by calling `torch.multiprocessing.set_sharing_strategy('file_system')` at the beginning of your code
```
Pull Request resolved: pytorch#34768

Differential Revision: D20538053

Pulled By: ezyang

fbshipit-source-id: be4425cf2fa02aff61619b2b829c153cb1a867cb
@csarofeen
Owner Author

Merged! pytorch#36435

@csarofeen csarofeen closed this Apr 17, 2020
@csarofeen csarofeen deleted the unroll_interface_rebase branch May 17, 2020 14:11
shmsong pushed a commit to shmsong/pytorch that referenced this pull request Jul 6, 2021
Summary:
Pull Request resolved: pytorch#60987

We were seeing deadlocks as follows during shutdown:

```
Thread 1 (LWP 2432101):
#0  0x00007efca470190b in __pause_nocancel () from /lib64/libc.so.6
#1  0x00007efca49de485 in __pthread_mutex_lock_full () from /lib64/libpthread.so.0
#2  0x00007ef91d4c42c6 in __cuda_CallJitEntryPoint () from /lib64/libnvidia-ptxjitcompiler.so.1
#3  0x00007efc651ac8f1 in ?? () from /lib64/libcuda.so
#4  0x00007efc651aee03 in ?? () from /lib64/libcuda.so
#5  0x00007efc64f76b84 in ?? () from /lib64/libcuda.so
#6  0x00007efc64f77f5d in ?? () from /lib64/libcuda.so
#7  0x00007efc64eac858 in ?? () from /lib64/libcuda.so
#8  0x00007efc64eacfbc in ?? () from /lib64/libcuda.so
#9  0x00007efc7810a924 in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#10 0x00007efc780fa2be in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#11 0x00007efc78111044 in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#12 0x00007efc7811580a in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#13 0x00007efc78115aa4 in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#14 0x00007efc781079ec in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#15 0x00007efc780e6a7a in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#16 0x00007efc7811cfa5 in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#17 0x00007efc777ea98c in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#18 0x00007efc777ebd80 in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#19 0x00007efc777ea2c9 in ?? () from /usr/local/cuda/lib64/libcublas.so.11
#20 0x00007efc778c2e2d in cublasDestroy_v2 () from /usr/local/cuda/lib64/libcublas.so.11
#21 0x00007efc51a3fb56 in std::_Sp_counted_ptr_inplace<at::cuda::(anonymous namespace)::DeviceThreadHandlePool<cublasContext*, &at::cuda::(anonymous namespace)::createCublasHandle, &at::cuda::(anonymous namespace)::destroyCublasHandle>, std::allocator<at::cuda::(anonymous namespace)::DeviceThreadHandlePool<cublasContext*, &at::cuda::(anonymous namespace)::createCublasHandle, &at::cuda::(anonymous namespace)::destroyCublasHandle> >, (__gnu_cxx::_Lock_policy)2>::_M_dispose() () from /data/users/pritam/pytorch/torch/lib/libtorch_cuda.so
#22 0x00007efc51a3fc5f in std::shared_ptr<at::cuda::(anonymous namespace)::DeviceThreadHandlePool<cublasContext*, &at::cuda::(anonymous namespace)::createCublasHandle, &at::cuda::(anonymous namespace)::destroyCublasHandle> >::~shared_ptr() () from /data/users/pritam/pytorch/torch/lib/libtorch_cuda.so
#23 0x00007efca4648b0c in __run_exit_handlers () from /lib64/libc.so.6
#24 0x00007efca4648c40 in exit () from /lib64/libc.so.6
#25 0x0000558c8852e5f9 in Py_Exit (sts=0) at /tmp/build/80754af9/python_1614362349910/work/Python/pylifecycle.c:2292
#26 0x0000558c8852e6a7 in handle_system_exit () at /tmp/build/80754af9/python_1614362349910/work/Python/pythonrun.c:636
#27 0x0000558c8852e742 in PyErr_PrintEx (set_sys_last_vars=<optimized out>, set_sys_last_vars=<optimized out>) at /tmp/build/80754af9/python_1614362349910/work/Python/pythonrun.c:646
#28 0x0000558c88540dd6 in PyRun_SimpleStringFlags (command=0x7efca4dc9050 "from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=9, pipe_handle=13)\n", flags=0x7ffe3a986110) at /tmp/build/80754af9/python_1614362349910/work/Python/pythonrun.c:457
#29 0x0000558c88540ead in pymain_run_command (cf=0x7ffe3a986110, command=<optimized out>) at /tmp/build/80754af9/python_1614362349910/work/Modules/main.c:420
#30 pymain_run_python (pymain=0x7ffe3a986220) at /tmp/build/80754af9/python_1614362349910/work/Modules/main.c:2907
#31 pymain_main (pymain=0x7ffe3a986220) at /tmp/build/80754af9/python_1614362349910/work/Modules/main.c:3460
#32 0x0000558c8854122c in _Py_UnixMain (argc=<optimized out>, argv=<optimized out>) at /tmp/build/80754af9/python_1614362349910/work/Modules/main.c:3495
#33 0x00007efca4632493 in __libc_start_main () from /lib64/libc.so.6
#34 0x0000558c884e5e90 in _start () at ../sysdeps/x86_64/elf/start.S:103
```

This was likely caused due to a static singleton that wasn't leaky. Following
the guidance in https://isocpp.org/wiki/faq/ctors#construct-on-first-use-v2 to
use a leaky singleton instead.
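For reference, a minimal sketch of that construct-on-first-use, never-destroy pattern (generic code, not the actual handle-pool change): the singleton is allocated with new and intentionally never deleted, so no destructor can run during exit-time static teardown.

```cpp
#include <iostream>

struct HandlePool {
  ~HandlePool() {
    // In the real code this path would call cublasDestroy, which is what
    // deadlocked when it ran from __run_exit_handlers at shutdown.
    std::cout << "destroying handles\n";
  }
};

// Leaky singleton: constructed on first use, never destroyed, so its
// destructor cannot run during process shutdown.
HandlePool& pool() {
  static HandlePool* instance = new HandlePool();
  return *instance;
}

int main() {
  HandlePool& p = pool();
  (void)p;
  // No destructor for the pool runs at exit.
  return 0;
}
```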
ghstack-source-id: 132847448

Test Plan: Verified locally.

Reviewed By: malfet

Differential Revision: D29468866

fbshipit-source-id: 89250594c5cd2643417b1da584c658b742dc5a5c
jjsjann123 pushed a commit that referenced this pull request Jun 8, 2022
… of libtorch_python (pytorch#78028)

Summary:
This moves torch::class_<WorkerInfo> into `rpc_agent.cpp` so it gets registered in libtorch instead of libtorch_python. This is intermediate work toward getting torch::deploy to load an unmodified copy of libtorch. Current RPC is incompatible due to duplicate registrations.

```
unknown file: Failure
C++ exception with description "Exception Caught inside torch::deploy embedded library:
Custom class with name __torch__.torch.classes.dist_rpc.WorkerInfo is already registered. Ensure that registration with torch::class_ is only called once.
Exception raised from registerCustomClass at ../aten/src/ATen/core/custom_class.cpp:61 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x3e (0x7f3bd9adb92e in /home/tristanr/venvs/multipy/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5c (0x7f3bd9ab7068 in /home/tristanr/venvs/multipy/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: torch::registerCustomClass(std::shared_ptr<c10::ClassType>) + 0x110 (0x7f3bc2258980 in /home/tristanr/venvs/multipy/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch::detail::class_base::class_base(std::string const&, std::string const&, std::string, std::type_info const&, std::type_info const&) + 0x3b9 (0x7f3bc225a419 in /home/tristanr/venvs/multipy/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #4: [0x7f3ba45cfea1]
frame #5: <unknown function> + 0x1b5334 (0x5652bdab9334 in ./test_deploy)
frame #6: <unknown function> + 0x1b4f3e (0x5652bdab8f3e in ./test_deploy)
frame #7: <unknown function> + 0x1b519b (0x5652bdab919b in ./test_deploy)
frame #8: loadSearchFile(char const*) + 0x23e (0x7f3ba62f37f8 in /tmp/torch_deploy9ATEFg)
frame #9: deploy_set_self + 0x51 (0x7f3ba62f38f9 in /tmp/torch_deploy9ATEFg)
frame #10: torch::deploy::Interpreter::Interpreter(torch::deploy::InterpreterManager*, std::shared_ptr<torch::deploy::Environment>) + 0x274 (0x5652bdaaa790 in ./test_deploy)
frame #11: void __gnu_cxx::new_allocator<torch::deploy::Interpreter>::construct<torch::deploy::Interpreter, torch::deploy::InterpreterManager*, std::shared_ptr<torch::deploy::Environment>&>(torch::deploy::Interpreter*, torch::deploy::InterpreterManager*&&, std::shared_ptr<torch::deploy::Environment>&) + 0x81 (0x5652bdaaf58b in ./test_deploy)
frame #12: void std::allocator_traits<std::allocator<torch::deploy::Interpreter> >::construct<torch::deploy::Interpreter, torch::deploy::InterpreterManager*, std::shared_ptr<torch::deploy::Environment>&>(std::allocator<torch::deploy::Interpreter>&, torch::deploy::Interpreter*, torch::deploy::InterpreterManager*&&, std::shared_ptr<torch::deploy::Environment>&) + 0x4a (0x5652bdaae320 in ./test_deploy)
frame #13: void std::vector<torch::deploy::Interpreter, std::allocator<torch::deploy::Interpreter> >::_M_realloc_insert<torch::deploy::InterpreterManager*, std::shared_ptr<torch::deploy::Environment>&>(__gnu_cxx::__normal_iterator<torch::deploy::Interpreter*, std::vector<torch::deploy::Interpreter, std::allocator<torch::deploy::Interpreter> > >, torch::deploy::InterpreterManager*&&, std::shared_ptr<torch::deploy::Environment>&) + 0xee (0x5652bdaae4a0 in ./test_deploy)
frame #14: void std::vector<torch::deploy::Interpreter, std::allocator<torch::deploy::Interpreter> >::emplace_back<torch::deploy::InterpreterManager*, std::shared_ptr<torch::deploy::Environment>&>(torch::deploy::InterpreterManager*&&, std::shared_ptr<torch::deploy::Environment>&) + 0xb6 (0x5652bdaad258 in ./test_deploy)
frame #15: torch::deploy::InterpreterManager::InterpreterManager(unsigned long, std::shared_ptr<torch::deploy::Environment>) + 0x123 (0x5652bdaa83b1 in ./test_deploy)
frame #16: TorchpyTest_InitTwice_Test::TestBody() + 0x65 (0x5652bda075a9 in ./test_deploy)
frame #17: void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x65 (0x5652bda944b7 in ./test_deploy)
frame #18: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x5a (0x5652bda8cfe7 in ./test_deploy)
frame #19: testing::Test::Run() + 0x100 (0x5652bda68622 in ./test_deploy)
frame #20: testing::TestInfo::Run() + 0x10f (0x5652bda68fb3 in ./test_deploy)
frame #21: testing::TestSuite::Run() + 0x121 (0x5652bda6980d in ./test_deploy)
frame #22: testing::internal::UnitTestImpl::RunAllTests() + 0x38e (0x5652bda756e6 in ./test_deploy)
frame #23: bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x65 (0x5652bda9586b in ./test_deploy)
frame #24: bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x5a (0x5652bda8e0f7 in ./test_deploy)
frame #25: testing::UnitTest::Run() + 0xc9 (0x5652bda73fd1 in ./test_deploy)
frame #26: RUN_ALL_TESTS() + 0x11 (0x5652bda169fa in ./test_deploy)
frame #27: main + 0x27 (0x5652bda10ce2 in ./test_deploy)
frame #28: <unknown function> + 0x2d310 (0x7f3bc0431310 in /usr/lib/libc.so.6)
frame #29: __libc_start_main + 0x81 (0x7f3bc04313c1 in /usr/lib/libc.so.6)
frame #30: _start + 0x25 (0x5652bda063b5 in ./test_deploy)
```

Test Plan: CI

Differential Revision: D36564258

Pull Request resolved: pytorch#78028
Approved by: https://github.com/rohan-varma
jjsjann123 pushed a commit that referenced this pull request Aug 29, 2022
Hi!

I was playing with libFuzzer and found a bug when loading a model from a file via the `torch::jit::load` function.
There is an unhandled exception in caffe2/serialize when calling `stoull` on an unsanitized version string.
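A hedged sketch of the kind of hardening this suggests (not the exact patch): guard the std::stoull call and turn parse failures into a readable error instead of an uncaught std::invalid_argument.

```cpp
#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <string>

// Sketch: parse a serialized-format version string defensively.
uint64_t parseVersion(const std::string& version) {
  try {
    size_t pos = 0;
    uint64_t value = std::stoull(version, &pos);
    if (pos != version.size()) {
      throw std::runtime_error("trailing characters in version: " + version);
    }
    return value;
  } catch (const std::invalid_argument&) {
    throw std::runtime_error("version is not a number: " + version);
  } catch (const std::out_of_range&) {
    throw std::runtime_error("version is out of range: " + version);
  }
}

int main() {
  try {
    parseVersion("ZZ");  // the kind of garbage the fuzzer fed in
  } catch (const std::runtime_error& e) {
    // A readable error instead of terminating on an uncaught exception.
    std::fprintf(stderr, "%s\n", e.what());
  }
  return 0;
}
```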

The bug can be reproduced with `aot_model_compiler` binary:
```
aot_model_compiler --model=crash-stoull --model_name=name --model_version=1 --input_dims='1,3,224,224;2,2' --input_types='float;float'
```

Crash file is provided in [crash.zip](https://github.com/pytorch/pytorch/files/8701504/crash.zip).

gdb output:
```
Temporary breakpoint 1, main (argc=6, argv=0x7ffcd160f9f8) at /pytorch_master/binaries/aot_model_compiler.cc:87
87	      "Run NNC AOT compiler for pytorch model. Example usage:\n"
(gdb) c
Continuing.
terminate called after throwing an instance of 'std::invalid_argument'
  what():  stoull

Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50	../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007fa637f16859 in __GI_abort () at abort.c:79
#2  0x00007fa6381c1911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007fa6381cd38c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007fa6381cd3f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00007fa6381cd6a9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007fa6381c42ce in std::__throw_invalid_argument(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x000000000247d567 in __gnu_cxx::__stoa<unsigned long long, unsigned long long, char, int> (__str=0x7ffcd160f228 "ZZ", __idx=0x0, __base=10, __convf=<optimized out>, __name=<optimized out>)
    at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/ext/string_conversions.h:83
#8  std::__cxx11::stoull (__str="ZZ", __idx=0x0, __base=10) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/basic_string.h:6577
#9  caffe2::serialize::PyTorchStreamReader::init (this=this@entry=0x8c11ce0) at /pytorch_master/caffe2/serialize/inline_container.cc:145
#10 0x000000000247d9c7 in caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader (this=0x8c11ce0, in=std::shared_ptr<class caffe2::serialize::ReadAdapterInterface> (empty) = {...})
    at /pytorch_master/caffe2/serialize/inline_container.cc:88
#11 0x00000000035b7ba4 in __gnu_cxx::new_allocator<caffe2::serialize::PyTorchStreamReader>::construct<caffe2::serialize::PyTorchStreamReader, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (
    __p=0x2, __args=..., this=<optimized out>) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/ext/new_allocator.h:150
#12 std::allocator_traits<std::allocator<caffe2::serialize::PyTorchStreamReader> >::construct<caffe2::serialize::PyTorchStreamReader, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (__a=...,
    __p=0x2, __p@entry=0x8c11ce0, __args=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/alloc_traits.h:512
#13 0x00000000035b1988 in std::_Sp_counted_ptr_inplace<caffe2::serialize::PyTorchStreamReader, std::allocator<caffe2::serialize::PyTorchStreamReader>, (__gnu_cxx::_Lock_policy)2>::_Sp_counted_ptr_inplace<std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (this=0x8c11cd0, __a=..., __args=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:551
#14 std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<caffe2::serialize::PyTorchStreamReader, std::allocator<caffe2::serialize::PyTorchStreamReader>, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (this=0x7ffcd160f3a8, __p=@0x7ffcd160f3a0: 0x10, __args=..., __a=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:683
#15 std::__shared_ptr<caffe2::serialize::PyTorchStreamReader, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<caffe2::serialize::PyTorchStreamReader>, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (this=0x7ffcd160f3a0, __args=..., __tag=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr_base.h:1371
#16 std::shared_ptr<caffe2::serialize::PyTorchStreamReader>::shared_ptr<std::allocator<caffe2::serialize::PyTorchStreamReader>, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (this=0x7ffcd160f3a0,
    __args=..., __tag=...) at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr.h:408
#17 std::allocate_shared<caffe2::serialize::PyTorchStreamReader, std::allocator<caffe2::serialize::PyTorchStreamReader>, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (__args=..., __a=...)
    at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr.h:859
#18 std::make_shared<caffe2::serialize::PyTorchStreamReader, std::shared_ptr<caffe2::serialize::ReadAdapterInterface> > (__args=...)
    at /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/shared_ptr.h:875
#19 torch::jit::load (rai=std::shared_ptr<class caffe2::serialize::ReadAdapterInterface> (empty) = {...}, device=device@entry=..., Python Exception <class 'gdb.error'> No type named std::__detail::_Hash_node<struct std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, true>.:
extra_files=std::unordered_map with 0 elements)
    at /pytorch_master/torch/csrc/jit/serialization/import.cpp:474
#20 0x00000000035b1ef6 in torch::jit::load (filename="crash-stoull", device=device@entry=..., Python Exception <class 'gdb.error'> No type named std::__detail::_Hash_node<struct std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, true>.:
extra_files=std::unordered_map with 0 elements) at /pytorch_master/torch/csrc/jit/serialization/import.cpp:444
#21 0x00000000035b1d22 in torch::jit::load (filename="", device=device@entry=...) at /pytorch_master/torch/csrc/jit/serialization/import.cpp:424
#22 0x00000000008f9be3 in main (argc=1, argv=0x7ffcd160f9f8) at /pytorch_master/binaries/aot_model_compiler.cc:128
```

Pull Request resolved: pytorch#77557
Approved by: https://github.com/Gamrix
ftxj pushed a commit to ftxj/pytorch that referenced this pull request May 12, 2023
…ytorch#94300)

Hi!

I've been fuzzing different pytorch modules, and found a crash inside one of them.

Specifically, I'm talking about a module for unpickling and a function called `Unpickler::readInstruction()`. Running this function with the provided crash file results in a crash, which occurs while calling `auto dict = stack_.at(dict_pos).toGenericDict();` [unpickler.cpp:561](https://github.com/pytorch/pytorch/blob/0e94fbc0c8ab1572c88159c1a4c397b6eb824c01/torch/csrc/jit/serialization/unpickler.cpp#L561). The crash occurs because the index `dict_pos` is out of bounds (which itself happens because the stack size is 0).
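A generic sketch of the hardening this points at (illustrative, not the exact patch): validate the computed position against the stack size before indexing, and raise a descriptive parse error otherwise.

```cpp
#include <cstddef>
#include <cstdio>
#include <stdexcept>
#include <string>
#include <vector>

// Sketch: guard a computed stack index instead of letting vector::at()
// throw an opaque std::out_of_range from deep inside a parser.
template <typename T>
const T& stackAt(const std::vector<T>& stack, size_t pos) {
  if (pos >= stack.size()) {
    throw std::runtime_error(
        "Parsing error: stack index " + std::to_string(pos) +
        " out of bounds for stack of size " + std::to_string(stack.size()));
  }
  return stack[pos];
}

int main() {
  std::vector<int> stack;  // empty, as in the crashing input
  try {
    stackAt(stack, static_cast<size_t>(-3));  // mirrors a bogus dict_pos
  } catch (const std::runtime_error& e) {
    std::fprintf(stderr, "%s\n", e.what());  // readable error, no crash
  }
  return 0;
}
```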

Besides this pull-request, there is another one related to unpickler hardening: pytorch#84343

All tests were performed on this pytorch version: [abc54f9](https://github.com/pytorch/pytorch/tree/abc54f93145830b502400faa92bec86e05422fbd)

### How to reproduce

1. To reproduce the crash, use provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch)

2. Build the container: `docker build -t oss-sydr-fuzz-pytorch-reproduce .`

3. Copy crash file to the current directory:

    - [crash-042dff5e121580425d9d34d0f293918f3c9fbf1e.zip](https://github.com/pytorch/pytorch/files/10674361/crash-042dff5e121580425d9d34d0f293918f3c9fbf1e.zip)

4. Run the container: ``docker run --privileged --network host -v `pwd`:/homedir --rm -it oss-sydr-fuzz-pytorch-reproduce /bin/bash``

5. And execute the binary: `/message_deserialize_sydr /homedir/crash-042dff5e121580425d9d34d0f293918f3c9fbf1e`

After execution completes you will see this error message:

```txt
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 18446744073709551613) >= this->size() (which is 0)
```

And this stacktrace:

```asan
erminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 18446744073709551613) >= this->size() (which is 0)
==39== ERROR: libFuzzer: deadly signal
    #0 0x5d0df1 in __sanitizer_print_stack_trace /llvm-project/compiler-rt/lib/asan/asan_stack.cpp:87:3
    #1 0x545727 in fuzzer::PrintStackTrace() /llvm-project/compiler-rt/lib/fuzzer/FuzzerUtil.cpp:210:5
    #2 0x52b933 in fuzzer::Fuzzer::CrashCallback() /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:233:3
    #3 0x7f9118e0341f  (/lib/x86_64-linux-gnu/libpthread.so.0+0x1441f)
    #4 0x7f9118c2300a in raise (/lib/x86_64-linux-gnu/libc.so.6+0x4300a)
    #5 0x7f9118c02858 in abort (/lib/x86_64-linux-gnu/libc.so.6+0x22858)
    #6 0x7f9119040910  (/lib/x86_64-linux-gnu/libstdc++.so.6+0x9e910)
    #7 0x7f911904c38b  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa38b)
    #8 0x7f911904c3f6 in std::terminate() (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa3f6)
    #9 0x7f911904c6a8 in __cxa_throw (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa6a8)
    #10 0x7f91190433aa  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xa13aa)
    #11 0x63acdf in std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_range_check(unsigned long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1073:4
    #12 0xce8f93e in std::vector<c10::IValue, std::allocator<c10::IValue> >::at(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1094:2
    #13 0xce8f93e in torch::jit::Unpickler::readInstruction() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:546:26
    #14 0xce8d527 in torch::jit::Unpickler::run() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:235:27
    #15 0xce8d1c2 in torch::jit::Unpickler::parse_ivalue() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:192:3
    #16 0xcdf0792 in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:127:20
    #17 0xcdf104d in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:137:10
    #18 0xe0532db in torch::distributed::rpc::ScriptRemoteCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/script_remote_call.cpp:74:16
    #19 0xe0ffa10 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/utils.cpp:108:14
    #20 0x602a41 in LLVMFuzzerTestOneInput /message_deserialize_fuzz.cc:192:27
    #21 0x52ce61 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
    #22 0x516d7c in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
    #23 0x51cacb in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
    #24 0x546062 in main /llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
    #25 0x7f9118c04082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
    #26 0x51169d in _start (/message_deserialize_fuzz+0x51169d)

NOTE: libFuzzer has rudimentary signal handlers.
      Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
```
Pull Request resolved: pytorch#94300
Approved by: https://github.com/malfet, https://github.com/apach301
ftxj pushed a commit to ftxj/pytorch that referenced this pull request May 25, 2023
When a tensor is resized, a reference array pointing to its sizes may become invalid. Make a copy in advance.
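For illustration, a minimal, self-contained sketch of this dangling-view pattern and the "copy in advance" fix is below. `FakeTensor` and `SizesView` are hypothetical stand-ins, not the real `at::Tensor` or `c10::ArrayRef<c10::SymInt>`; the point is only that a non-owning view taken before the resize can point at freed storage afterwards.

```cpp
// Minimal sketch (hypothetical stand-in types, not the actual ATen code) of the
// dangling-view pattern described above and the "copy in advance" fix.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Non-owning view into someone else's size storage.
struct SizesView {
  const int64_t* data;
  std::size_t len;
};

struct FakeTensor {
  std::vector<int64_t> sizes_;
  SizesView sizes() const { return {sizes_.data(), sizes_.size()}; }
  // Resizing may reallocate the underlying size storage, invalidating any
  // previously obtained SizesView.
  void resize_(std::vector<int64_t> new_sizes) { sizes_ = std::move(new_sizes); }
};

bool sizes_changed_buggy(FakeTensor& t, std::vector<int64_t> new_sizes) {
  SizesView old = t.sizes();        // view into t's current storage
  t.resize_(std::move(new_sizes));  // old.data may now point at freed memory
  // Reading through old.data here is the kind of use-after-free ASAN reports.
  return old.len != t.sizes_.size() ||
         !std::equal(old.data, old.data + old.len, t.sizes_.data());
}

bool sizes_changed_fixed(FakeTensor& t, std::vector<int64_t> new_sizes) {
  // Copy the sizes into owned storage *before* resizing.
  std::vector<int64_t> old(t.sizes().data, t.sizes().data + t.sizes().len);
  t.resize_(std::move(new_sizes));
  return old != t.sizes_;           // safe: compares an owned copy
}
```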

<details>
<summary>ASAN report</summary>

```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
    #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
    #1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
    csarofeen#2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
    csarofeen#3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
    csarofeen#4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-
v11/bits/stl_algobase.h:1162
    csarofeen#5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/
stl_algobase.h:1211
    csarofeen#6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s
tl_algobase.h:1219
    csarofeen#7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg
obase.h:1556
    csarofeen#8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
    csarofeen#9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
    csarofeen#10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab
leTypeManual.cpp:408
    csarofeen#11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    csarofeen#12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy
mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp
atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    csarofeen#13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    csarofeen#14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    csarofeen#15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten
sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
    csarofeen#16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c
10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
    csarofeen#17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
    csarofeen#18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
    csarofeen#19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto
rch/torch/csrc/autograd/VariableTypeManual.cpp:243
    csarofeen#20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10
::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu
nctionIntoFunctor.h:13
    csarofeen#21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor
 const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c
10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor
.h:480
    csarofeen#22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    csarofeen#23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    csarofeen#24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co
nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at
en/src/ATen/core/dispatch/Dispatcher.h:639
    csarofeen#25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>,
c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    csarofeen#26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137
    csarofeen#27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452
    csarofeen#28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us
er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417
    csarofeen#29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419
    csarofeen#30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344
    csarofeen#31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#33 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    csarofeen#35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#56 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    csarofeen#58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#65 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#72 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    csarofeen#74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#81 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#90 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267
    csarofeen#98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    csarofeen#106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    csarofeen#109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    csarofeen#110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#111 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#118 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    csarofeen#120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#133 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#142 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    csarofeen#148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305
    csarofeen#151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#159 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#168 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#183 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#190 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#205 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#214 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    csarofeen#220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    csarofeen#223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    csarofeen#224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#225 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    csarofeen#234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#240 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#249 0x3ffa2e05447 in call_function Python/ceval.c:5891
    csarofeen#250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    csarofeen#255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215

0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800)
freed by thread T0 here:
    #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75

previously allocated by thread T0 here:
    #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul
l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S
torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498
    csarofeen#2 0x3ff76f79e17  (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17)

SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const
Shadow bytes around the buggy address:
  0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa
  0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1115867==ABORTING
```
</details>

<details>
<summary>Additional backtraces (not full)</summary>

Memory deallocation:
```
#0  operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75
csarofeen#2  0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291
csarofeen#3  0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370
csarofeen#4  0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80
csarofeen#5  0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90
csarofeen#6  0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173
csarofeen#7  0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (
    this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
csarofeen#8  c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=...,
    args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
csarofeen#9  0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
csarofeen#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96
csarofeen#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
csarofeen#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
csarofeen#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
csarofeen#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
csarofeen#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401
```

Memory access:
```
#0  c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215
#1  0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69
csarofeen#2  0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177
csarofeen#3  0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
csarofeen#4  0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
csarofeen#5  0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
csarofeen#6  0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
csarofeen#7  0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188
csarofeen#8  0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341
csarofeen#9  0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
csarofeen#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c
10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
 > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
csarofeen#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt
>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<
c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
csarofeen#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar
rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern
el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
csarofeen#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
csarofeen#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr
ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
csarofeen#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&,
c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
csarofeen#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
csarofeen#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
csarofeen#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...)
    at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
```
</details>
Pull Request resolved: pytorch#101064
Approved by: https://github.com/Skylion007, https://github.com/albanD
ftxj pushed a commit to ftxj/pytorch that referenced this pull request May 25, 2023
arguments() returns a reference to a vector member of the object returned by the schema() call. When the object returned by schema() is destroyed, that vector is deallocated as well; its lifetime isn't extended.

This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN.
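A minimal, self-contained sketch of that lifetime bug follows. `Schema` and `Method` here are hypothetical stand-ins, not the real `c10::FunctionSchema` or mobile `Method` types; the point is that a reference obtained from a member accessor of a by-value return dangles once the temporary is destroyed, and binding the returned object to a local first avoids that.

```cpp
// Minimal sketch (hypothetical stand-in types) of the lifetime bug described
// above: arguments() returns a reference into the object that schema() returns
// by value.
#include <cstddef>
#include <string>
#include <vector>

struct Schema {
  std::vector<std::string> arguments_;
  // Returns a reference to a member; only valid while this Schema is alive.
  const std::vector<std::string>& arguments() const { return arguments_; }
};

struct Method {
  Schema schema() const { return Schema{{"self", "other"}}; }  // returned by value
};

std::size_t total_len_buggy(const Method& m) {
  // No lifetime extension through the arguments() call: the temporary Schema
  // is destroyed at the end of this statement, so `args` dangles afterwards.
  const auto& args = m.schema().arguments();
  std::size_t n = 0;
  for (const auto& a : args) n += a.size();  // use-after-free when read
  return n;
}

std::size_t total_len_fixed(const Method& m) {
  const auto schema = m.schema();          // keep the Schema alive
  const auto& args = schema.arguments();
  std::size_t n = 0;
  for (const auto& a : args) n += a.size();  // safe: `schema` outlives `args`
  return n;
}
```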

<details>
<summary>ASAN output</summary>

```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
    #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-i
bm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
    #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
    csarofeen#2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
    csarofeen#3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
    csarofeen#4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
    csarofeen#5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybin
d11/include/pybind11/pybind11.h:249
    csarofeen#6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/incl
ude/pybind11/pybind11.h:224
    csarofeen#7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
    csarofeen#8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
    csarofeen#9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    csarofeen#10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    csarofeen#11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#14 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
    csarofeen#20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    csarofeen#23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    csarofeen#24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#25 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#34 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#41 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    csarofeen#43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#50 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#59 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
    csarofeen#67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    csarofeen#70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    csarofeen#75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    csarofeen#78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    csarofeen#79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#80 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#87 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    csarofeen#89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #102 0x3ffab105447 in call_function Python/ceval.c:5891
    #103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #111 0x3ffab105447 in call_function Python/ceval.c:5891
    #112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
    #120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #128 0x3ffab105447 in call_function Python/ceval.c:5891
    #129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #137 0x3ffab105447 in call_function Python/ceval.c:5891
    #138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #152 0x3ffab105447 in call_function Python/ceval.c:5891
    #153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #159 0x3ffab105447 in call_function Python/ceval.c:5891
    #160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #174 0x3ffab105447 in call_function Python/ceval.c:5891
    #175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #183 0x3ffab105447 in call_function Python/ceval.c:5891
    #184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #194 0x3ffab105447 in call_function Python/ceval.c:5891
    #195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #209 0x3ffab105447 in call_function Python/ceval.c:5891
    #210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #218 0x3ffab105447 in call_function Python/ceval.c:5891
    #219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #229 0x3ffab105447 in call_function Python/ceval.c:5891
    #230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #236 0x3ffab105447 in call_function Python/ceval.c:5891
    #237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #243 0x3ffab105447 in call_function Python/ceval.c:5891
    #244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290

0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
    #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145

previously allocated by thread T0 here:
    #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
    #2 0x3fff5849ecf  ([stack]+0xb2ecf)

SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
  0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
  0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1134126==ABORTING
```

Additional backtraces (not full):
Allocation:
```
#0  __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1  0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
    stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2  0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
    at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
#3  0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
#4  0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
#5  0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
#6  0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
#7  0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
#8  0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
#9  0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
    this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
    __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
    field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
    at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
    args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:
```
#0  operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
    __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2  0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
    __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3  0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
    this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
#4  0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
#5  0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
#6  0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#7  0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#8  0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#9  0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
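
For readers following the traces: a plausible reading of the two backtraces above is that `toPyObject` ends up holding a reference into a `FunctionSchema` owned only by a temporary `TupleType` produced by `TupleTypeFactory<TupleType>::fallback`, and that owner is destroyed before the arguments are iterated. The snippet below is a minimal, self-contained C++ sketch of that lifetime pattern, not the PyTorch code; all names in it (`Arg`, `Schema`, `TupleType`, `makeTupleType`) are hypothetical stand-ins.

```
// Minimal, self-contained sketch of the lifetime bug the report above points
// at. It is NOT the actual PyTorch code; Arg, Schema, TupleType and
// makeTupleType are hypothetical stand-ins.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Arg {
  std::string name;
};

struct Schema {
  std::vector<Arg> args;
  const std::vector<Arg>& arguments() const { return args; }
};

struct TupleType {
  std::shared_ptr<Schema> schema_;
  const Schema& schema() const { return *schema_; }
};

// Rough analogue of a factory such as TupleTypeFactory::fallback(): it builds
// a brand-new type (and a brand-new Schema) on every call, so the returned
// shared_ptr is the only owner of both objects.
std::shared_ptr<TupleType> makeTupleType() {
  auto schema = std::make_shared<Schema>();
  schema->args.push_back({"field0"});
  auto type = std::make_shared<TupleType>();
  type->schema_ = std::move(schema);
  return type;
}

int main() {
  // BUG: the temporary shared_ptr<TupleType> returned by makeTupleType() is
  // destroyed at the end of this full expression, which also destroys the
  // Schema and its args vector; `args` dangles immediately.
  const std::vector<Arg>& args = makeTupleType()->schema().arguments();

  // Iterating the freed vector is the heap-use-after-free that ASan flags
  // (the real report catches it while constructing __normal_iterator).
  for (const Arg& a : args) {
    std::cout << a.name << '\n';
  }

  // Fix pattern: keep an owner alive for as long as the reference is used,
  // e.g.  auto type = makeTupleType();
  //       const auto& ok = type->schema().arguments();
  return 0;
}
```

Built with `-fsanitize=address`, a toy like this should produce the same shape of report as above: allocation inside `make_shared`, deallocation inside the owning smart pointer's destructor chain, and the use flagged while iterating the freed vector.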
Pull Request resolved: pytorch#101400
Approved by: https://github.com/zou3519