This repository was archived by the owner on Aug 1, 2025. It is now read-only.

Debug issue with tacotron2 #82

Description

jansel (Contributor)

tacotron2 was recently added to torchbench with get_module() support, but it does not appear to work properly.

@anijain2305 reported an error when running:

python torchbench.py --devices=cuda --only=tacotron2

I haven't had a chance to look into this one yet, but I'm creating an issue so that it does not get lost. Feel free to add more details, @anijain2305.

Activity

jansel added this to the Python Coverage milestone on Mar 22, 2022

jansel (Contributor, Author) commented on Mar 31, 2022

The error message here is:

ERROR:root:unhandled error
Traceback (most recent call last):
  File "./torchbench.py", line 982, in run_one_model
    new_result = model_iter_fn(model, example_inputs)
  File "./torchbench.py", line 470, in forward_pass
    def forward_pass(mod, inputs, collect_outputs=True):
  File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1111, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jansel/torchbenchmark/torchbenchmark/models/tacotron2/model.py", line 499, in forward
    def forward(self, inputs):
  File "/home/jansel/torchbenchmark/torchbenchmark/models/tacotron2/model.py", line 499, in forward
    def forward(self, inputs):
  File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1111, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jansel/torchbenchmark/torchbenchmark/models/tacotron2/model.py", line 381, in forward
    def forward(self, memory, decoder_inputs, memory_lengths):
  File "/home/jansel/torchbenchmark/torchbenchmark/models/tacotron2/model.py", line 254, in get_go_frame
    decoder_input = Variable(memory.data.new(
  File "/home/jansel/torchbenchmark/torchbenchmark/models/tacotron2/model.py", line 243, in get_go_frame
    def get_go_frame(self, memory):
  File "/home/jansel/torchdynamo/torchdynamo/eval_frame.py", line 58, in _fn
    return fn(*args, **kwargs)
  File "/home/jansel/torchdynamo/torchdynamo/profiler.py", line 168, in _wrapped
    result = gm.forward(*args)
  File "<eval_with_key>.7", line 6, in forward
    variable = torch.autograd.variable.Variable(zero_);  zero_ = None
AttributeError: 'function' object has no attribute 'Variable'

anijain2305 (Contributor) commented on Apr 19, 2022

Repro here:

import torch
import torchdynamo

def fn(a):
    b = torch.autograd.Variable(a)
    return b + 2

a = torch.randn(4)
ref = fn(a)

with torchdynamo.optimize("eager"):
    res = fn(a)

assert torch.allclose(ref, res)

The generated FX graph is this:

def forward(self, a : torch.Tensor):
    variable = torch.autograd.variable.Variable(a);  a = None
    add = variable + 2;  variable = None
    return (add,)

For some reason, the generated FX graph has torch.autograd.variable.Variable, when it should have been torch.autograd.Variable.
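
FX prints a call target using the name the Python object reports about itself rather than the name the user wrote. A minimal sketch of that mechanism (a simplification, not the exact torch.fx implementation) shows why the submodule path appears:

import torch

# Simplified stand-in for how FX-style codegen derives a printable,
# importable name for a call target:
def qualified_name(obj):
    return f"{obj.__module__}.{obj.__qualname__}"

# Variable is defined in the torch.autograd.variable submodule, so its
# __module__ points there rather than at the public torch.autograd alias:
print(qualified_name(torch.autograd.Variable))
# -> 'torch.autograd.variable.Variable'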

jansel (Contributor, Author) commented on Apr 19, 2022

This sounds like an upstream bug in PyTorch.

The issue is:

>>> import torch.autograd
>>> torch.autograd.Variable.__module__
'torch.autograd.variable'
>>> 

I bet the following monkey patch will fix it:

torch.autograd.Variable.__module__ = 'torch.autograd'
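
Applied to the repro above, the workaround would look something like this (an untested sketch):

import torch
import torchdynamo

# Suggested workaround: report the public alias as the class's module, so
# FX codegen emits 'torch.autograd.Variable', which resolves correctly.
torch.autograd.Variable.__module__ = "torch.autograd"

def fn(a):
    b = torch.autograd.Variable(a)
    return b + 2

a = torch.randn(4)
ref = fn(a)

with torchdynamo.optimize("eager"):
    res = fn(a)

# Passes if the generated graph now references an importable name.
assert torch.allclose(ref, res)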

anijain2305 (Contributor) commented on Apr 19, 2022

Ah, it's because the deprecated variable function overloads (shadows) the torch.autograd.variable submodule name:

https://github.com/pytorch/pytorch/blob/master/torch/autograd/__init__.py#L298-L300
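
To illustrate the shadowing (assuming an affected PyTorch build, where the deprecated helper is still defined):

import torch

# In affected builds, the deprecated `variable` function defined in
# torch/autograd/__init__.py shadows the torch.autograd.variable submodule:
print(torch.autograd.variable)
# <function variable at 0x...>   (a function, not the submodule)

# So the generated call `torch.autograd.variable.Variable(zero_)` does an
# attribute lookup on a function and fails with:
# AttributeError: 'function' object has no attribute 'Variable'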

anijain2305 (Contributor) commented on Apr 19, 2022

It is a bug in upstream PyTorch. But given that Variable is deprecated for external use (https://pytorch.org/docs/stable/autograd.html#variable-deprecated), I suggest changing the tacotron2 model to use Tensors instead of Variables: https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/tacotron2/model.py#L270-L285 (see the sketch below). What do you think, @jansel?

Unless the suggested change is just to add that single monkey-patch line to PyTorch:

torch.autograd.Variable.__module__ = 'torch.autograd'
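
A hypothetical sketch of the Tensor-based change in get_go_frame; the original body is paraphrased from the traceback above, and the replacement relies on Tensor.new_zeros:

# Before (torchbenchmark/models/tacotron2/model.py, get_go_frame,
# paraphrased from the traceback above):
def get_go_frame(self, memory):
    B = memory.size(0)
    decoder_input = Variable(memory.data.new(
        B, self.n_mel_channels * self.n_frames_per_step).zero_())
    return decoder_input

# After: Variables and Tensors were merged in PyTorch 0.4, so the wrapper
# (and .data) can be dropped in favor of new_zeros:
def get_go_frame(self, memory):
    B = memory.size(0)
    return memory.new_zeros(B, self.n_mel_channels * self.n_frames_per_step)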

jansel (Contributor, Author) commented on Apr 19, 2022

I'd just add the following to ./torch/autograd/__init__.py:

# Fix for FX codegen
Variable.__module__ = "torch.autograd"

jansel (Contributor, Author) commented on Apr 19, 2022

We could apply the patch in our codebase.
