Fix a bug in LinearActivationQuantizedTensor #1400


Merged: 2 commits into pytorch:main, Dec 11, 2024

Conversation

@jerryzh168 (Contributor) commented Dec 11, 2024

Summary:
quant_kwargs is not populated in some places, which results in errors when DTensor is used with these tensors (e.g. the DTensor.from_local failure below); this PR fixes it.
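The failure mode can be illustrated with a toy sketch (the class and function names below are hypothetical, not torchao's actual implementation): a subclass constructor grows a required quant_kwargs argument, and any op implementation that rebuilds the wrapper without forwarding it raises a TypeError.

```python
# Toy sketch of the failure mode (hypothetical names, not torchao's real code).
# The tensor subclass gained a required constructor argument, quant_kwargs,
# but some op implementations still rebuilt the wrapper without passing it.

class QuantTensorSketch:
    def __init__(self, original_tensor, input_quant_func, quant_kwargs):
        self.original_tensor = original_tensor
        self.input_quant_func = input_quant_func
        self.quant_kwargs = quant_kwargs  # required, newly added field

def broken_rebuild(t):
    # Pre-fix call site: drops quant_kwargs, so Python raises
    # "missing 1 required positional argument: 'quant_kwargs'"
    return QuantTensorSketch(t.original_tensor, t.input_quant_func)

t = QuantTensorSketch([1.0, 2.0], lambda x: x, {"target_dtype": "int8"})
try:
    broken_rebuild(t)
except TypeError as e:
    print(e)
```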

Previous error:

  File "/data/users/jerryzh/ao/test/dtypes/test_affine_quantized_tensor_parallel.py", line 102, in _test_tp
    up_dist = self.colwise_shard(up_quant, mesh)
  File "/data/users/jerryzh/ao/test/dtypes/test_affine_quantized_tensor_parallel.py", line 41, in colwise_shard
    dtensor = DTensor.from_local(local_shard, mesh, [Shard(0)])
  File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 425, in from_local
    return _FromTorchTensor.apply(  # pyre-ignore[16]: autograd func
  File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/jerryzh/.conda/envs/ao/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 179, in forward
    input.view_as(input),
  File "/data/users/jerryzh/ao/torchao/utils.py", line 434, in _dispatch__torch_function__
    return func(*args, **kwargs)
  File "/data/users/jerryzh/ao/torchao/utils.py", line 449, in _dispatch__torch_dispatch__
    return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
  File "/data/users/jerryzh/ao/torchao/utils.py", line 410, in wrapper
    return func(f, types, args, kwargs)
  File "/data/users/jerryzh/ao/torchao/quantization/linear_activation_quantized_tensor.py", line 218, in _
    LinearActivationQuantizedTensor(
TypeError: LinearActivationQuantizedTensor.__new__() missing 1 required positional argument: 'quant_kwargs'
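The fix direction can be sketched with a self-contained toy example (hypothetical names, not the actual patch): every code path that reconstructs the subclass forwards quant_kwargs, so ops like view_as succeed and DTensor.from_local can wrap the result.

```python
# Toy sketch of the fix direction (hypothetical names, not the actual patch):
# call sites that rebuild the tensor subclass now forward quant_kwargs.

class QTensor:
    def __init__(self, original_tensor, input_quant_func, quant_kwargs):
        self.original_tensor = original_tensor
        self.input_quant_func = input_quant_func
        self.quant_kwargs = quant_kwargs

def fixed_rebuild(t):
    # Post-fix call site: quant_kwargs travels with the tensor, so
    # rewrapping after an op no longer raises TypeError.
    return QTensor(t.original_tensor, t.input_quant_func, t.quant_kwargs)

src = QTensor([1.0, 2.0], lambda x: x, {"target_dtype": "int8"})
dst = fixed_rebuild(src)
print(dst.quant_kwargs)  # {'target_dtype': 'int8'}
```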

Test Plan:

# run on a local H100 machine only; CI does not support H100 right now
python test/dtypes/test_affine_quantized_tensor_parallel.py

Reviewers:

Subscribers:

Tasks:

Tags:

pytorch-bot (bot) commented Dec 11, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1400

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 2bad194 with merge base cac5261:

BROKEN TRUNK - The following job failed but was also failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 11, 2024
@jerryzh168 jerryzh168 changed the title [WIP] fix a bug in LinearActivationQuantizedTensor Fix a bug in LinearActivationQuantizedTensor Dec 11, 2024
@drisspg (Contributor) left a comment


Good catch!

@jerryzh168 jerryzh168 added the topic: bug fix Use this tag for PRs that fix bugs label Dec 11, 2024
@jerryzh168 jerryzh168 merged commit 63b30ca into pytorch:main Dec 11, 2024
17 of 18 checks passed
amdfaa pushed a commit that referenced this pull request Jan 10, 2025
* Fix a bug in LinearActivationQuantizedTensor

* ruff