
Conversation

@xin3he xin3he (Contributor) commented May 30, 2024

Type of Change

documentation

  • quantization.md (Xin & Zehao)
  • torch.md (Xin)
  • PT_DynamicQuant.md (Yi)
  • PT_StaticQuant.md (Yi & Zixuan)
    • IPEX
    • PT2E
  • PT_SmoothQuant.md (Zixuan)
  • PT_MXQuant.md (Mengni)
  • PT_MixPrecision.md (Zehao)
  • PT_WeightOnlyQuant.md (Kaihui)


github-actions bot commented May 30, 2024

⛈️ Required checks status: Has failure 🔴

Warning
If you do not have access to re-run the Probot, please contact XuehaoSun for help. If you push a new commit, all of the workflows will be re-triggered.

Groups summary

🟢 Code Scan Tests workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| Code-Scan | success | |
| Code-Scan (Bandit Code Scan Bandit) | success | |
| Code-Scan (DocStyle Code Scan DocStyle) | success | |
| Code-Scan (Pylint Code Scan Pylint) | success | |

These checks are required after the changes to neural_compressor/torch/quantization/autotune.py, neural_compressor/torch/quantization/load_entry.py, neural_compressor/torch/quantization/quantize.py.

🔴 Model Tests 3x workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| Model-Test-3x | failure | |
| Model-Test-3x (Generate Report GenerateReport) | no_status | |
| Model-Test-3x (Run PyTorch Model opt_125m_woq_gptq_int4) | success | |
| Model-Test-3x (Run PyTorch Model opt_125m_woq_gptq_int4_dq_bnb) | failure | |
| Model-Test-3x (Run PyTorch Model opt_125m_woq_gptq_int4_dq_ggml) | success | |

These checks are required after the changes to neural_compressor/torch/quantization/autotune.py, neural_compressor/torch/quantization/load_entry.py, neural_compressor/torch/quantization/quantize.py.

🔴 Unit Tests 3x-PyTorch workflow
| Check ID | Status | Error details |
| --- | --- | --- |
| UT-3x-Torch | failure | |
| UT-3x-Torch (Coverage Compare CollectDatafiles) | failure | download |
| UT-3x-Torch (Unit Test 3x Torch Unit Test 3x Torch) | success | |
| UT-3x-Torch (Unit Test 3x Torch baseline Unit Test 3x Torch baseline) | success | |

These checks are required after the changes to neural_compressor/torch/quantization/autotune.py, neural_compressor/torch/quantization/load_entry.py, neural_compressor/torch/quantization/quantize.py.


Thank you for your contribution! 💜

Note
This comment is automatically generated and will be updated every 180 seconds for the next 6 hours. If you have any other questions, contact chensuyue or XuehaoSun for help.

@xin3he xin3he marked this pull request as draft May 30, 2024 05:51
Signed-off-by: xin3he <[email protected]>
@xin3he xin3he marked this pull request as ready for review June 3, 2024 02:45
@xin3he xin3he requested a review from thuang6 June 3, 2024 03:04
violetch24 and others added 3 commits June 3, 2024 12:18
Signed-off-by: Cheng, Zixuan <[email protected]>
* add pt dynamic quant

Signed-off-by: yiliu30 <[email protected]>

* correct typo

Signed-off-by: yiliu30 <[email protected]>

---------

Signed-off-by: yiliu30 <[email protected]>
mengniwang95 and others added 2 commits June 3, 2024 17:02
Signed-off-by: zehao-intel <[email protected]>
@zehao-intel zehao-intel self-requested a review June 4, 2024 06:12
Signed-off-by: yiliu30 <[email protected]>
Comment on lines +44 to +45
> [!IMPORTANT]
> To use static quantization with the IPEX backend, please explicitly import IPEX at the beginning of your program.
@violetch24 Please be aware of this usage behavior change; I have aligned on it with @ftian1 and @xin3he.
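
For context, a minimal sketch of the usage this note describes, assuming the 3.x `neural_compressor.torch` quantization entry points (`StaticQuantConfig`, `prepare`, `convert`) and that IPEX is installed; the toy model, input shapes, and default-backend behavior are illustrative assumptions, not taken from this PR:

```python
# Illustrative sketch only: API names below are assumed from the 3.x
# neural_compressor.torch quantization entry points.
import intel_extension_for_pytorch as ipex  # noqa: F401  # explicit IPEX import first, as the note requires
import torch
from neural_compressor.torch.quantization import StaticQuantConfig, prepare, convert

# Toy fp32 model and example input, purely for illustration.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
example_inputs = (torch.randn(1, 8),)

quant_config = StaticQuantConfig()  # assumes the IPEX path as the static-quant backend
prepared = prepare(model, quant_config=quant_config, example_inputs=example_inputs)
prepared(*example_inputs)  # calibration pass with representative data
q_model = convert(prepared)
```

The key point of the behavior change is the first line: IPEX must be imported explicitly at the top of the program before the static-quantization calls are made.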

@chensuyue chensuyue merged commit ecffc2e into master Jun 4, 2024
@chensuyue chensuyue deleted the pt_doc branch June 4, 2024 08:25