
Conversation

@xinhe-nv (Collaborator) commented on Jul 24, 2025

Skip FP8 tests on A100.
Skip mistral-small-3.1-24b and gemma-3-27b on L20.
Test reports:
- https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/view/TRT-LLM-Function-Pipelines/job/LLM_FUNCTION_TEST_DEBUG/1463/allure/
- https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/view/TRT-LLM-Function-Pipelines/job/LLM_FUNCTION_TEST_DEBUG/1464/allure/

Summary by CodeRabbit

  • Tests
    • Updated test skipping logic for FP8 prequantized model tests to use a new condition, improving test suite accuracy.
    • Added conditional skipping for certain multimodal model tests based on device memory requirements.
    • Introduced new test cases for the gemma-3-27b-it-gemma model with and without image input.
    • Added specific test cases to the skip list due to known issues.

@coderabbitai bot (Contributor) commented on Jul 24, 2025

## Walkthrough

Decorators to conditionally skip tests (`@skip_pre_hopper` and `pytest.mark.skip_less_device_memory`) were added to specific test cases in the integration test suite. Several FP8 prequantized model tests are now skipped on GPUs older than the Hopper architecture, and two multimodal model tests are skipped if device memory is insufficient. Two new test cases were added to a test list, and test skip entries were added for known bugs in the multimodal test suite.
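The actual decorator definitions live in the repository's test utilities; a minimal sketch of how such a capability-gated skip marker can be built with pytest follows. The helper name and the compute-capability values are assumptions for illustration, not the real implementation:

```python
import pytest

# Hypothetical capability lookup; the real suite queries the GPU driver.
# SM values: 80 = A100 (Ampere), 89 = L20/L40 (Ada), 90 = H100 (Hopper).
def get_compute_capability() -> int:
    return 90

# Skip any decorated test on GPUs older than Hopper (SM 90), where the
# FP8 prequantized paths are unsupported.
skip_pre_hopper = pytest.mark.skipif(
    get_compute_capability() < 90,
    reason="FP8 prequantized tests require Hopper (SM 90) or newer",
)

@skip_pre_hopper
def test_fp8_prequantized():
    # Placeholder body; the real tests run accuracy checks end to end.
    assert True
```

Because the condition is evaluated once at collection time, swapping `@skip_pre_ada` for `@skip_pre_hopper` (as this PR does) simply raises the minimum architecture at which the test runs.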

## Changes

| File(s)                                                                                   | Change Summary                                                                                      |
|------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| tests/integration/defs/accuracy/test_cli_flow.py                                         | Replaced `@skip_pre_ada` with `@skip_pre_hopper` on multiple FP8 prequantized test methods.       |
| tests/integration/defs/accuracy/test_llm_api_pytorch.py                                  | Added `@skip_pre_hopper` decorator to multiple FP8 prequantized test methods across classes.      |
| tests/integration/defs/test_e2e.py                                                       | Updated two parameterized test cases to use `pytest.param` with `skip_less_device_memory(80000)`.  |
| tests/integration/test_lists/qa/llm_sanity_test.txt                                      | Added two new test cases for `gemma-3-27b-it-gemma` with image input enabled and disabled.         |
| tests/integration/test_lists/waives.txt                                                  | Added two skip entries for multimodal test cases referencing known bugs (nvbugs/5401114, 5414909).|
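The `test_e2e.py` change wraps individual parameter sets in `pytest.param` so that a mark applies to just one case instead of the whole test. A generic sketch of that pattern — the model names and the custom `skip_less_device_memory` marker mirror this PR, but their semantics are assumed here:

```python
import pytest

# Per-case marks via pytest.param: only the wrapped parameterization is
# affected; the other cases run unconditionally. The marker name mirrors
# the one used in the TRT-LLM suite; its meaning (skip when the device
# has less than ~80 GB of memory) is assumed for this sketch.
heavy = pytest.param(
    "gemma-3-27b-it",
    marks=pytest.mark.skip_less_device_memory(80000),
)

@pytest.mark.parametrize("model_name", [
    "mistral-small-3.1-24b-instruct",
    heavy,
])
def test_multimodal(model_name):
    # Placeholder body; the real tests launch the model end to end.
    assert isinstance(model_name, str)
```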

## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Tester
    participant Pytest
    participant Decorator

    Tester->>Pytest: Run integration tests
    Pytest->>Decorator: Evaluate @skip_pre_hopper/@skip_less_device_memory
    alt Skip condition met
        Decorator-->>Pytest: Mark test as skipped
        Pytest-->>Tester: Report test skipped
    else Condition not met
        Pytest->>Tester: Run test normally
    end
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

  • test: skip llama3.3 70b test on cg4 #6293: Changes skip decorators from @skip_pre_ada to @skip_pre_hopper on FP8 prequantized tests in TestLlama3_3_70BInstruct, overlapping in scope but affecting different test methods with additional logic changes.

Suggested reviewers

  • crazydemo
  • LarryXFly
  • pamelap-nvidia
  • yilin-void



---

<details>
<summary>📜 Recent review details</summary>

**Configuration used: .coderabbit.yaml**
**Review profile: CHILL**
**Plan: Pro**


<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 654cc387acc03a0586a5867dbe2065c2188fed02 and 610376f6788bc60d8fe6ceaf311548b6efe66def.

</details>

<details>
<summary>📒 Files selected for processing (5)</summary>

* `tests/integration/defs/accuracy/test_cli_flow.py` (3 hunks)
* `tests/integration/defs/accuracy/test_llm_api_pytorch.py` (4 hunks)
* `tests/integration/defs/test_e2e.py` (1 hunks)
* `tests/integration/test_lists/qa/llm_sanity_test.txt` (1 hunks)
* `tests/integration/test_lists/waives.txt` (2 hunks)

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (5)</summary>

* tests/integration/test_lists/qa/llm_sanity_test.txt
* tests/integration/defs/test_e2e.py
* tests/integration/defs/accuracy/test_cli_flow.py
* tests/integration/test_lists/waives.txt
* tests/integration/defs/accuracy/test_llm_api_pytorch.py

</details>

<details>
<summary>⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)</summary>

* GitHub Check: Pre-commit Check

</details>

</details>

---



<details>
<summary>🪧 Tips</summary>

### Chat

There are 3 ways to chat with [CodeRabbit](https://coderabbit.ai?utm_source=oss&utm_medium=github&utm_campaign=NVIDIA/TensorRT-LLM&utm_content=6333):

- Review comments: Directly reply to a review comment made by CodeRabbit. Example:
  - `I pushed a fix in commit <commit_id>, please review it.`
  - `Explain this complex logic.`
  - `Open a follow-up GitHub issue for this discussion.`
- Files and specific lines of code (under the "Files changed" tab): Tag `@coderabbitai` in a new review comment at the desired location with your query. Examples:
  - `@coderabbitai explain this code block.`
  - `@coderabbitai modularize this function.`
- PR comments: Tag `@coderabbitai` in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
  - `@coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.`
  - `@coderabbitai read src/utils.ts and explain its main purpose.`
  - `@coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.`
  - `@coderabbitai help me debug CodeRabbit configuration file.`

### Support

Need help? Create a ticket on our [support page](https://www.coderabbit.ai/contact-us/support) for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

### CodeRabbit Commands (Invoked using PR comments)

- `@coderabbitai pause` to pause the reviews on a PR.
- `@coderabbitai resume` to resume the paused reviews.
- `@coderabbitai review` to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
- `@coderabbitai full review` to do a full review from scratch and review all the files again.
- `@coderabbitai summary` to regenerate the summary of the PR.
- `@coderabbitai generate docstrings` to [generate docstrings](https://docs.coderabbit.ai/finishing-touches/docstrings) for this PR.
- `@coderabbitai generate sequence diagram` to generate a sequence diagram of the changes in this PR.
- `@coderabbitai generate unit tests` to generate unit tests for this PR.
- `@coderabbitai resolve` resolve all the CodeRabbit review comments.
- `@coderabbitai configuration` to show the current CodeRabbit configuration for the repository.
- `@coderabbitai help` to get help.

### Other keywords and placeholders

- Add `@coderabbitai ignore` anywhere in the PR description to prevent this PR from being reviewed.
- Add `@coderabbitai summary` to generate the high-level summary at a specific location in the PR description.
- Add `@coderabbitai` or `@coderabbitai title` anywhere in the PR title to generate the title automatically.

### Documentation and Community

- Visit our [Documentation](https://docs.coderabbit.ai) for detailed information on how to use CodeRabbit.
- Join our [Discord Community](http://discord.gg/coderabbit) to get help, request features, and share feedback.
- Follow us on [X/Twitter](https://twitter.com/coderabbitai) for updates and announcements.

</details>


@xinhe-nv xinhe-nv requested review from crazydemo and LarryXFly July 24, 2025 14:30
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/test_lists/waives.txt (1)

443-443: Nit: keep bug-URL hostnames consistent

Elsewhere we sometimes use the full host https://nvbugspro.nvidia.com/bug/…. Using a uniform hostname (nvbugspro vs nvbugs) simplifies grep-based analytics on the waive list. Optional but worth considering for future entries.
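A tiny sketch of the grep-style analytics this nitpick alludes to — tallying which bug-URL styles a waive list uses, so mixed hostnames stand out. The hostnames mirror the examples in this thread; the function name and entry lines are illustrative:

```python
import re
from collections import Counter

def count_bug_hosts(lines):
    """Tally bug-URL styles found in waive-list lines, so a mix of
    short nvbugs/... references and full nvbugspro.nvidia.com/bug/...
    URLs is easy to spot."""
    pattern = re.compile(r"(nvbugspro\.nvidia\.com/bug|nvbugs)/\d+")
    return Counter(m.group(1) for line in lines for m in pattern.finditer(line))

# Illustrative entries shaped like real waive-list lines.
sample = [
    "suite/test_a SKIP (https://nvbugspro.nvidia.com/bug/5401114)",
    "suite/test_b SKIP (nvbugs/5414909)",
]
print(count_bug_hosts(sample))
```

A count split across both styles signals that future entries should settle on one hostname, as the review suggests.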

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b6aadc and 8f75f6c.

📒 Files selected for processing (1)
  • tests/integration/test_lists/waives.txt (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tests/integration/test_lists/waives.txt (1)

Learnt from: yiqingy0
PR: #5198
File: jenkins/mergeWaiveList.py:0-0
Timestamp: 2025-07-22T08:33:49.109Z
Learning: In the TensorRT-LLM waive list merging system, removed lines are always located at the end of the merge waive lists, which is why the mergeWaiveList.py script uses reverse traversal - it's an optimization for this specific domain constraint.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tests/integration/test_lists/waives.txt (1)

443-443: Entry format is valid – change accepted

The new skip line follows existing conventions (test node, SKIP, reason in parentheses). No duplicate of TestLlama3_2_1B::test_fp8_prequantized exists above, so the entry is safe to merge.
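The convention described above — test node, SKIP, reason in parentheses — can be illustrated with a sample entry. The line below is shaped like the ones discussed in this thread but is itself illustrative:

```text
accuracy/test_llm_api_pytorch.py::TestLlama3_2_1B::test_fp8_prequantized SKIP (https://nvbugspro.nvidia.com/bug/5401114)
```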

@xinhe-nv xinhe-nv force-pushed the user/qa/post_update_waive_20250724_LLM_FUNCTION_TEST_1037 branch from 8f75f6c to 56b4f65 on July 25, 2025 03:23
@coderabbitai coderabbitai bot requested review from yilin-void and yiqingy0 July 25, 2025 03:23
@xinhe-nv xinhe-nv force-pushed the user/qa/post_update_waive_20250724_LLM_FUNCTION_TEST_1037 branch from 56b4f65 to fc7ae80 on July 25, 2025 03:52
@coderabbitai coderabbitai bot requested a review from yuxianq July 25, 2025 03:52
@xinhe-nv xinhe-nv force-pushed the user/qa/post_update_waive_20250724_LLM_FUNCTION_TEST_1037 branch 2 times, most recently from 9060c39 to 654cc38, on July 25, 2025 04:46
@coderabbitai coderabbitai bot requested a review from pamelap-nvidia July 25, 2025 04:47
@xinhe-nv xinhe-nv changed the title from "Draft: test: [CI] Add failed cases into waives.txt" to "test: [CI] Add failed cases into waives.txt" on Jul 25, 2025
@xinhe-nv xinhe-nv marked this pull request as ready for review July 25, 2025 05:14
@xinhe-nv xinhe-nv enabled auto-merge (squash) July 25, 2025 05:14
@xinhe-nv (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #12947 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12947 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #9655 completed with status: 'FAILURE'

Signed-off-by: Xin He (SW-GPU) <[email protected]>
@xinhe-nv xinhe-nv force-pushed the user/qa/post_update_waive_20250724_LLM_FUNCTION_TEST_1037 branch from 654cc38 to 610376f on July 25, 2025 06:53
@LarryXFly LarryXFly disabled auto-merge July 25, 2025 07:18
@LarryXFly LarryXFly merged commit 470544c into NVIDIA:main Jul 25, 2025
2 checks passed
@xinhe-nv xinhe-nv deleted the user/qa/post_update_waive_20250724_LLM_FUNCTION_TEST_1037 branch July 25, 2025 07:46
NVShreyas pushed a commit to NVShreyas/TensorRT-LLM that referenced this pull request Jul 28, 2025
Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: Shreyas Misra <[email protected]>
Ransiki pushed a commit to Ransiki/TensorRT-LLM that referenced this pull request Jul 29, 2025
Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: Ransiki Zhang <[email protected]>
lancelly pushed a commit to lancelly/TensorRT-LLM that referenced this pull request Aug 6, 2025
Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>