[TRTLLM-7101][infra] Reuse passed tests #6894
Base: main
@@ -0,0 +1,94 @@
import argparse
import os
import sys
import xml.etree.ElementTree as ET

import test_rerun


def get_passed_tests(input_file, output_file):
    if not os.path.exists(input_file):
        print(f"Input file {input_file} does not exist")
        return

    # Parse the JUnit XML file and extract passed test names
    passed_tests = []
    try:
        tree = ET.parse(input_file)
        root = tree.getroot()
        suite = root.find('testsuite')
        for testcase in suite.iter('testcase'):
            # Check test status
            has_failure = testcase.find('failure') is not None
            has_error = testcase.find('error') is not None
            has_skipped = testcase.find('skipped') is not None
            if not has_failure and not has_error and not has_skipped:
                # Parse the test name
                classname = testcase.attrib.get('classname', '')
                name = testcase.attrib.get('name', '')
                filename = testcase.attrib.get('file', '')
                test_name = test_rerun.parse_name(classname, name, filename)
                passed_tests.append(test_name)
    except Exception as e:
        print(f"Failed to parse {input_file}: {e}")
        return
Comment on lines +32 to +34:

🛠️ Refactor suggestion: catch a narrow exception type and log to stderr. Catching broad Exception hides unrelated issues. Limit the handler to XML parse errors and send diagnostics to stderr. Apply this diff:

-    except Exception as e:
-        print(f"Failed to parse {input_file}: {e}")
+    except ET.ParseError as e:
+        print(f"Failed to parse {input_file}: {e}", file=sys.stderr)
         return
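Note: the parser above assumes a report whose root element wraps the suite, as pytest's junitxml output does (a testsuites root containing one testsuite). If the root element is itself the testsuite, root.find('testsuite') returns None and suite.iter('testcase') raises AttributeError, which the broad except above silently reports as a parse failure; a narrowed handler would surface it. A minimal illustration (hypothetical input, not taken from this PR):

import xml.etree.ElementTree as ET

# A report whose root element is the suite itself, with no <testsuites> wrapper.
root = ET.fromstring('<testsuite><testcase name="t"/></testsuite>')
print(root.find('testsuite'))  # None: find() only searches children, and the root *is* the suite
# root.find('testsuite').iter('testcase') would raise AttributeError here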
    # Write passed test names to output file, one per line
    with open(output_file, 'w') as f:
        for test in passed_tests:
            f.write(test + '\n')


def remove_passed_tests(input_file, passed_tests_file):
    if not os.path.exists(input_file):
        print(f"Input file {input_file} does not exist")
        return
    if not os.path.exists(passed_tests_file):
        print(f"Passed tests file {passed_tests_file} does not exist")
        return
Comment on lines +42 to +49 (also applies to lines 50-71):

🛠️ Refactor suggestion: remove_passed_tests has issues with input semantics, performance, and I/O safety. Membership checks against a list are O(n) per lookup, blank lines are kept as tests, files are opened without an explicit encoding, errors go to stdout, and the input file is deleted before its replacement is written, so a crash in between loses the test list. Apply this diff:

-def remove_passed_tests(input_file, passed_tests_file):
+def remove_passed_tests(input_file: str, passed_tests_file: str) -> None:
+    """Remove tests that already passed from a newline-delimited test list file.
+
+    Args:
+        input_file: Path to a file containing tests to run (one per line).
+        passed_tests_file: Path to a file containing already-passed tests (one per line).
+    """
     if not os.path.exists(input_file):
-        print(f"Input file {input_file} does not exist")
+        print(f"Input file {input_file} does not exist", file=sys.stderr)
         return
     if not os.path.exists(passed_tests_file):
-        print(f"Passed tests file {passed_tests_file} does not exist")
+        print(f"Passed tests file {passed_tests_file} does not exist", file=sys.stderr)
         return
-    passed_tests = []
-    # Read passed tests from file
-    with open(passed_tests_file, 'r') as f:
-        for line in f:
-            passed_tests.append(line.strip())
+    # Read passed tests from file
+    with open(passed_tests_file, 'r', encoding='utf-8') as f:
+        passed_tests = {line.strip() for line in f if line.strip()}
-    tests_to_keep = []
-    # Remove passed tests from input file
-    with open(input_file, 'r') as f:
-        for line in f:
-            if line.strip() not in passed_tests:
-                tests_to_keep.append(line.strip())
+    # Remove passed tests from input file
+    with open(input_file, 'r', encoding='utf-8') as f:
+        tests_to_keep = [line.strip()
+                         for line in f
+                         if line.strip() and line.strip() not in passed_tests]
-    # Delete input file
-    try:
-        os.remove(input_file)
-    except Exception as e:
-        print(f"Failed to delete {input_file}: {e}")
     # Write tests to keep to input file
-    with open(input_file, 'w') as f:
+    with open(input_file, 'w', encoding='utf-8') as f:
         for test in tests_to_keep:
             f.write(test + '\n')
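Beyond the diff above, the delete-then-rewrite window can be closed entirely with an atomic replace. A minimal sketch (the helper name is illustrative, not part of the PR):

import os
import tempfile


def rewrite_atomically(path, lines):
    # Write the pruned list to a temp file in the same directory, then
    # atomically swap it over the original, so a crash mid-write can never
    # leave a truncated or missing test list.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        for line in lines:
            f.write(line + "\n")
    os.replace(tmp, path)  # atomic within a single filesystem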
    passed_tests = []
    # Read passed tests from file
    with open(passed_tests_file, 'r') as f:
        for line in f:
            passed_tests.append(line.strip())

    tests_to_keep = []
    # Remove passed tests from input file
    with open(input_file, 'r') as f:
        for line in f:
            if line.strip() not in passed_tests:
                tests_to_keep.append(line.strip())

    # Delete input file
    try:
        os.remove(input_file)
    except Exception as e:
        print(f"Failed to delete {input_file}: {e}")
    # Write tests to keep to input file
    with open(input_file, 'w') as f:
        for test in tests_to_keep:
            f.write(test + '\n')


if __name__ == '__main__':
    if (sys.argv[1] == "get_passed_tests"):
        parser = argparse.ArgumentParser()
        parser.add_argument('--input-file',
                            required=True,
                            help='Input XML file containing test results')
        parser.add_argument('--output-file',
                            required=True,
                            help='Output file to write passed tests')
        args = parser.parse_args(sys.argv[2:])
        get_passed_tests(args.input_file, args.output_file)
    elif (sys.argv[1] == "remove_passed_tests"):
        parser = argparse.ArgumentParser()
        parser.add_argument('--input-file',
                            required=True,
                            help='Input XML file containing test results')
        parser.add_argument('--passed-tests-file',
                            required=True,
                            help='File containing passed tests')
        args = parser.parse_args(sys.argv[2:])
        remove_passed_tests(args.input_file, args.passed_tests_file)
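To make the intended flow concrete, here is a self-contained smoke test of both entry points. Two assumptions to flag: the script is imported as get_test_info (the new file's real path is not visible in this view), and test_rerun.parse_name is stubbed as classname::name since that module is not part of this diff:

import sys
import types

# Stub test_rerun before importing the script, since it imports the module at top level.
stub = types.ModuleType("test_rerun")
stub.parse_name = lambda classname, name, filename: f"{classname}::{name}"
sys.modules["test_rerun"] = stub

import get_test_info  # hypothetical module name for the new script

JUNIT = """<testsuites>
  <testsuite name="pytest">
    <testcase classname="tests.test_a" name="test_ok" file="tests/test_a.py"/>
    <testcase classname="tests.test_a" name="test_bad" file="tests/test_a.py">
      <failure message="boom"/>
    </testcase>
  </testsuite>
</testsuites>"""

with open("results.xml", "w") as f:
    f.write(JUNIT)
with open("test_list.txt", "w") as f:
    f.write("tests.test_a::test_ok\ntests.test_a::test_bad\n")

get_test_info.get_passed_tests("results.xml", "passed.txt")       # collect passed tests
get_test_info.remove_passed_tests("test_list.txt", "passed.txt")  # prune them from the list

print(open("passed.txt").read())     # tests.test_a::test_ok
print(open("test_list.txt").read())  # tests.test_a::test_bad

This mirrors the CLI flow: get_passed_tests runs after a stage produces JUnit XML, and remove_passed_tests prunes the rerun list before the next attempt.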
@@ -1462,6 +1462,7 @@ def test_openai_completions_example(llm_root, llm_venv, backend: str):
@pytest.mark.parametrize("backend", ["pytorch", "trt"])
def test_openai_chat_example(llm_root, llm_venv, backend: str):
    pytest.fail("test_openai_chat_example")
Remove the unconditional failure; it breaks every parametrized run of this test. Apply this diff:

-    pytest.fail("test_openai_chat_example")
+    # Removed accidental debug fail; enable test execution
    test_root = unittest_path() / "llmapi" / "apps"
    filter_expr = f"{backend} and not sampler"
    llm_venv.run_cmd([
SBSA single-GPU: ensure reuseBuild correctness and commit guard.

Plumbing is correct. Per the PR objective, reuse should be ignored when the Git commit differs, but that logic isn't present here. Consider deriving the reused build's commit and comparing it to env.gitlabCommit before passing reuseArtifactPath. I can draft a helper to query the reused build's recorded commit and null out reuse if it differs.
SBSA single-GPU: enforce commit check before reusing artifacts.

launchJob currently sets reuseArtifactPath whenever reuseBuild is truthy; there is no verification that the reused build's recorded commit matches env.gitlabCommit. That allows reusing artifacts from a different commit.

Suggested minimal diff (conceptual). Replace in launchJob:
if (reuseBuild) {
    parameters['reuseArtifactPath'] = "sw-tensorrt-generic/llm-artifacts/${JOB_NAME}/${reuseBuild}"
}
With:
if (reuseBuild) {
    def validReuse = validateReuseBuildCommit(reuseBuild)
    if (validReuse) {
        parameters['reuseArtifactPath'] = "sw-tensorrt-generic/llm-artifacts/${JOB_NAME}/${validReuse}"
    } else {
        echo "Ignoring reuse_build=${reuseBuild}: recorded commit != ${env.gitlabCommit}"
    }
}
I can draft the validateReuseBuildCommit helper (using Artifactory REST API or rtDownload to fetch a small commit marker) and a ready-to-drop patch if you want.
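For reference, one shape such a helper could take, sketched in Jenkins pipeline Groovy. Everything here is an assumption rather than part of this PR: it presumes the build stage uploads a small commit.txt marker next to its artifacts, and ARTIFACTORY_URL is a placeholder for the Artifactory base endpoint:

// Sketch only: returns the build id when its recorded commit matches the
// current commit, else null so the caller falls back to a fresh build.
def validateReuseBuildCommit(String reuseBuild) {
    def marker = "sw-tensorrt-generic/llm-artifacts/${JOB_NAME}/${reuseBuild}/commit.txt"
    def recordedCommit = sh(
        script: "curl -sf ${env.ARTIFACTORY_URL}/${marker} || true",
        returnStdout: true
    ).trim()
    if (recordedCommit && recordedCommit == env.gitlabCommit) {
        return reuseBuild
    }
    return null
}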