support PGO on custom project #110605
Conversation
(branch force-pushed from b40b1ba to 6eade7b)
Hi, the use case seems reasonable. Care should be taken, though, that the custom command generates the profiles into the correct files - you will thus probably want to distinguish between gathering profiles for LLVM/Rustc/BOLT even for custom runners, and pass the correct environment variables to them. Therefore, I would prefer a more "OOP-like" implementation, to reduce the complexity and conditionals in the logic of the individual stages, and also to move the various benchmark logic into a single place. Something like:

```python
class BenchmarkRunner:
    def run_rustc(self, pipeline: Pipeline): ...
    def run_llvm(self, pipeline: Pipeline): ...
    def run_bolt(self, pipeline: Pipeline): ...
    # (or maybe just a single method that would receive an enum - Rustc/LLVM/BOLT)

class DefaultBenchmarkRunner(BenchmarkRunner):
    def run_rustc(self, pipeline: Pipeline):
        self.run_compiler_benchmarks(
            pipeline,
            profiles=["Check", "Debug", "Opt"],
            scenarios=["All"],
            crates=RUSTC_PGO_CRATES,
            env=dict(
                LLVM_PROFILE_FILE=str(pipeline.rustc_profile_template_path())
            )
        )
    ...

class CustomBenchmarkRunner(BenchmarkRunner):
    ...
```

...and then some function that creates the corresponding runner.
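The single-method-with-enum alternative mentioned in the comment could be sketched roughly as below. This is a minimal illustration, not the actual stage-build script: `PGOStage`, `create_runner`, and the `custom` flag are hypothetical names introduced here.

```python
from enum import Enum


class PGOStage(Enum):
    """Hypothetical enum naming the three profile-gathering stages."""
    LLVM = "llvm"
    RUSTC = "rustc"
    BOLT = "bolt"


class BenchmarkRunner:
    def run(self, stage: PGOStage, env: dict) -> None:
        """Run the benchmarks for one stage; `env` carries the
        stage-specific variables (e.g. LLVM_PROFILE_FILE)."""
        raise NotImplementedError


class DefaultBenchmarkRunner(BenchmarkRunner):
    def run(self, stage: PGOStage, env: dict) -> None:
        # The real script would invoke the compiler benchmarks here;
        # this sketch only records what would be executed.
        print(f"running {stage.value} benchmarks with env {env}")


def create_runner(custom: bool) -> BenchmarkRunner:
    # Hypothetical factory: a CustomBenchmarkRunner would be returned
    # when the user supplies their own benchmark command.
    return DefaultBenchmarkRunner()
```

The factory keeps the stage logic free of per-runner conditionals: each stage just asks the runner to execute with the environment it prepared.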
Good, this is how I imagined it :) Is this enough for you? Or do you also want to implement support for custom runners in this PR?
Thanks a lot for your kind review.

```python
# content of a custom build.py
from stage-build import BenchmarkRunner, run

class CustomRunner(BenchmarkRunner):
    ...

runner = CustomRunner()
run(runner)
```

So I moved the logic in
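The `run(runner)` entry point suggested above can be sketched as follows. This is a simplified, hypothetical version: the real stage-build pipeline drives many more steps, and `Pipeline` here is a minimal stand-in.

```python
class Pipeline:
    """Stand-in for the real pipeline object passed to each stage."""
    def __init__(self) -> None:
        self.stages_run: list = []


class BenchmarkRunner:
    # Default behaviour; a custom runner overrides these methods.
    def run_llvm(self, pipeline: Pipeline) -> None:
        pipeline.stages_run.append("llvm")

    def run_rustc(self, pipeline: Pipeline) -> None:
        pipeline.stages_run.append("rustc")

    def run_bolt(self, pipeline: Pipeline) -> None:
        pipeline.stages_run.append("bolt")


def run(runner: BenchmarkRunner) -> Pipeline:
    # Drive the PGO stages in order, delegating benchmark execution
    # to the supplied runner (default or custom).
    pipeline = Pipeline()
    runner.run_llvm(pipeline)
    runner.run_rustc(pipeline)
    runner.run_bolt(pipeline)
    return pipeline
```

A custom distribution then only subclasses `BenchmarkRunner` and calls `run`, without touching the stage-ordering logic itself.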
Alright, that sounds reasonable :) Let's see if everything still works.

@bors try
@rust-timer queue
⌛ Trying commit 27beb46 with merge 47ee1f814e2773f0403b768ccb711b607af2b80d...

☀️ Try build successful - checks-actions
Finished benchmarking commit (47ee1f814e2773f0403b768ccb711b607af2b80d): comparison URL.

Overall result: no relevant changes - no action needed

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

@bors rollup=never

Instruction count: this benchmark run did not return any relevant results for this metric.

Max RSS (memory usage): this is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: this benchmark run did not return any relevant results for this metric.

Binary size: this benchmark run did not return any relevant results for this metric.

Bootstrap: 660.274s -> 659.215s (-0.16%)
Perf looks good, LGTM. I don't have merge rights, so reassigning.
@bors r=Kobzol This seems fine to me, though of course no stability guarantees are offered on these scripts. |
☀️ Test successful - checks-actions
Finished benchmarking commit (ba6f5e3): comparison URL.

Overall result: ✅ improvements - no action needed

@rustbot label: -perf-regression

Instruction count: this is a highly reliable metric that was used to determine the overall result at the top of this comment.

Max RSS (memory usage): this is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: this is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size: this benchmark run did not return any relevant results for this metric.

Bootstrap: 641.167s -> 643.087s (0.30%)
Make PGO easier for custom toolchain distributions.

r? @Kobzol