Defer global caching of CodeInstance to post-optimization step #58343
Conversation
In certain cases, the optimizer can introduce new type information. This is particularly evident in SROA, where load forwarding can reveal type information that was not visible during abstract interpretation. In such cases, re-running abstract interpretation with this new type information can be highly valuable; currently, however, this occurs only when semi-concrete interpretation happens to be triggered. This commit introduces a new "post-optimization inference" phase at the end of the optimizer pipeline. When the optimizer derives new type information, this phase performs IR abstract interpretation to further optimize the IR.
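To make that concrete, here is a minimal Julia sketch (illustrative only, not code from this PR) of the kind of program where SROA's load forwarding reveals type information that abstract interpretation alone cannot see:

```julia
# Illustrative example (not from this PR). The field `x` is declared `Any`,
# so abstract interpretation infers the load `w.x` as `Any`. Once the
# allocation is proven non-escaping, SROA forwards the stored value `1`
# directly to the load, revealing that the result is in fact an `Int`.
mutable struct Wrap
    x::Any
end

function use()
    w = Wrap(1)     # allocation that SROA can eliminate
    return w.x + 1  # load forwarding turns `w.x` into the constant `1`
end
```

Re-running abstract interpretation over the post-SROA IR can then resolve the `+` call to plain integer addition, which is what the proposed post-optimization inference phase enables without depending on semi-concrete interpretation being triggered.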
Thanks for working on this refactoring.
* Adjust to JuliaLang/julia#58343
* Make branch static
* Bump version
Defer global caching of CodeInstance to post-optimization step (JuliaLang#58343)

This PR extracts the caching improvements from JuliaLang#56687, implemented by @aviatesk. It essentially defers global caching to the post-optimization step, giving a temporary cache to the optimizer instead of relying on the global cache.

The issue with caching globally before optimization is that any errors occurring within the optimizer may leave a partially initialized `CodeInstance` in the cache, which was meant to be updated post-optimization. Exceptions should not be thrown in the native compilation pipeline, but abstract interpreters extending optimization routines will frequently encounter them during iterative development (see JuliaComputing/DAECompiler.jl#25).

An extra benefit of deferring global caching is that the optimizer can then more safely update `CodeInstance`s with new inference information, as is the intent of JuliaLang#56687.

Co-authored-by: Shuhei Kadowaki <[email protected]>
Co-authored-by: Cody Tapscott <[email protected]>
Doesn't backport cleanly.
Is it the PR author who backports it in this case? In any case, I'll do this one.
I don't think the backport is necessarily required.
That's true, it can definitely wait for 1.13.
Removed the backport label. |
This PR extracts the caching improvements from #56687, implemented by @aviatesk. It essentially defers global caching to the post-optimization step, giving a temporary cache to the optimizer instead of relying on the global cache.

The issue with caching globally before optimization is that any errors occurring within the optimizer may leave a partially initialized `CodeInstance` in the cache, which was meant to be updated post-optimization. Exceptions should not be thrown in the native compilation pipeline, but abstract interpreters extending optimization routines will frequently encounter them during iterative development (see JuliaComputing/DAECompiler.jl#25).

An extra benefit of deferring global caching is that the optimizer can then more safely update `CodeInstance`s with new inference information, as is the intent of #56687.
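As a rough sketch of the caching discipline described above (all names here are hypothetical and heavily simplified, not the actual Core.Compiler API), the idea is that the optimizer works against a temporary cache, and entries are published to the global cache only after optimization succeeds:

```julia
# Hypothetical sketch, not the actual Core.Compiler implementation.
const GLOBAL_CACHE = IdDict{Any,Any}()

struct TempCache
    entries::IdDict{Any,Any}
end
TempCache() = TempCache(IdDict{Any,Any}())

# Stand-in for the real optimizer pipeline; it records its result in the
# cache it was handed rather than in the global one.
optimize!(mi, src, cache::TempCache) = (cache.entries[mi] = "optimized($src)")

function compile_with_deferred_caching(mi, src)
    temp = TempCache()
    # The optimizer sees only the temporary cache, so an exception thrown
    # here cannot leave a partially initialized entry in GLOBAL_CACHE.
    result = optimize!(mi, src, temp)
    # Publish the finished entries to the global cache only on success.
    merge!(GLOBAL_CACHE, temp.entries)
    return result
end

compile_with_deferred_caching(:mi, :src)  # GLOBAL_CACHE now holds "optimized(src)"
```

If `optimize!` throws, `GLOBAL_CACHE` is left untouched; the same separation also gives the optimizer a safe place to refine entries with newly derived inference information before they become globally visible.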