
Conversation

serenity4 (Member)

This PR extracts the caching improvements from #56687, implemented by @aviatesk. It essentially defers global caching to the post-optimization step, giving the optimizer a temporary cache instead of relying on the global cache.

The issue with caching globally before optimization is that any errors occurring within the optimizer may leave a partially initialized `CodeInstance` in the cache, one that was meant to be updated post-optimization. Exceptions should not be thrown in the native compilation pipeline, but abstract interpreters extending optimization routines will frequently encounter them during iterative development (see JuliaComputing/DAECompiler.jl#25).

An extra benefit of deferring global caching is that the optimizer can then more safely update `CodeInstance`s with new inference information, as is the intent of #56687.
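As a minimal sketch of the control flow (using hypothetical names — `TemporaryCache`, `infer_then_optimize!`, `run_inference`, `run_optimizer` — rather than the actual Compiler internals), the key property is that the global cache is only written after optimization has completed:

```julia
# Hypothetical sketch of deferred global caching; none of these names are
# the real Compiler API. The optimizer only ever sees a temporary cache.
struct TemporaryCache
    entries::Dict{Any,Any}  # stand-in for a MethodInstance => CodeInstance map
end
TemporaryCache() = TemporaryCache(Dict{Any,Any}())

function infer_then_optimize!(global_cache::Dict, mi, run_inference, run_optimizer)
    ci = run_inference(mi)          # unoptimized result of abstract interpretation
    local_cache = TemporaryCache()  # temporary cache handed to the optimizer
    local_cache.entries[mi] = ci
    optimized = run_optimizer(ci, local_cache)  # may throw in an extended interpreter
    # Only now, after optimization succeeded, is the global cache written,
    # so a throwing optimizer can never leave a partially initialized entry:
    global_cache[mi] = optimized
    return optimized
end
```

If `run_optimizer` throws, the exception propagates before `global_cache` is touched, which is exactly the property this PR is after.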

aviatesk and others added 4 commits February 2, 2025 22:20
In certain cases, the optimizer can introduce new type information.
This is particularly evident in SROA, where load forwarding can reveal
type information that was not visible during abstract interpretation.
In such cases, re-running abstract interpretation using this new type
information can be highly valuable; however, this currently occurs only
when semi-concrete interpretation happens to be triggered.

This commit introduces a new "post-optimization inference" phase at the
end of the optimizer pipeline. When the optimizer derives new type
information, this phase performs IR abstract interpretation to further
optimize the IR.
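As a toy illustration of the kind of refinement involved (an assumed example, not code from this commit): once SROA removes the allocation below and forwards the load `r.x` to the stored value `v`, the IR knows the loaded value is an `Int`, even though the `Any`-typed field hid that during abstract interpretation.

```julia
# Assumed example: the field is declared `Any`, so abstract interpretation
# only infers `r.x::Any`; SROA's load forwarding replaces the load with `v::Int`.
mutable struct AnyBox
    x::Any
end

function addone(v::Int)
    r = AnyBox(v)  # allocation that SROA can eliminate
    y = r.x        # load forwarding rewrites this to `v`, revealing `y::Int`
    return y + 1   # with the refined type, `+` resolves to a fast integer add
end
```

Re-running abstract interpretation on the optimized IR is what lets the `y + 1` call be resolved concretely instead of remaining a dynamic dispatch.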
@aviatesk (Member) left a comment

Thanks for working on this refactoring.

@vtjnash added the compiler:inference (Type inference), merge me (PR is reviewed. Merge when all tests are passing), and backport 1.12 (Change should be backported to release-1.12) labels May 8, 2025
@Keno merged commit 24d2f4a into JuliaLang:master May 9, 2025 (4 of 7 checks passed)
serenity4 added commits to serenity4/Cthulhu.jl that referenced this pull request May 9, 2025
serenity4 added a commit to JuliaDebug/Cthulhu.jl that referenced this pull request May 9, 2025
* Adjust to JuliaLang/julia#58343

* Make branch static

* Bump version
serenity4 added a commit to serenity4/Diffractor.jl that referenced this pull request May 9, 2025
@serenity4 deleted the cache-after-opt branch May 9, 2025 20:28
Keno pushed a commit to JuliaDiff/Diffractor.jl that referenced this pull request May 9, 2025
@KristofferC mentioned this pull request May 9, 2025 (58 tasks)
@giordano removed the merge me label May 11, 2025
charleskawczynski pushed a commit to charleskawczynski/julia that referenced this pull request May 12, 2025
…iaLang#58343)

This PR extracts the caching improvements from
JuliaLang#56687, implemented by @aviatesk.
It essentially defers global caching to the post-optimization step,
giving the optimizer a temporary cache instead of relying on the
global cache.

The issue with caching globally before optimization is that any errors
occurring within the optimizer may leave a partially initialized
`CodeInstance` in the cache, one that was meant to be updated
post-optimization. Exceptions should not be thrown in the native
compilation pipeline, but abstract interpreters extending optimization
routines will frequently encounter them during iterative development
(see JuliaComputing/DAECompiler.jl#25).

An extra benefit of deferring global caching is that the optimizer can
then more safely update `CodeInstance`s with new inference information,
as is the intent of JuliaLang#56687.

---------

Co-authored-by: Shuhei Kadowaki <[email protected]>
Co-authored-by: Cody Tapscott <[email protected]>
@KristofferC (Member)

Doesn't backport cleanly.

@serenity4 (Member, Author)

> Doesn't backport cleanly.

Is it the PR author who backports it in this case? In any case, I'll do this one.

@Keno (Member)

Keno commented May 28, 2025

I don't think the backport is necessarily required.

@serenity4 (Member, Author)

That's true, it can definitely wait for 1.13.

@KristofferC mentioned this pull request Jun 6, 2025 (60 tasks)
@aviatesk removed the backport 1.12 label Jun 7, 2025
@aviatesk (Member)

aviatesk commented Jun 7, 2025

Removed the backport label.
