Currently, when encountering a generic function call, the compiler will make the generic instance inherit the eval branch quota of the instance's first caller. The intention here is that the caller can set whatever eval branch quota is made necessary by the comptime-known inputs to the generic function.
However, this system introduces a dependency on which use site of a generic function is the first to be semantically analyzed. It's also not trivial for us to give it the minimum quota of any call site, because it's possible that a call is seen after the instance's body is analyzed.
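To make the mechanism concrete, here is a minimal sketch of the kind of program affected. It is not taken from any real project; `heavyComptime`, `callerA`, `callerB`, and the constants are made up for illustration.

```zig
// A generic function whose comptime work scales with `n`; instantiating it
// consumes roughly `n` eval branches.
fn heavyComptime(comptime n: usize) usize {
    comptime {
        var i: usize = 0;
        while (i < n) : (i += 1) {}
    }
    return n;
}

fn callerA() usize {
    // Raises the quota above the default before instantiating
    // heavyComptime(5000); under the current rules, the instance inherits
    // this caller's quota if this is the first call site analyzed.
    @setEvalBranchQuota(10_000);
    return heavyComptime(5000);
}

fn callerB() usize {
    // Uses the same instantiation without raising the quota. Whether the
    // program compiles can depend on whether callerA or callerB happens to
    // be analyzed first.
    return heavyComptime(5000);
}

pub fn main() void {
    _ = callerA();
    _ = callerB();
}
```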
I consider status quo a bug, because it disallows parallelizing semantic analysis and would in theory require us to codify the exact analysis order into the language specification. It can also cause inconsistent behavior under incremental compilation. For these reasons, I'm attaching this to the 0.15.0 milestone.
I'm opening this issue to track this design flaw. We need to put some thought into how the eval branch quota works, and come up with a design that resolves both this and, ideally, #21324 (optimally without requiring a second "instantiation quota").
cc @SpexGuy -- if you have any thoughts, I'd be interested in hearing them.
mlugg added the "bug" label (Observed behavior contradicts documented or intended behavior) on Jan 5, 2025
> It's also not trivial for us to give it the minimum quota of any call site, because it's possible that a call is seen after the instance's body is analyzed.
(Reposting a similar idea to my comment in the other issue, because it still seems viable to me:)
To me, what does seem trivial is to abort compilation with an error whenever any one call site's quota is insufficient:
- For every completed instantiation (the "first" encountered call site, potentially several in parallel), save what the quota increase was. (If several ran in parallel, we can assert that all jobs agreed on this quota increase.)
- For any subsequently encountered (memoized) call, charge the previously saved quota increase against the current scope's consumed quota; if that then exceeds the current quota, fail compilation (sketched below).
Afaiu, one instantiation of a function (with one set of comptime information supplied: arguments plus captures from outer scopes) should always lead to the exact same quota increase
(provided we implement the second point, i.e. add up the quota increases from all callees / subordinate instantiations, regardless of whether those are first calls or memoized).
If that holds, parallel analyses of the same instantiation (at different call sites) should also always result in the same quota increase.
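A rough sketch of the bookkeeping described in the two points above, assuming hypothetical data structures: `Instantiation`, `CallSite`, and `chargeMemoizedCall` are made-up names for illustration, not existing compiler types.

```zig
const Instantiation = struct {
    /// Eval branches consumed when this instance was first analyzed,
    /// including increases charged by its own callees (first or memoized).
    quota_increase: u32,
};

const CallSite = struct {
    quota_limit: u32, // quota in effect at this call site (via @setEvalBranchQuota)
    quota_used: u32, // branches already consumed in this scope
};

/// Charge a memoized call to an already-analyzed instance against the
/// calling scope's quota; fail compilation if the saved increase does not fit.
fn chargeMemoizedCall(site: *CallSite, inst: Instantiation) error{EvalBranchQuotaExceeded}!void {
    const new_used = site.quota_used + inst.quota_increase;
    if (new_used > site.quota_limit) return error.EvalBranchQuotaExceeded;
    site.quota_used = new_used;
}
```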
EDIT: One edge case I just realized arises if we care about the consistency of generated error messages.
Because the first instantiation of a function might succeed / get further under its higher quota, a subsequent instantiation would have to be able to invalidate (remove) any error messages that aren't reachable with the lower branch quota.
(Afaiu currently the ordering of error messages with parallelized semantic analysis might already be non-deterministic.)
To fix this, the compiler might have to extend compile errors (if they don't already carry such information) with:
- a reference to the function instantiation, so that its final (lowest) branch quota can be checked to determine whether the error should be invalidated, and
- the instantiation-local eval branch quota increase at the time the error was generated (to compare that final lowest quota against).
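A minimal sketch of what such an error record might carry, reusing the hypothetical `Instantiation` type from the sketch above; these are not existing compiler fields.

```zig
const ComptimeError = struct {
    /// The instantiation whose analysis produced this error.
    instantiation: *Instantiation,
    /// Instantiation-local quota consumed at the moment the error was
    /// emitted; if the instance's final (lowest) quota ends up below this,
    /// analysis under that quota would have aborted earlier, so the error
    /// was never reachable and should be invalidated.
    quota_used_at_error: u32,
    msg: []const u8,
};
```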
(Less-important side note: I assume this would rapidly balloon the required quota values compared to current semantics - but the quota is just a made-up number, so this doesn't really matter.
However, this increase might annoy some users, who would have to raise the quotas in their code to keep it compiling.
Therefore I'll link #16983 for ideas to drastically decrease the quota, which has a chance to mitigate that churn.)
mlugg changed the title from "design flaw: eval branch quota of generic function instances violates order indepdent analysis" to "design flaw: eval branch quota of generic function instances violates order independent analysis" on Jan 7, 2025