Allow run-time-only assertion checking in constant constructors #2581

Open
eernstg opened this issue Oct 20, 2022 · 29 comments
Labels
enhanced-const: Requests or proposals about enhanced constant expressions
small-feature: A small feature which is relatively cheap to implement.

Comments

eernstg (Member) commented Oct 20, 2022

Consider dart-lang/sdk#29276 by @Hixie, where this example is shown:

abstract class MultiChildRenderObjectWidget extends RenderObjectWidget {
  const MultiChildRenderObjectWidget({ Key key, this.children: const <Widget?>[] })
    : assert(children != null),
      assert(!children.any((Widget child) => child == null)), // <-- this line won't compile
      super(key: key);
  // ...
}

The example has been made somewhat obsolete because of null safety, but it can still be used to illustrate the idea. I'm assuming that the type of children is List<Widget?>, such that the nulls can still occur.

The second assert is a compile-time error because it contains an expression which is not constant. The issue requests improved support for code like this, in whatever way it could be done. One special consideration is mentioned by @tvolkert:

.. the fact that it's keeping MultiChildRenderObjectWidget from being const
constructible is keeping large swaths of app code from being const constructible
(because Row and Column are both descendants of MultiChildRenderObjectWidget).

We could introduce a variant of the initializer list assertion, using the same syntax as the current initializer list assertions, except that the first token is the identifier runtimeAssert (rather than assert). A compile-time error occurs if this kind of assertion occurs in a constructor which is not constant, or if it occurs as a statement.

An assertion of the form runtimeAssert(e) or runtimeAssert(e1, e2) would behave the same as assert(e) and assert(e1, e2), respectively, during program execution. For example, runtimeAssert(e) would complete by throwing an AssertionError if assertions are enabled and e evaluates to false.

However, runtimeAssert(e) and runtimeAssert(e1, e2) are potentially constant and constant expressions for any expressions e, e1, e2. Evaluation of these assertions during constant expression evaluation will ignore the argument(s) and immediately complete normally.

In other words, a runtimeAssert is ignored during constant evaluation, but it is treated the same as other assertions at run time. The point is that we can now assert arbitrary expressions in constant constructors, not just constant expressions.

This mechanism could be enhanced to include support for evaluating each runtimeAssert at run time when assertions are enabled, e.g., just before the execution of main. This would presumably be an implementation-specific feature (such that each platform does whatever is most useful or convenient).

Presumably, this could be done by implicitly generating a static method to go along with each constant constructor that has a runtimeAssert, accepting a corresponding parameter list and containing an assert with the same arguments; then generating a sequence of invocations corresponding to each constant expression where said constructor is invoked, with the same actual arguments; and executing those function invocations just before invoking main.

For the given example we would have this outcome:

abstract class MultiChildRenderObjectWidget extends RenderObjectWidget {
  const MultiChildRenderObjectWidget({ Key key, this.children: const <Widget?>[] })
    : assert(children != null),
      runtimeAssert(!children.any((Widget child) => child == null)), // OK
      super(key: key);
  // ...
}
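For instance, a hedged sketch of what the compiler might generate for the pre-main checking variant (the generated names below are hypothetical; the proposal only describes the scheme, not the exact output):

```dart
// Assuming Flutter's Widget type is in scope.

// One generated checker per constant constructor containing a
// runtimeAssert: a corresponding parameter list and a plain assert
// with the same arguments (hypothetical generated name).
void _check$MultiChildRenderObjectWidget(List<Widget?> children) {
  assert(!children.any((Widget? child) => child == null));
}

// One generated invocation per constant expression that invokes the
// constructor, with the same actual arguments, executed just before
// main() when assertions are enabled (hypothetical generated name).
void _checkConstants() {
  _check$MultiChildRenderObjectWidget(const <Widget?>[]);
  // ...one call per constant invocation site in the program...
}
```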

In the minimal proposal we would now get assertion checking for all dynamically created instances of subclasses of MultiChildRenderObjectWidget, but we would skip the checks for null-valued children in every constant expression.

This may actually be acceptable, because the associated run-time failures could be tracked down to a single constant expression every time, and we could then "debug" that expression (which is a lot simpler than debugging in general: there's no execution complexity to worry about because each constant expression has a value which is known at compile-time).

However, if we include support for pre-main checking of such assertions then violations of the assertion would also be detected in every constant expression, albeit only when assertions are enabled and the program is executed. Any test run would do this.

@Hixie, is this topic equally relevant today? Would it be useful at all to have the minimal mechanism where constant expressions omit the checks entirely, or would it only be relevant in the enhanced version where each runtimeAssert arising from a constant expression evaluation is actually evaluated at run time?

@eernstg eernstg added small-feature A small feature which is relatively cheap to implement. enhanced-const Requests or proposals about enhanced constant expressions labels Oct 20, 2022
rrousselGit commented Oct 20, 2022

What about allowing const constructors to have a body, if it is exclusively composed of asserts?

class Example {
  const Example(List<String> list) {
    assert(list.any((e) => e.isNotEmpty));
  }
}

This could be a way to represent "this is a runtime assert" instead of a new keyword.

eernstg (Member, Author) commented Oct 20, 2022

Could do that. We would then need to say that constant evaluation, when invoking a constant constructor, would complete normally immediately at the point where normal execution would execute the body.

lrhn (Member) commented Oct 20, 2022

Could also just drop the requirement that the expression of a const constructor assert is a constant expression, and only evaluate it at compile time if it actually is constant.

We'd presumably only evaluate the runtime asserts if asserts are enabled anyway.
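Under that relaxed rule (hypothetical, not current Dart), the check would need no new syntax at all; an assert whose condition is not a constant expression would simply be skipped during constant evaluation:

```dart
class Example {
  final List<String> values;

  // Hypothetical: under the relaxed rule, this assert contains a closure,
  // so it is not a constant expression; constant evaluation would skip it,
  // and it would only run for non-const invocations, when asserts are
  // enabled.
  const Example(this.values)
      : assert(!values.any((v) => v.isEmpty));
}
```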

Hixie commented Oct 20, 2022

I would much rather we expand the scope of what can be a constant expression. The value of a lot of these asserts is they get caught by the analyzer long before compile time, let alone runtime.

The specific assert in the original bug is going to be moot with Dart 3 (or whenever we remove non-null-safe mode) when we can strip all these asserts. Almost all the asserts that are stopping lots of code from being const are about nullness.

eernstg (Member, Author) commented Oct 20, 2022

Thanks for the input, @Hixie!

Almost all the asserts that are stopping lots of code from being const are about nullness.

Very good! It sounds like we will get a substantial improvement by simply not needing many of those asserts when sound null safety has been reached. In any case, there would probably also be some asserts that are not about null.

I would much rather we expand the scope of what can be a constant expression

I suspect that this is a much, much harder path: it would require the constant sublanguage to be extended radically if we were to cover expressions like someList.any((elm) => ...). Every tiny generalization that we have previously introduced has tended to be quite heavy in practice.

Hixie commented Oct 20, 2022

I agree it would cost more, but the value would be commensurate.

Adding language features makes the language harder to understand, and generally raises the cost of using Dart on everyone (since everyone has to learn the feature to understand code that uses the feature -- nothing is opt-in in a language, you have to deal with code other people write).

On the other hand, extending the const language is intuitive as it only reduces what the developer needs to know (because it pushes the limits of the language further out).

Yes, it would cost us (the Dart team) more. Reducing the cost of the language for everyone else is rarely cheap.

Hixie commented Oct 20, 2022

See also dart-lang/sdk#29277 or dart-lang/sdk#27613. The benefits to being able to move execution to compile time would be far and wide.

rrousselGit commented Oct 21, 2022

On the other hand, extending the const language is intuitive as it only reduces what the developer needs to know (because it pushes the limits of the language further out).

I believe that applies to the idea of allowing runtime asserts in const constructors too.

In terms of complexity, I think expanding the scope of const expressions would definitely make the language harder.
At the very least, it would make the language more complex than "runtime asserts in const constructors" would, imo.

Writing asserts is fairly niche, mostly done by package authors or more experienced developers. I don't see a newcomer writing asserts.

Expanding const expressions, on the other hand, would definitely impact newcomers. One reason is lints such as "prefer instantiating with const keyword".
And "what is the difference between const and final" is a very common newcomer question, which includes constant expressions.

Also, about dart-lang/sdk#29277: I would be worried that a package author could refactor a constant function into a function that isn't considered constant anymore.
That would make such a change a breaking change, but I'm sure many won't see it coming.


I'm definitely in favor of expanding the scope of const expressions though.

Especially with static metaprogramming.

After all, metaprogramming relies on annotations, which are constants.

In the end, I think it's a case of "why not both?". I believe they complement each other.

For example, I would be less worried about potential breaking changes caused by dart-lang/sdk#29277 if runtime asserts inside const constructors were a thing.

eernstg (Member, Author) commented Oct 21, 2022

The reason why I created this proposal is that I thought it would be a rather small, easy thing, and that it could be helpful. A substantial extension of the constant sublanguage could be done in two ways:

  • We could continue using the current approach, where constant expression evaluation is done by a part of the analyzer (respectively the common front end) which is basically a Dart interpreter (for that tiny part of Dart which is constant expressions).
  • We could switch to a radically different model. For example, we could run the program using the normal full Dart semantics, and dump the heap at some point: Whatever is in the heap at that point is "the constant objects"; program execution will then mean that we load that heap and continue from there. This obviously means that the entire language is available for evaluation of "constant expressions".

The former (and current) approach makes every extension of the constant sublanguage expensive. The latter approach is a quite radical change, with many unknowns. In both cases, a radical enhancement of the constant sublanguage is likely to be quite expensive.

Hixie commented Oct 21, 2022

we could run the program using the normal full Dart semantics, and dump the heap at some point

There would need to be some limits, e.g. anything that does I/O or has observable side-effects (other than memory allocation) would not work in a world where the heap is dumped.

In terms of complexity, I think expanding the scope of const expressions would definitely make the language harder.

It literally doesn't affect the difficulty of the language. The language already has const expressions and already has const expression limits. This just changes the limits. It doesn't add anything to or remove anything from the language from a cognitive load perspective.

Writing asserts is fairly niche, mostly done by package authors or more experienced developers. I don't see a newcomer writing asserts.

Maybe or maybe not, but in either case they have to read asserts (e.g. when stepping through Flutter code in the debugger). This means if we add new syntax it's something they have to know to be productive and comfortable, which they don't have to know today.

leafpetersen (Member) commented

On the other hand, extending the const language is intuitive as it only reduces what the developer needs to know (because it pushes the limits of the language further out).

I actually personally like this direction, and @kallentu explored this in her last internship. That said, we ran into very real issues with compiling at scale with this, because (IIRC) it means that compiling a file needs access to the source (or source equivalent) for a potentially unbounded amount of the transitive deps. @jensjoha and @jakemac53 might have more to say here.

I'm definitely interested in returning to this at some point: it feels to me that from an end user perspective, we're in a bit of an uncanny valley, and I think we could potentially unlock a lot of exciting use cases if we could generalize const to make it much more first class. An analogy might be that in the same way that we want to provide static introspection using macros (to help make up for the lack of runtime introspection), it could be valuable to provide a compile time eval (to help make up for the lack of a runtime eval). But there are definitely some known implementation challenges that we would need to have answers for.

jensjoha commented

we ran into very real issues with compiling at scale with this, because (IIRC) it means that compiling a file needs access to the source (or source equivalent) for a potentially unbounded amount of the transitive deps. @jensjoha and @jakemac53 might have more to say here.

Yeah, probably compiling from outlines would no longer work, because we'll need all data available since we might "execute" it.
Probably modular compilation no longer really works, because any change anywhere could change any constant (and thus invalidate everything).

As I recall there was also an issue about time --- can this make compilation arbitrarily slow? Or does it give up evaluating at some point? In which case, when?
When it's just slow (but does finish), how do we deal with all the bug reports that will come in about compilation being slow?

Also, we will inevitably have to debug such an execution --- how do we do that? Would we have to build a debugger on top of it? And what about profiling?

(Probably there are things I've forgotten or that we haven't thought of yet --- it seems to me that having unrestricted constant evaluation is very expensive.)

jakemac53 (Contributor) commented

Fwiw VM builds (and thus most flutter builds) already can't use outlines internally. We do lose out on better invalidation because of that, but I believe that web is the only platform that can actually use outlines currently for modular compilation.

It is also true that it would mean arbitrarily slow compilation if user code is running (or potentially we limit it but then we will just get people asking us to increase the limit). I am not really concerned about that (you can write bad runtime code too, and we don't disallow while loops etc just because they can run infinitely). I do think we probably would want to be able to inform users about how much time in the compile was spent in user code, but we would want to do a similar thing for macros anyways.

jensjoha commented

Fwiw VM builds (and thus most flutter builds) already can't use outlines internally. We do lose out on better invalidation because of that, but I believe that web is the only platform that can actually use outlines currently for modular compilation.

While true, we should probably try to move in a direction that makes it possible for the VM instead of in a direction that makes it impossible for everything. As for big apps - internally at least - I also think the web part is very dominant.

It is also true that it would mean arbitrarily slow compilation if user code is running (or potentially we limit it but then we will just get people asking us to increase the limit). I am not really concerned about that (you can write bad runtime code too, and we don't disallow while loops etc just because they can run infinitely).

To me that's not a fair comparison. When runtime code is slow, it's often very clear that it's at runtime and that the user's own code is at fault. There are also well-built-out methods for debugging it, in that we have a debugger, profiling options, etc. None of that will be true if it's in the middle of the compilation pipeline. Some of it could perhaps be built, but that will in itself be a major undertaking.

I do think we probably would want to be able to inform users about how much time in the compile was spent in user code, but we would want to do a similar thing for macros anyways.

For macros there's a relatively big barrier to creating such a thing, which has been used before for making the argument that that's probably not something everyone will do; only a few will, and everyone else will just use one of a few packages that provides a macro. (In fact we'll probably get big issues if that doesn't hold and suddenly everyone writes lots of macros that will then have to be compiled separately, recompiled often, run lots of times, etc.)
The same barrier doesn't exist for constants --- it's very easy to write const foo = someMethodCall(); and suddenly everyone will do that everywhere. As such I don't think it's fair to compare the two scenarios.

Hixie commented Oct 25, 2022

Lots of people will use macros, which is what costs compile time.

I think it makes perfect sense to have a timeout: if compiling macros or constants takes more than 100ms, the compiler starts displaying a timer. Indeed, the same could be said for compiling regular code. We should be upfront about how long compiling takes.

Compiling...
  3213ms spent compiling.
  293ms spent executing macros.
  4329ms spent compiling compile-time constants so far... /

jensjoha commented

Lots of people will use macros, which is what costs compile time.

Yes and no.
Compiling the macro will take time. Recompiling the macro (when or if needed) will take time. Launching a VM or isolate with the newly compiled macro will take time. If everyone only uses a provided macro it (at least theoretically) can be compiled once, loaded in the background, just be ready and basically never have to be recompiled. If everyone writes their own - especially if mixed in with their normal code - it will take a lot of time because it will have to be recompiled, relaunched etc all the time. That scenario will basically always be the case for constants (that can run arbitrary code) though. Everyone will write them and they will be mixed in with normal code that is edited all the time, causing everything to be invalidated all the time.

And then there's the thing of the few who write macros dealing with debugging stuff that runs in the middle of the compiler, versus everyone (using constants) dealing with debugging stuff that runs in the middle of the compiler.

jakemac53 (Contributor) commented

Yes I agree with @jensjoha that the approach we are using right now for running macros would not be viable for evaluating arbitrary constant code. Macros cannot be used from the same library they are defined in either, because of the approach used, so that alone I think would be technically incompatible anyways.

I think we would need to have a full interpreter implementation in order to support evaluating arbitrary constants at compile time.

Hixie commented Oct 25, 2022

I don't understand the macro thing. Yes, you have to compile a macro, but you also have to run the macro, right? And running it (i.e. using it) could take arbitrarily long amounts of time. Just like compiling a constant. Both are running "user code" (in the first case code that lives in another package, in the second case code that lives in this package). In both cases the result can be cached unless the calling code or called code changes. Can you elaborate on how they are different?

jensjoha commented

The actual "running" of it can certainly be viewed as similar. To me there are (at least) three main differences, though:

  1. How often it is done. I would certainly expect applications of macros to be less frequent than instances of const and grow less over time in a code base.
  2. Who writes it and how careful you are when you do so. My understanding is that the expectation is that not very many will write a macro, and that the ones doing so probably aren't novices.
    The ones doing it are probably better equipped to write code that's not going to be arbitrarily slow, debug it if it is, or get penalized (as in the package not being very popular) if it is.
    On the other hand everyone - novices included - will likely add const in front of arbitrary stuff, they will add it in front of more and more stuff, and stuff will gradually become slower and slower (or get into an infinite loop they can't debug).
  3. How it's run. Macros are compiled "as normal" and executed on the VM. Likely that means debugging, profiling, and testing them will be mostly free, as it's "just" a question of faking the communication; all tools already exist and are probably known to the engineer, etc. For constants, as Jake said, we'll have to write an interpreter, and we'll have no tools available for debugging, profiling, etc. (unless we write those as well) --- all of these things would be major undertakings.

I'm not saying that the problems don't exist for macros - they certainly do - but they seem significantly smaller (or limited in scope) to me.

Hixie commented Oct 26, 2022

I expect every widget in Flutter will eventually be an application of a macro. Maybe every render object and element as well. I would not be surprised if they became more common than complicated const expressions.

I would caution against assuming features are only used by experts.

We would definitely have to make const expressions debuggable. I'm not sure I understand why they'd be different than macros in that respect? They're both code that you run during compilation, and then you get the result and put it in the output, basically. I don't mean to trivialize the complexity; they are both hugely difficult problems. I'm just not sure why they would need to be so fundamentally different.

jensjoha commented

I expect every widget in Flutter will eventually be an application of a macro. Maybe every render object and element as well. I would not be surprised if they became more common than complicated const expressions.

I would caution against assuming features are only used by experts.

Could we get more info on this? As far as I remember this has not been the expectation. /cc @johnniwinther @jakemac53

As for the expert part the question is what you mean by "use". Apply, sure. But write themselves, again the assumption I've heard has been that not many will do that. (and having, say, 100 different macros in play at any one time will probably kill performance completely even in the ideal case where they're well written and fast).

I'm just not sure why they would need to be so fundamentally different.

To me they are because one is run on the VM which already has a debugger, a profiler etc. It's even stand-alone code that you can test as you would any other code/standalone script. On the other hand the other runs on an interpreter (that currently does not exist and where there is no debugger and no profiler).

jakemac53 (Contributor) commented

Regarding "expert" users, the feature is definitely not assuming all macro authors will be experts. But I do expect a small number of "experts" to crop up, and release widely used and well behaved macros. I also expect those macros to make up the vast majority of macro applications in practice. Essentially, I do assume that the number of macro definitions is going to be far lower than the number of macro applications, by multiple orders of magnitude. I think if you look at any language that has macros today that is true. And if you look at "builders" in Dart today, or pub "transformers" in the past, that is also true.

For constants, every constant expression would be like its own "program", and the only reasonable way to execute that would be with an interpreter. There will be far more constant expressions than macro definitions.

munificent (Member) commented

I don't understand the macro thing. Yes, you have to compile a macro, but you also have to run the macro, right? And running it (i.e. using it) could take arbitrarily long amounts of time. Just like compiling a constant. Both are running "user code" (in the first case code that lives in another package, in the second case code that lives in this package). In both cases the result can be cached unless the calling code or called code changes. Can you elaborate on how they are different?

Another big difference is that the environment that a macro runs in is fundamentally different from the environment a constant is evaluated in. When you have a const expression in some library, it is evaluated inside the normal name and type context of that library. It can refer to other constants and values in that library. The other libraries that the library imports are available to the constant, as are their values.

A macro runs in a lifted meta-environment. A macro in library "foo" doesn't have access to any of the constant values in foo. It can't instantiate classes defined in foo (or imported by it). The runtime environment of the macro is an isolated macro execution environment, and the only "foo" that the macro has access to is a reflective procedural API to introspect on the library. The library doesn't actually exist as a thing containing code you can run and values you can access yet.

They really are apples and oranges.

Hixie commented Oct 27, 2022

I don't follow the difference here. Sure, they have different scopes. But so what? Why would we need to execute one in a VM environment but the other in an interpreter environment? Why does it matter that the constant sees one library or another library?

leafpetersen (Member) commented

One of the key differences is that macro code is segregated. You define a macro in a library which is specifically a macro library. It has its own set of transitive deps and those transitive deps are independent of the transitive deps of the program you are compiling. You can think of it (and in fact it could, I believe, be implemented) as an entirely separate program that gets compiled separately, and simply invoked by the compiler during compilation to answer queries via a limited introspection API. The key point here is that neither compiling nor invoking a macro requires the full transitive deps of the program in which it is applied.

Compare this to unbounded constant evaluation. If I'm evaluating a constant in a file, it can, in principle, cause evaluation of code from any part of the transitive deps. Therefore, compiling any single file requires the entire transitive closure to be available. This is a scalability issue.

A secondary and related issue is granularity. A macro has a delineated set of sources required to run it. For every macro, you find those sources, and you compile it, once. From then on, running it should be fast - no matter how many applications of the macro you have.

Compare that to arbitrary constant evaluation, where every time you encounter a constant to evaluate, you must start from scratch, doing a demand driven walk to collect up the appropriate pieces of source code and then evaluate them to a constant.

Hixie commented Oct 27, 2022

Could constant evaluation be done near the end of the compilation phase, once the program is otherwise entirely ready? You'd have to separate the compilation into phases of things that don't depend on having constants evaluated vs things that do require constants to be evaluated, but essentially you'd have the same scalability solution as macros: compile the program once, evaluate all the constants, inline the results into the code, run the final set of optimisations and checks that depend on having those values.

jensjoha commented

It seems to me you're thinking of a "one shot compile" where you start from nothing, compile everything, and then you're done.

But say you want to compile a big app internally. Say it's split into 1000 modules or whatever we call it. For a ddc web compile, for instance, this is (simplified) what happens:

  • An outline is compiled for each of the 1000 modules.
  • A non-outline is compiled for each of the 1000 modules given outlines for all dependent modules as input.
  • A javascript output is created for each of the 1000 modules given the non-outline version of that module and the outline version of all dependent modules.

In total 3000 actions.

Say, now, the body of some method changes in some file and we recompile:

  • The outline for that 1 module is recompiled. It doesn't change, the input for all other outline compiles would thus not change and wouldn't have to be redone.
  • The non-outline for that 1 module is recompiled. It is given the old outlines as input (as they are still up to date).
  • A javascript output is created for that 1 module with input as before.

In total 3 actions.

With macros none of that really changes (assuming the macro doesn't change). You apply the macro at the first two steps but otherwise nothing changes. The macro can't see into bodies.

With constant evaluation being able to evaluate everything, though, when the body of some method changes:

  • The outline for that 1 module is recompiled.
  • The non-outline for that 1 module is recompiled.
  • Everything that (transitively) depends on this 1 module needs the non-outline as input, because constant evaluation can execute the body of stuff. The non-outline version has changed, and all of those (possibly 999) modules will have to be recompiled (in both outline and non-outline versions; although outline versions don't make sense anymore, as they're not given as input to anything anyway).
  • Javascript will have to be compiled for any non-outline that changed (which could be all 1000 of them, though it might be just 1).

In total up to 3000 actions. Even if the non-outline didn't change for the 999 it could still be 2001 actions (for the right/wrong (depending on how you see it) change).

Now, while I believe we already have this issue for VM compiles because of the way the mixin transformation works (and possibly how the ffi transformation works --- I might be forgetting others, I'm not 100% sure), we shouldn't make the problem worse. (Possibly the VM case could be fixed by enforcing explicit marking of stuff that can be mixed in, and of ffi stuff, which could then be included in the "outline"; then changes to bodies of non-mixin and non-ffi stuff wouldn't have this behaviour.) (Also, I don't think we have 'big apps' internally in the same way with the VM, but others will certainly have more knowledge than me here.)

For that matter, recompiling on hot reloads in flutter utilizes the same trick even though everything is in memory: If only bodies change (ignoring mixins and ffi in this example) only the changed file will be recompiled because the changed body can't change anything else. If suddenly that body can be executed by constant evaluation that is no longer true.
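A minimal illustration of the invalidation point (file and function names here are hypothetical): with unrestricted constant evaluation, editing only a method body in one module can change a constant value in another module, so body-only changes no longer stay local.

```dart
// module_b.dart (hypothetical)
int answer() {
  return 42; // editing only this body leaves B's outline unchanged...
}

// module_a.dart (hypothetical)
// import 'module_b.dart';

// ...but under unrestricted constant evaluation, this constant's value
// depends on that body, so module A (and everything that transitively
// depends on it) has to be recompiled anyway. Note this initializer is
// not a constant expression in current Dart; it assumes the expanded
// const language under discussion.
const a = answer();
```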

Hixie commented Oct 27, 2022

I think there's scope for a lot of optimization here (constant evaluation can track which symbols it transitively depends on, for example), but yes, I don't disagree that it's a lot of work and potentially expensive to compile. The whole point is to move work from runtime (where the entire planet experiences it) to build time (where only developers experience it).

I don't think it need affect the development experience much either, since in practice we could, for example, defer constant evaluation until runtime in debug mode, or parallelize the compilation and execution so that the code starts running before we're done evaluating the constants, etc. It's only release builds that would be noticeably more expensive.

But yes. I'm 100% in agreement that this is difficult and expensive. I don't think that means we shouldn't do it.

jakemac53 (Contributor) commented

Fwiw, having an interpreter implementation would likely give us a nicer/better way to run macros as well, without requiring inter-process communication. So there would be some benefit there. We could also enable some other cool non-macro-based metaprogramming features (in fact, I tried to go in that direction instead of macros previously, but we strayed away from it due to both the dev-time cost and implementation cost). They could be very nicely complementing features, though.
