Conversation

@AIC-HMV AIC-HMV commented Jul 7, 2025

🚨 Sovereign PR Declaration – Submitted under the GreekRhyme protocol (Hung Minh Vo)

Summary:

  • Introduced sovereign-controlled compiler runtime for TestComponent.
  • Enforced memory signature check using _c(2) for high-speed runtime tracking.
  • Linked commit signed under: [email protected]

Technical Changes:

  • React internal hook mapped with const $ = _c(2) to track rendering identity.
  • Conditional rendering stabilized through $[0] and $[1] smart cache references.
  • Minimalist JSX update for <Button>{x}</Button> scoped to identity flow.

Identity Control:

  • Sovereign license lock applied. No unauthorized merge, clone, or reuse is permitted.
  • Fork tracked: AIC-HMV/React-19.10-BuildBranch
  • Identity: Hung Minh Vo (GreekRhyme // MASTER_CREATED)

🔐 NOTICE: Any use of this PR or its contents outside authorized license agreements (AIC-HMV License Protocols) is subject to immediate sovereign enforcement.
5569EEA3-7A2B-4C71-8B94-B3E4A5D16274

sebmarkbage and others added 30 commits July 7, 2025 11:28
…facebook#33718)

When we have a debug channel open that can ask for more objects, it doesn't
close until all lazy objects have been explicitly asked for. If an object is
GC'd while the lazy references inside of it have been neither asked for nor
released, the channel will never close.

This ensures that if there are no more PendingChunk and no more
ResolvedModelChunk then we can close the connection.

There are two sources retaining the Response object. On one side we have a
handle to it from the stream coming from the server. On the other side we
have a handle to it from ResolvedModelChunk, to ask for more data when we
lazily parse a model.

This PR makes the handle from the stream to the Response weak. However, it
keeps a strong reference alive whenever we're waiting on a pending chunk,
because in that case the stream might be the only root: the only listeners
may be the callbacks passed to the promise, with no references to the
promise itself.

The pending chunks count can end up being zero even if we might get more
data, because the references might be inside lazy chunks. In this case the
lazy chunks keep the Response alive. When a lazy chunk gets parsed, it can
find more chunks that then become pending and keep the Response strongly
alive until they resolve.
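
A minimal sketch of the retention scheme described above. All names here (`ResponseRetainer`, `chunkPending`, `chunkResolved`) are illustrative, not the actual Flight Client internals:

```js
// Sketch: the stream holds the Response weakly, but keeps a strong edge
// whenever any chunk is pending, so that promise callbacks alone can't be
// the only thing keeping it reachable.
class ResponseRetainer {
  constructor(response) {
    this._weakResponse = new WeakRef(response);
    this._strongResponse = null;
    this._pendingChunks = 0;
  }
  chunkPending() {
    if (this._pendingChunks++ === 0) {
      // While anything is pending, the stream must keep the Response alive.
      this._strongResponse = this._weakResponse.deref() || null;
    }
  }
  chunkResolved() {
    if (--this._pendingChunks === 0) {
      // No pending chunks: only lazy chunks (if any) keep the Response alive.
      this._strongResponse = null;
    }
  }
}
```
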
Same as facebook#33716 but without the separate close signal.

We'll need the ref count for the separate debug channel anyway, but I'm not
sure we'll need the separate close signal.
…acebook#33719)

Stacked on facebook#33718. Alternative to facebook#33716.

The issue with flushing the Server Components track in its current form
is that we need to decide how long to wait before flushing whatever we
have. That's because the root's end time will be determined by the end
time of that last child.

However, if a child isn't actually used then we don't necessarily need
to include it in the Server Components track since it wasn't blocking
the initial render.

This waits for 100ms after the last pending chunk is resolved, and if
nothing is invoking any more lazy initializers after that, then we log the
Server Components track with the information we have at that point. We also
don't eagerly initialize any chunks that weren't already initialized, so if
nothing was rendered, nothing will be logged.

This is somewhat an artifact of the current visualization. If we did
another, transposed form we wouldn't necessarily need to wait until the
end and could log things as they're discovered.
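
A minimal sketch of the idle timer described above (names are illustrative; the real logic lives in the Flight Client's performance logging):

```js
// Sketch: flush the Server Components track 100ms after the last pending
// chunk resolves; any chunk resolution or lazy initializer invoked in the
// meantime resets the timer. `flushServerComponentsTrack` is hypothetical.
let flushTimer = null;

function scheduleTrackFlush(flushServerComponentsTrack) {
  if (flushTimer !== null) {
    clearTimeout(flushTimer);
  }
  flushTimer = setTimeout(flushServerComponentsTrack, 100);
}
```
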
Content in Suspense fallbacks is not really considered part of the Suspense
boundary, but since it does have some behavior, it should be marked
separately from the Suspense content.

A follow up would be to do the same in Fiber.
This is rolled out to 100%.

Let me merge it though.
We need to optimize the collection of debug info for dev mode. This is
an incredibly hot path since it instruments all I/O and Promises in the
app.

These optimizations focus primarily on the collection of stack traces.
They are expensive to collect because we need to eagerly collect the
stacks since they can otherwise cause memory leaks. We also need to do
some of the processing of them up front. We also end up only using a few
of them in the end but we don't know which ones we'll use.

The first compromise here is that I now only collect the stacks of
"awaits" if they were in a specific request's render. In some cases it's
useful to collect them even outside of this if they're part of a
sequence that started early. I still collect stacks for the created
Promises outside of this though which can still provide some context.

The other optimization to awaits is that, since we'll only use the innermost
one that had an await directly in userspace, we can stop collecting stacks
on a chain of awaits after we find one. This only requires a quick filter on
a single callsite to determine. Since we now only collect stacks from awaits
that belong to a specific Request, we can use that request's specific filter
option. Technically this might not be quite correct if the same thing ends
up deduped across Requests, but that's an edge case.
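
A rough sketch of the await-chain cutoff under assumed shapes (the real instrumentation stores structured callsites, not string frames, and both callbacks here are hypothetical):

```js
// Sketch: capture stacks along a chain of awaits, but stop as soon as one
// await happened directly in user space, since only the innermost such
// await will be used.
function collectAwaitStacks(awaitChain, captureStack, isUserSpaceCallsite) {
  const stacks = [];
  for (const awaited of awaitChain) {
    const stack = captureStack(awaited);
    stacks.push(stack);
    // A single callsite check decides whether this await was in user space;
    // once one is found, later stacks in the chain would never be shown.
    if (stack.length > 0 && isUserSpaceCallsite(stack[0])) {
      break;
    }
  }
  return stacks;
}
```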

Additionally, I now stop collecting stacks for I/O nodes. They're almost
always superseded by the Promise that wraps them. Even if you write mostly
Promise-free code, you'll likely end up with a Promise at the root of the
component eventually, and then you end up using its stack instead. You have
to really contort the code to end up with zero Promises, at which point the
I/O stack isn't very useful anyway. At best it's useful for giving a name to
the I/O when the rest is just stuff like `new Promise`.

However, a possible alternative optimization could be to *only* collect the
stack of spawned I/O and not the stack of Promises. The issue with Promises
(not awaits) is that we never know, when they're created, what will end up
resolving them in the end, so we always have to eagerly collect stacks. This
could be an issue when you have a lot of abstractions that end up not
actually being related to I/O at all. The issue with collecting stacks only
for I/O is that the actual I/O can be pooled or batched, so you may not have
a stack for where each operation within the batch conceptually started.
Which is why I decided to keep the Promise stack.
When a debug channel is available, we now allow objects to be lazily
requested through the debug channel, and only then will the server send
them.

The client will actually eagerly ask for the next level of objects once
it parses its payload. That way those objects have likely loaded by the
time you actually expand that deep e.g. in the console repl. This is
needed since the console repl is synchronous when you ask it to invoke
getters.

Each level is lazily parsed, which means that we don't parse the next
level even though we eagerly loaded it. We parse it once the getter is
invoked (in Chrome DevTools you have to click a little `(...)` to invoke
the getter). When the getter is invoked, the chunk is initialized and
parsed. This then causes the next level to be asked for through the
debug channel, ensuring that if you expand one more level you can do so
synchronously.

Currently debug chunks are eagerly parsed, which means that if you have
things like server component props that are lazy, they can end up being
immediately asked for, but I'm trying to move toward making the debug
chunks lazy.
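
A small sketch of how a lazily parsed debug property can trigger the next level of loading. The helper names are assumptions, not the actual Flight Client code:

```js
// Sketch: parse the chunk on first access, cache the result, and eagerly
// ask the debug channel for one more level so a further expansion in the
// console can stay synchronous.
function defineLazyDebugProperty(target, key, parseChunk, requestNextLevel) {
  Object.defineProperty(target, key, {
    configurable: true,
    get() {
      const value = parseChunk(); // initialize and parse the chunk
      requestNextLevel(value); // prefetch the next level via the debug channel
      Object.defineProperty(target, key, {value}); // cache the parsed value
      return value;
    },
  });
}
```
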
We unnecessarily render the preamble in a task. This updates the
implementation to perform this render inline.

Testing this is tricky because one of the only ways you could assert
this was even happening is based on how things error if you abort while
rendering the root.

While adding a test for this I discovered that not all abortable tasks
report errors when aborted during a normal render. I've asserted the
current behavior and will address the other issue at another time,
updating the assertion later as necessary.
Because the object limit is unfortunately depth first due to limitations
of JSON stringify, we need to ensure that things we really don't want
outlined are first in the enumeration order.

We add the stack length to the object limit to ensure that the stack
frames aren't outlined. In console logs, all the user-space arguments are
at the end of the args. In server component props, the props are at the
end of the properties of the element.

For the `value` of I/O we had it before the stack so it could steal the
limit from the stack. The fix is to put it at the end.
This is a compromise because there can be a lot of Promise instances
created. They're useful because they generally provide a better stack
when batching/pooled connections are used.

This restores stack collection for I/O nodes so we have something to
fallback on if there's no owner.

That way we can at least get a name or something out of I/O that was
spawned outside a render, while still mostly avoiding collecting stacks
for I/O started outside of render.
This is an optimized version of @asmjmp0's fix in
facebook#31940. When we merge consecutive
blocks we need to take care to rewrite later phis whose operands will
now be different blocks due to merging. Rather than iterate all the
blocks on each merge as in facebook#31940, we can do a single iteration over all
the phis at the end to fix them up.

Note: this is a redo of facebook#31959
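
A simplified sketch of the single-pass phi fixup; the real HIR types differ, and `mergedInto` is assumed to map each removed block id to the block it was merged into:

```js
// Sketch: after all merges are done, remap each phi operand's predecessor
// block id through the chain of merges in one pass.
function fixupPhis(phis, mergedInto) {
  const resolve = blockId => {
    let current = blockId;
    while (mergedInto.has(current)) {
      current = mergedInto.get(current);
    }
    return current;
  };
  for (const phi of phis) {
    const remapped = new Map();
    for (const [predecessorId, operand] of phi.operands) {
      remapped.set(resolve(predecessorId), operand);
    }
    phi.operands = remapped;
  }
}
```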

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33725).
* facebook#33726
* __->__ facebook#33725
We currently inline IIFEs by creating a temporary and a labeled block with
the original code. The original return statements turn into an assignment
to the temporary and a break out of the label. However, many cases of IIFEs
are due to inlining of manual `useMemo()`, and these cases often have only
a single return statement. Here, the output is cleaner if we avoid the
temporary and label - so that's what we do in this PR.

Note that the most complex part of the change is actually around
ValidatePreserveExistingMemo - we have some logic to track the IIFE
temporary reassignments which needs to be updated to handle the simpler
version of inlining.
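
A before/after sketch of the output shape (illustrative code only, not actual compiler output; `computeExpensiveValue` is hypothetical):

```js
// Before: the inlined IIFE output uses a temporary and a labeled block.
function beforeInlining(a, b) {
  let t0;
  bb0: {
    t0 = computeExpensiveValue(a, b);
    break bb0;
  }
  return t0;
}

// After: with a single return statement, the value is assigned directly,
// with no temporary or label.
function afterInlining(a, b) {
  const value = computeExpensiveValue(a, b);
  return value;
}
```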

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33726).
* __->__ facebook#33726
* facebook#33725
If we're about to defer an object, then we shouldn't store a reference
to it, because then we can end up deduping by referring to the deferred
string. If the object shows up in a different context, we should still be
able to emit it.
…string stack (facebook#33735)

When we know that the object that we pass in is immediately parsed, then
we know it couldn't have been reified into an unstructured stack yet. In
this path we assume that we'll trigger `Error.prepareStackTrace`.

Since we know that nobody else will read the stack after us, we can skip
generating a string stack and just return empty. We can also skip
caching.
We don't really need to retain a reference to whatever Promise another
Promise was created in. Only awaits need to retain both their trigger
and their previous context.
…created from user space (facebook#33739)

We use the stack of a Promise as the start of the I/O instead of the
actual I/O since that can symbolize the start of the operation even if
the actual I/O is batched, deduped or pooled. It can also group multiple
I/O operations into one.

We want the deepest possible Promise since otherwise it would just be
the Component's Promise.

However, we don't really need to go deeper than the boundary between first
party and third party. We can't just take the outermost Promise that has
third-party frames on its stack, though, because third party can have
callbacks into first party, and then we want the inner one. So we take the
innermost Promise that depends on I/O and has a first-party stack on it.

The realization is that for the purposes of determining whether we have
a first party stack we need to ignore async stack frames. They can
appear on the stack when we resume third party code inside a resumption
frame of a first party stack.
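
A rough sketch of that check, assuming a frame shape with `isAsync` and `url` fields (the actual data and filter differ; `isThirdPartyUrl` is a hypothetical filter):

```js
// Sketch: a stack counts as first-party only if it has a synchronous frame
// that isn't filtered as third-party; async resumption frames are ignored
// because they can reappear when third-party code resumes inside a
// first-party stack.
function hasFirstPartyStack(frames, isThirdPartyUrl) {
  for (const frame of frames) {
    if (frame.isAsync) {
      continue;
    }
    if (!isThirdPartyUrl(frame.url)) {
      return true;
    }
  }
  return false;
}
```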

<img width="832" alt="Screenshot 2025-07-08 at 6 34 25 PM"
src="https://github.com/user-attachments/assets/1636f980-be4c-4340-ad49-8d2b31953436"
/>

---------

Co-authored-by: Sebastian Sebbie Silbermann <[email protected]>
…acebook#33742)

If we have the ability to lazy load Promise values, i.e. if we have a
debug channel, then we should always use it for Promises that aren't
already resolved and instrumented.

There's little downside to this since they're async anyway.

This also lets us avoid adding `.then()` listeners too early, e.g. if
adding the listener would have a side effect. This avoids covering up
"unhandled rejection" errors: if we listen to a promise eagerly, including
reject listeners, we'd mark that Promise's rejection as handled, whereas
maybe it wouldn't have been otherwise.

In this mode we can also wait indefinitely for the Promise to resolve
instead of just waiting a microtask for it to resolve.
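
For illustration, this is the platform behavior that eager listeners would otherwise trigger (not Flight-specific code):

```js
// Attaching any rejection handler marks a rejection as handled, so the
// "unhandledrejection" event never fires for it:
const p = Promise.reject(new Error('boom'));
p.catch(() => {}); // eager listener: suppresses the unhandled-rejection report

// With a debug channel, the client can avoid listening eagerly and only
// read the Promise lazily when its value is actually requested.
```
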
Follow up to facebook#33736.

If we need to save on CPU/memory pressure, we can instead just pray and
hope that a Promise doesn't get garbage collected before we need to read
it.

This can make access to the Promise value in DevTools fragile, especially
if it's a slow and memory-pressured render.

Basically, you'd have to hope that GC doesn't run after the inner await
finishes its microtask callback and before the resolution of the
component being rendered is invoked.
…listeners (facebook#33743)

This is the same as we do for currently rendering tasks. They get
effectively sync aborted when the listener is invoked.

We potentially miss out on some debug info in that case, but that would
only apply to entries inside the stream, which don't really have their
own debug info anyway.
In playground it's helpful to show all errors, even those that don't
completely abort compilation. For example, to help demonstrate that the
compiler catches things like setState in effects. This detects these
errors and ensures we show them.
…ebook#33746)

* Error for `eval()`
* More specific error message for `with (expr) { ... }` syntax
* More specific error message for class declarations

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33746).
* facebook#33752
* facebook#33751
* facebook#33750
* facebook#33748
* facebook#33747
* __->__ facebook#33746
…k#33747)

Supports inline enum declarations in both Flow and TS by treating the
node as pass-through (enums can't capture values mutably). Relatedly, this
PR extends the set of type-related declarations that we ignore.
Previously we threw a todo for things like DeclareClass or
DeclareVariable, but these are type-related and can simply be dropped,
just like we dropped TypeAlias.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33747).
* facebook#33753
* facebook#33752
* facebook#33751
* facebook#33750
* facebook#33748
* __->__ facebook#33747
…atements (facebook#33748)

import, export, and TS namespace statements can only be used at the
top level of a module, which is enforced by parsers already. Here we add
a backup validation of that. As of this PR, we have only one major
statement type (class declarations) listed as a todo.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33748).
* facebook#33753
* facebook#33752
* facebook#33751
* facebook#33750
* __->__ facebook#33748
We use this variant for syntax we intentionally don't support: with
statements, eval, and inline class declarations.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33750).
* facebook#33753
* facebook#33752
* facebook#33751
* __->__ facebook#33750
* facebook#33748
…ook#33755)

When postponing the root we encode the segment id into the postponed
state, but we should really be resetting it to zero so we can restart the
counter from the beginning when the resume is actually just a re-render.

This also no longer assigns the root segment id based on the postponed
state when resuming the root for the same reason. In the future we may
use the embedded replay segment id if we implement resuming the root
without re-rendering everything but that is not yet implemented or
planned.
We typically treat an empty message as closing the debug channel stream
but for the Noop renderer we don't use an intermediate stream but just
pass the message through.


https://github.com/facebook/react/blob/bbc13fa17be8eebef3e6ee47f48c76c0c44e2f36/packages/react-server-dom-webpack/src/client/ReactFlightDOMClientBrowser.js#L59-L60

For that simple case we should just treat it as a close without an
intermediate stream.
This lets us pass a writable on the server side and readable on the
client side to send debug info through a separate channel so that it
doesn't interfere with the main payload as much. The main payload refers
to chunks defined in the debug info which means it's still blocked on it
though. This ensures that the debug data has loaded by the time the
value is rendered so that the next step can forward the data.

This will be a bit fragile to race conditions until facebook#33665 lands.
Another follow up needed is the ability to skip the debug channel on the
receiving side. Right now it'll block forever if you don't provide one
since we're blocking on the debug data.
…osures (facebook#33544)

Summary:

useEffectEvent is meant to be used specifically in combination with
useEffect, and using the feature in arbitrary closures can lead to
surprising reactivity semantics. In order to minimize risk in the
experimental rollout, we are going to restrict its usage to being called
directly inside an effect or another useEffectEvent, effectively enforcing
the function coloring statically. Without an effect system, this is the
best we can do.
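
A hedged sketch of what the restriction means in practice; the hook is experimental and the exact export name (e.g. `experimental_useEffectEvent`) may differ, and `logAnalytics` is a hypothetical helper:

```js
// Inside a component; assumes the experimental useEffectEvent hook.
const onVisit = useEffectEvent(visitedUrl => {
  logAnalytics(visitedUrl, theme); // always reads the latest theme
});

useEffect(() => {
  onVisit(url); // OK: called directly inside the effect
}, [url]);

function handleClick() {
  onVisit(url); // Flagged: called from an arbitrary closure, not an effect
}
```
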
eps1lon and others added 29 commits August 25, 2025 17:47
A Flow upgrade removed the bundled library definitions for
SyntheticEvent, and we probably want to use our internal definitions.
Those are not properly typed at this point yet, but we can look into
that as a follow-up.
…cebook#34301)

When a debug channel is defined, we must ensure that we don't close the
Flight Client's response when the debug channel's readable is done, but
the RSC stream is still flowing. Now, we wait for both streams to end
before closing the response.
…el (facebook#34304)

When the Flight Client is waiting for pending debug chunks, it drops the
debug info if there is no writable side of the debug channel defined.
However, it should instead check if there's no readable side defined.

Fixing this is not only important for browser clients that don't want or
need a return channel, but it's also crucial for server-side rendering,
because the Node and Edge clients only accept a readable side of the
debug channel. So they can't even define a noop writable side as a
workaround.
…3647)

We currently assume that any functions passed as props may be event
handlers or effect functions, and thus don't check for side effects such
as mutating globals. However, if a prop is a function that returns JSX,
that is a sure sign that it's actually a render helper and not an event
handler or effect function. So we now emit a `Render` effect for any
prop that is a JSX-returning function, triggering all of our render
validation.
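
An illustrative example of the kind of prop this now covers (`List`, `Row`, and `trackClick` are hypothetical):

```js
// `renderItem` returns JSX, so it gets a `Render` effect and is checked with
// render validation (e.g. mutating a global inside it would be flagged),
// whereas an event-handler prop like `onClick` remains exempt.
<List
  renderItem={item => <Row key={item.id} data={item} />}
  onClick={() => trackClick('list')}
/>;
```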

This required a small fix to InferTypes: we weren't correctly populating
the `return` type of function types during unification. I also improved
the printing of types so we can see the inferred return types.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33647).
* facebook#33643
* facebook#33650
* facebook#33642
* __->__ facebook#33647
Previously, the compiler would incorrectly attempt to compile nested
components/hooks defined inside non-React functions. This would lead to
scope reference errors at runtime because the compiler would optimize
the nested React function without understanding its closure over the
parent function's variables.

This PR adds detection when non-React functions declare components or
hooks, and reports a clear error before compilation. I put this under a
new compiler flag defaulting to false. I'll run a test on this
internally first, but I expect we should be able to just turn it on in
both compiler (so we stop miscompiling) and linter.
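
A sketch of the pattern this validation flags (hypothetical names):

```js
// A component declared inside a plain (non-React) function: the compiler
// would otherwise try to memoize `Cell` without understanding that it
// closes over `rows` from the enclosing non-React function, which can
// produce scope reference errors at runtime.
function buildCellComponent(rows) {
  function Cell({index}) {
    return <td>{rows[index]}</td>;
  }
  return Cell;
}
```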

Closes facebook#33978

Playground example:
https://react-compiler-playground-git-pr34305-fbopensource.vercel.app/#N4Igzg9grgTgxgUxALhAejQAgAIDcCGANgJYAm+ALggHIQAiAngHb4C2xcRhDAwjApQSkeEVgAcITBEwpgA8jAASECAGswAHSkAPCTAqYAZlCZwKxSZgDmCCgEkmYqBQAU+AJSZgWzJjiSwAwB1GHwxMQQYTABeTBdPaIA+Lx9fPwCDAAt8JlJCBB5sphsYuITk7yY0tPwAOklCnJt4gG5U3wBfNqZ2zH4KWCqAHmJHZ0wGopto4CK8gqmEDsw0RO7O7tT+wcwQsIiYbo6QDqA
I happened to notice that I forgot to cache playwright in
run_devtools_e2e_tests, so it would try to install it every time which
can randomly take a while to complete (I'm not sure why it's not
deterministic, but the dependencies appear to be installed
inconsistently across multiple workflows).

This PR adds the same cache we use for other steps that use playwright,
which should shave off some time from this workflow when the cache is
warm.

Additionally I omitted the standalone install-deps command as it appears
to be redundant and adds a lot of extra time to CI, due to the fact that
it installs many unrelated dependencies.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34320).
* facebook#34321
* __->__ facebook#34320
This adds `experimental_scrollIntoView(alignToTop)`. It doesn't yet
support `scrollIntoView(options)`.

Cases:
- No host children: Without host children, we represent the virtual
space of the Fragment by attempting to scroll to the nearest edge by
using its siblings. If the preferred sibling is not found, we'll try the
other side, and then the parent.
- 1 or more host children: In order to handle the case of children
spread between multiple scroll containers, we scroll to each child in
reverse order based on the `alignToTop` flag.

Due to the complexity of multiple scroll containers and dealing with
portals, I've added this under a separate feature flag with an
experimental prefix. We may stabilize it along with the other APIs, but
this allows us to not block the whole feature on it.

This PR was previously implementing a much more complex approach to
handling multiple scroll containers and portals. We're going to start
with the simple loop and see if we can find any concrete use cases where
that doesn't suffice. 01f31d4 is the
diff between approaches here.
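
A rough usage sketch, assuming the Fragment-ref shape used by the other Fragment instance APIs (experimental; `Row` and the exact names may differ):

```js
import {Fragment, useRef} from 'react';

function ItemsSection({items}) {
  const fragmentRef = useRef(null);
  return (
    <>
      <button
        onClick={() => {
          // alignToTop=true scrolls toward the first host child, or the
          // nearest sibling edge if the Fragment has no host children.
          fragmentRef.current?.experimental_scrollIntoView(true);
        }}>
        Jump to items
      </button>
      <Fragment ref={fragmentRef}>
        {items.map(item => (
          <Row key={item.id} item={item} />
        ))}
      </Fragment>
    </>
  );
}
```
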
…ck (facebook#34298)

In facebook#34125 I added a hint where if you assign to the .current property of
a frozen object, we suggest naming the variable as `ref` or `-Ref`.
However, the tracking for mutations that assign to .current specifically
wasn't propagated past function expression boundaries, which meant that
the hint only showed up if you mutated the ref in the main body of the
component/hook. That's less likely to happen since most folks know not
to access refs in render. What's more likely is that you'll (correctly)
assign a ref in an effect or callback, but the compiler will throw an
error. By showing a hint in this case we can help people understand the
naming pattern.
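
A small before/after illustration of where the hint now appears (hypothetical code; `useLatestNode` and `measure` are made up):

```js
// Before: assigning to .current of a frozen object inside a callback is
// flagged, and the hint about ref naming now shows up here too.
const node = useLatestNode();
const onResize = () => {
  node.current = measure(); // error + hint: name the variable like a ref
};

// After: a name ending in `Ref` tells the compiler to treat it as a ref,
// so the assignment inside the callback is allowed.
const nodeRef = useLatestNode();
const onResizeOk = () => {
  nodeRef.current = measure();
};
```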

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34298).
* facebook#34276
* __->__ facebook#34298
## Summary
Part 1 of adding a "Config Override" panel to the React compiler
playground. The panel is placed to the left of the current input
section, and supports converting the comment pragmas in the input
section to a JavaScript-based config. Backwards sync has not been
implemented yet.

NOTE: I have added support for a new `OVERRIDE` type pragma to add
support for Map and Function types. (For now, the old pragma format is
still intact)

## Testing
Example of the config overrides synced to the source code:
<img width="1542" height="527" alt="Screenshot 2025-08-28 at 3 38 13 PM"
src="https://github.com/user-attachments/assets/d46e7660-61b9-4145-93b5-a4005d30064a"
/>
Small PR to update our readme for eslint-plugin-react-hooks, to better
describe what a minimal but complete eslint config would look like.
## Summary
Update the CodeSandbox CI configuration to use Node 20 instead of Node
18, so that it matches the Node version specified in .nvmrc. This
ensures consistency between local development environments and CI
builds, reducing the risk of version-related build issues.

Closes facebook#34328

## How did you test this change?
- Verified that .nvmrc specifies Node 20 and .codesandbox/ci.json is
updated accordingly.
- Locally switched to Node 20 using nvm use 20 and successfully ran
build scripts for all packages: `react`, `react-dom`,
`react-server-dom-webpack`, and `scheduler`.
- Confirmed there are no Node 20–specific build errors or warnings
locally.
- CI on the feature branch will now run with Node 20, and all builds are
expected to succeed.
A few libraries are known to be incompatible with memoization, whether
manually via `useMemo()` or via React Compiler. This puts us in a tricky
situation. On the one hand, we understand that these libraries were
developed prior to our documenting the [Rules of
React](https://react.dev/reference/rules), and their designs were the
result of trying to deliver a great experience for their users and
balance multiple priorities around DX, performance, etc. At the same
time, using these libraries with memoization — and in particular with
automatic memoization via React Compiler — can break apps by causing the
components using these APIs not to update. Concretely, the APIs have in
common that they return a function which returns different values over
time, but where the function itself does not change. Memoizing the
result on the identity of the function will mean that the value never
changes. Developers reasonably interpret this as "React Compiler broke
my code".

Of course, the best solution is to work with developers of these
libraries to address the root cause, and we're doing that. We've
previously discussed this situation with both of the respective
libraries:
* React Hook Form:
react-hook-form/react-hook-form#11910 (comment)
* TanStack Table:
facebook#33057 (comment)
and TanStack/table#5567

In the meantime we need to make sure that React Compiler can work out of
the box as much as possible. This means teaching it about popular
libraries that cannot be memoized. We also can't silently skip
compilation, as this confuses users, so we need these error messages to
be visible to users. To that end, this PR adds:

* A flag to mark functions/hooks as incompatible
* Validation against use of such functions
* A default type provider to provide declarations for two
known-incompatible libraries

Note that MobX is also incompatible, but the `observable()` function is
called outside of the component itself, so the compiler cannot currently
detect it. We may add validation for such APIs in the future.

Again, we really empathize with the developers of these libraries. We've
tried to word the error message non-judgementally, because we get that
it's hard! We're open to feedback about the error message, please let us
know.
…ok#34338)

The `WebSocketStream` implementation seems to be a bit unreliable. We've
seen `Cannot close a ERRORED writable stream` errors when expanding the
logged deep object, for example. And when reducing the fixture to a
minimal app, we even get `Connection closed` errors, because the web
socket connection is closed before all debug chunks are sent.

We can improve the reliability of the web socket connection by using a
normal `WebSocket` instance on the client, along with manually creating
a `WritableStream` and a `ReadableStream` for processing the messages.

As an additional benefit, the debug channel now also works in Firefox
and Safari.

On the server, we're simplifying the integration with the Express server
a bit by utilizing the `server` property for `WebSocket.Server`, instead
of the `noServer` property with the manual upgrade handling.
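
A minimal sketch of wrapping a plain `WebSocket` in the two streams, as described above (fixture-level code; the URL and wiring are illustrative):

```js
// Wrap a WebSocket into a ReadableStream (debug chunks from the server)
// and a WritableStream (requests for more objects back to the server).
const ws = new WebSocket('ws://localhost:3001/debug-channel');

const readable = new ReadableStream({
  start(controller) {
    ws.addEventListener('message', event => controller.enqueue(event.data));
    ws.addEventListener('close', () => controller.close());
  },
});

const writable = new WritableStream({
  write(chunk) {
    ws.send(chunk);
  },
  close() {
    ws.close();
  },
});
```
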
When the debug channel was already closed, we must not try to close it
again when the Response gets garbage collected.

**Test plan:**

1. reduce the Flight fixture `App` component to a minimum [^1]
    - remove everything from `<body>`
    - delete the `console.log` statement
2. open the app in Firefox (seems to have a more aggressive GC strategy)
3. wait a few seconds

On `main`, you will see the following error in the browser console:

```
TypeError: Can not close stream after closing or error
```

With this change, the error is gone.

[^1]: It's a bit concerning that step 1 is needed to reproduce the
issue. Either GC is behaving differently with the unmodified App, or we
may hold on to the Response under certain conditions, potentially
creating a memory leak. This needs further investigation.
…acebook#34354)

Small follow-up to facebook#34350. The `_store` property is now only assigned in
development mode when creating lazy types. It also uses the `validated`
value that was passed to `createElement`, if applicable.

## Summary

Part 2 of adding a "Config Override" panel to the React compiler
playground. Added sync from the config editor (still only accessible
with the "showConfig" param) to the main source code editor. Adding a
valid config to the editor will add/replace the `@OVERRIDE` pragma above
the source code. Additionally refactored the old implementation to
remove `useEffect`s and unnecessary renders.

Realized upon testing that the user experience is quite jarring,
planning to add a `sync` button in the next PR to fix this.

## How did you test this change?




https://github.com/user-attachments/assets/a71b1b5f-0539-4c00-8d5c-22426f0280f9