fix: avoid starting and stopping spinner loops #7131


Merged
serhalp merged 1 commit into main from fix/fix-nanospinner-listeners on Mar 25, 2025

Conversation

@serhalp (Member) commented Mar 25, 2025

Summary

This was a small regression introduced in #7026.

I didn't quite map the previous lib's API to nanospinner's correctly.

The intent here is to manually control each frame render and each clear. Instead, the spinner was already started before being passed in to this method, and it was then cleared and restarted on each chunk. But `start()` starts a new interval loop and registers new Node.js exit listeners, while `clear()` doesn't remove them (`reset()` does that).

This was leading to unnecessary resource utilization and to warnings like this being printed:

```
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGINT listeners added to [process]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit
(Use node --trace-warnings ... to show where the warning was created)
(node:61086) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 SIGTERM listeners added to [process]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit
```

The fix is to manually render a single frame at a time (`.spin()`) and to `stop()` the initial loop.
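
For illustration, here is a minimal sketch of the before/after pattern, assuming nanospinner's `start()`/`stop()`/`clear()`/`spin()` methods described above; the handler names and spinner text are hypothetical, not the actual CLI code:

```ts
import { createSpinner } from 'nanospinner'

const spinner = createSpinner('Waiting for framework port...').start()

// Before (the regression): every chunk restarted the spinner loop.
const onChunkBefore = (chunk: Buffer) => {
  spinner.clear()             // erases the rendered line, but not the interval or exit listeners
  process.stdout.write(chunk)
  spinner.start()             // starts another interval loop and registers new SIGINT/SIGTERM
                              // listeners, accumulating until MaxListenersExceededWarning fires
}

// After (this PR): stop the initial loop once, then drive frames manually.
spinner.stop()
const onChunkAfter = (chunk: Buffer) => {
  spinner.clear()
  process.stdout.write(chunk)
  spinner.spin()              // renders a single frame; no new loop, no new listeners
}
```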

github-actions bot commented Mar 25, 2025

📊 Benchmark results

Comparing with db45131

  • Dependency count: 1,173 (no change)
  • Package size: 297 MB ⬇️ 0.00% decrease vs. db45131
  • Number of ts-expect-error directives: 715 (no change)

@serhalp marked this pull request as ready for review March 25, 2025 20:34
@serhalp requested a review from a team as a code owner March 25, 2025 20:34
@serhalp force-pushed the fix/fix-nanospinner-listeners branch from 0b1bdf5 to 4adc5b7 on March 25, 2025 21:19
@serhalp enabled auto-merge (squash) March 25, 2025 21:19
@serhalp merged commit 1002f98 into main Mar 25, 2025
52 checks passed
@serhalp deleted the fix/fix-nanospinner-listeners branch March 25, 2025 21:35
serhalp added a commit that referenced this pull request Apr 28, 2025
See:
1. #7026
2. #7131
3. #7124

Oops, I was really overthinking this. All this needs to do is, every time a chunk comes in
from the underlying framework dev server, clear the spinner line, write the chunk, then
resume the spinner.

This wasn't set up right at all, which was obfuscated by the erroneous use of `isSpinning`
(see previous commit).
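
As a rough sketch of that flow (not the actual #7242 implementation; the spawned command and variable names are hypothetical):

```ts
import { spawn } from 'node:child_process'
import { createSpinner } from 'nanospinner'

// Hypothetical stand-in for the detected framework's dev server process.
const frameworkServer = spawn('npm', ['run', 'dev'])
const spinner = createSpinner('Waiting for framework port 4321').start()

frameworkServer.stdout.on('data', (chunk: Buffer) => {
  spinner.clear()               // clear the spinner line
  process.stdout.write(chunk)   // write the chunk
  spinner.start()               // resume the spinner
})
```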
serhalp added a commit that referenced this pull request Apr 29, 2025
serhalp added a commit that referenced this pull request May 1, 2025
serhalp added a commit that referenced this pull request May 1, 2025
serhalp added a commit that referenced this pull request May 1, 2025
…t") (#7242)
serhalp added a commit that referenced this pull request May 1, 2025
* fix: fix framework server loading spinner rendering

This is a regression in the loading spinner shown before "Waiting for framework port", introduced in 19.0.3.

* fix: fix framework server spinner

* fix: remove "Netlify Dev" prefix from deploy log

* fix(dev): polish and improve dev command

- Reworded messages for clarity and consistency
  - I think "Netlify Dev" was a vestige of Netlify CLI's predecessor...? In any case, I
    dropped this wording.
  - e.g. Starting Netlify Dev with Astro → Starting Astro dev server
  - e.g. Waiting for framework port 4321 → Waiting for Astro dev server to be ready on port 4321
- Removed the "🔸 Netlify Dev 🔸" first line (doesn't seem to add anything)
- Attempted to address the common confusion around the two printed URLs (our dev server +
  framework server):
  - more distinctive box
  - 'inverted' colour on the URL (cyan background) to make it stand out the most
  - changed `Server now ready on http...` to `Local dev server ready: http...`
- Attempted to address the common confusion around this message: `✔ Waiting for framework
  port 4321. This can be configured using the 'targetPort' property in the netlify.toml`
  - (Users often think this is an error message. I don't blame them.)
  - I changed this to `Waiting for Astro dev server to be ready on port 4321` (where Astro is the framework name)
- I added logic so that after 5 seconds of waiting the message changes to `Still waiting
  for server on port 4321 to be ready. Are you sure this is the correct port for your
  Astro site? Change this with the targetPort option in your netlify.toml.` (see the
  sketch after this commit message).
- Collapse "Injected env var" messages into a single line per source
- ◈ → ⬥

* fix: revert the framework server log prefix for now
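
As a rough sketch of the 5-second escalation described in the list above (the framework name, port, and wording come from the commit message; the rest is hypothetical wiring, not the actual CLI code):

```ts
import { createSpinner } from 'nanospinner'

const framework = 'Astro'
const targetPort = 4321

const spinner = createSpinner(
  `Waiting for ${framework} dev server to be ready on port ${targetPort}`,
).start()

// After 5 seconds, swap in the "still waiting" wording via nanospinner's update().
const stillWaiting = setTimeout(() => {
  spinner.update({
    text:
      `Still waiting for server on port ${targetPort} to be ready. Are you sure this ` +
      `is the correct port for your ${framework} site? Change this with the ` +
      `targetPort option in your netlify.toml.`,
  })
}, 5000)

// Once the port responds (hypothetical hook elsewhere in the real code):
// clearTimeout(stillWaiting)
// spinner.success({ text: 'Local dev server ready' })
```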