
Added notes on special .write callback behaviour on process.stdout/.stderr #3772


Closed
wants to merge 2 commits
8 changes: 8 additions & 0 deletions doc/api/process.markdown
@@ -286,6 +286,10 @@ event and that writes can block when output is redirected to a file (although
disks are fast and operating systems normally employ write-back caching so it
should be a very rare occurrence indeed.)

Note that on `process.stdout` and `process.stderr`, the callback passed to
`stream.write` might be called before all written data is flushed completely.
This can result in data loss if Node.js is ended prematurely using `process.exit`.
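A minimal sketch of the pitfall this note describes (the message and exit code are illustrative):

```js
// When stdout is redirected to a pipe or file, the write callback can fire
// while the data is still buffered outside the final consumer.
process.stdout.write('a long diagnostic message\n', function() {
  // By this point the data has been handed off by Node.js, but it may
  // still be in transit and not yet visible to whatever is reading it.
});

// Exiting before the callback has fired means the data may never be
// handed off at all and can be lost.
process.exit(0);
```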
Member


This is incorrect. What you should convey here is that the callback can fire before the receiving end has received all data because it's still in transit in a kernel buffer.

The other caveat is that calling process.exit before callbacks have fired means the data may not have been handed off to the operating system yet. The operative word here is 'may' - it may or may not have been handed off. It's not until the callback fires that you know for sure.

In case it isn't clear, the flow is node -> kernel -> other, where other is either a tty, a file or another process reading from a pipe. Calling process.exit when pending data is still in node land means the data is irretrievably lost. When it's been handed off to the kernel, it's unspecified what happens - it depends on the operating system and the phase of the moon.
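A sketch of the safer pattern this comment implies, deferring the exit until the callback fires (the `writeAndExit` helper is illustrative, not an existing API):

```js
// Only exit once stdout reports that the data has at least been handed
// off to the kernel; before that point it is still in Node.js buffers
// and would be irretrievably lost on process.exit().
function writeAndExit(message, code) {
  process.stdout.write(message, function() {
    process.exit(code);
  });
}

writeAndExit('final status report\n', 0);
```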


Member


Hmm... perhaps, "Note that on process.stdout and process.stderr, the callback passed to stream.write might be called before all written data is flushed completely. This can result in data loss if Node.js is ended prematurely using process.exit."

To check if Node.js is being run in a TTY context, read the `isTTY` property
on `process.stderr`, `process.stdout`, or `process.stdin`:

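The example that follows this sentence in the rendered docs is collapsed in this diff; a minimal check along these lines illustrates the property:

```js
if (process.stdout.isTTY) {
  console.log('stdout is attached to a terminal');
} else {
  console.log('stdout is redirected to a file or a pipe');
}
```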
@@ -460,6 +464,10 @@ To exit with a 'failure' code:

The shell that executed Node.js should see the exit code as 1.

Note that Node.js will shut down as quickly as possible. Consumers of `process.stdout`
and `process.stderr` might not receive all data even when the `stream.write` callback
suggests otherwise.
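A hedged sketch of the alternative described in the next section: setting `process.exitCode` instead of calling `process.exit()` lets the process exit on its own once pending writes have been handed off:

```js
process.stdout.write('diagnostic output that should not be truncated\n');

// Instead of process.exit(1), record the code and let the process exit
// naturally once the event loop has no more work to do.
process.exitCode = 1;
```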


## process.exitCode

4 changes: 4 additions & 0 deletions doc/api/stream.markdown
@@ -566,6 +566,10 @@ even if it returns `false`. However, writes will be buffered in
memory, so it is best not to do this excessively. Instead, wait for
the `drain` event before writing more data.
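A sketch of that advice, pausing on a `false` return value and resuming on `'drain'` (the `writeMany` helper is illustrative):

```js
function writeMany(stream, chunks, done) {
  var i = 0;
  function writeNext() {
    var ok = true;
    while (i < chunks.length && ok) {
      // write() returns false once the internal buffer passes the
      // high water mark.
      ok = stream.write(chunks[i++]);
    }
    if (i < chunks.length) {
      // Back off and wait for the buffer to empty before writing more.
      stream.once('drain', writeNext);
    } else {
      done();
    }
  }
  writeNext();
}
```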

Note that on `process.stdout` and `process.stderr`, the callback might
be called before all data has been fully handled. This can result in
data loss if Node.js is ended prematurely via `process.exit`.

#### Event: 'drain'

If a [`writable.write(chunk)`][] call returns false, then the `drain`