stream: pipeline should error if any stream is destroyed #36674
I would like to work on this problem.
I also bumped into this. An HTTP client can easily trigger it against an HTTP server that is using `pipeline` (which could cause really bad things):

```js
let { PassThrough, pipeline } = require("stream");
let http = require("http");

let server = http.createServer(async function (req, res) {
  await new Promise(r => setTimeout(r, 1000));
  console.log("request destroyed", req.destroyed);
  let pass = new PassThrough();
  pipeline(req, pass, e => console.log("pipeline finished", e));
  for await (let chunk of pass) console.log("received", chunk.length);
  console.log("body processed");
  res.end();
});

(async function () {
  await new Promise(resolve => server.listen(resolve));
  let req = http.request({ port: server.address().port, method: "post" });
  req.on("error", () => null);
  req.write(Buffer.alloc(10000));
  setTimeout(() => req.destroy(), 500);
}());
```
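In this repro the client destroys the request at 500 ms, before the handler starts piping at 1000 ms. If `pipeline` does not surface the destroyed `req`, the `for await` loop over `pass` never finishes, so "body processed" is never logged and the response is never ended.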
I find that if one stream has been destroyed, the pipeline will now call the callback with an `ERR_STREAM_PREMATURE_CLOSE`. I used this test case:

```js
const {
  pipeline,
  PassThrough,
} = require('stream');

{
  const r = new PassThrough();
  const d = new PassThrough();
  d.on('data', (data) => {
    console.log(data);
  });
  r.write('aaa');
  r.destroy();
  // make sure r is destroyed before the pipeline is set up
  process.nextTick(function () {
    pipeline([r, d], (err) => {
      console.log(err);
    });
  });
}
```

On Node version 16.13.2, it will output:

On the master branch, it will output:
Do I understand this issue correctly? Do we still need to throw an Error directly, as in https://github.com/nodejs/node/pull/36791/files#r665904922?
@meixg Does this work for you with the promisified version of `pipeline`?
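For context, a minimal sketch of that check using the promisified `pipeline` from `stream/promises` (Node 15+); the destroyed-source setup mirrors the test case above:

```js
const { pipeline } = require('stream/promises');
const { PassThrough } = require('stream');

(async () => {
  const r = new PassThrough();
  const d = new PassThrough();
  r.write('aaa');
  r.destroy(); // source is destroyed before the pipeline is set up

  try {
    await pipeline(r, d);
    console.log('pipeline finished without error');
  } catch (err) {
    // On current Node this is expected to reject, e.g. with
    // ERR_STREAM_PREMATURE_CLOSE.
    console.log(err.code);
  }
})();
```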
Not sure if this is the same issue, but interestingly I see a difference between how this behaves inside an async function (which seems to fail fast, as expected) vs. how it runs in the REPL calling await. This one works -

But in the REPL it gets stuck like this ...
This looks correct to me. You are destroying the writable before it has ended, hence it will get a premature close.
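A minimal sketch of that case (the names here are illustrative, not from the thread): destroying the destination before the source has ended should make the callback fire with a premature-close error:

```js
const { pipeline, PassThrough } = require('stream');

const src = new PassThrough();
const dst = new PassThrough();

pipeline(src, dst, (err) => {
  // Expected: err.code === 'ERR_STREAM_PREMATURE_CLOSE', since dst
  // was destroyed before src signalled 'end'.
  console.log(err && err.code);
});

src.write('some data');
dst.destroy(); // destroy the writable before the source ends
```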
@guymguym Seems unrelated. Could you maybe open a separate issue?
Does the test case match the issue here? If it doesn't, what should the test case look like? Maybe I can work on that.
I don't think this is a problem anymore.
`pipeline` should immediately fail with `ERR_STREAM_DESTROYED` when any of the streams have already been destroyed. `Readable` might need a little extra consideration since it's possible to read the data after being destroyed. Should maybe check `_readableState.errored` and/or `_readableState.ended`.

Refs: #29227 (comment)
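A minimal sketch of the kind of pre-flight check this suggests; the helper name is hypothetical, reading `_readableState` outside core is not supported API, and this is only one possible reading of the proposal:

```js
// Hypothetical guard that pipeline() could run before wiring the streams.
function assertNotDestroyed(streams) {
  for (const stream of streams) {
    if (!stream.destroyed) continue;
    const rState = stream._readableState;
    // A destroyed Readable may still have buffered data that can be
    // read, so give it a pass until it has also ended or errored.
    if (rState && !rState.ended && !rState.errored) continue;
    const err = new Error('Cannot pipeline a destroyed stream');
    err.code = 'ERR_STREAM_DESTROYED';
    throw err;
  }
}
```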