Refactor promise and object stream dispatches #422
Conversation
wooldridge commented Sep 28, 2017
- Simplify the multipart object stream path. No longer use QueuedReader; instead write directly to the through2 output stream.
- Separate object creation and object queuing for the multipart promise and multipart object stream cases. operation.makeObject() handles object creation; queuing (or writing) of the made objects is handled in the responder.
- Track metadata objects in a this.nextMetadataBuffer property in operation.
- Add comments.
@@ -248,17 +276,18 @@ MultipartDispatcher.prototype.promise = function dispatchMultipartPromise(
var objectQueue = new FifoQueue(2);
var partReaderQueue = new FifoQueue(3);

var metadataBuffer = null;

var parsingParts = 0;
var parsedParts = 0;

var hasParsed = false;
var hasEnded = false;
Is there a need for hasEnded (and the end listener and the branch firing drain on Dicer) in the Promise case?
We currently have a responseEndListener that sets hasEnded in the multipart promise case. Are you saying that when that happens, we should make sure to drain Dicer in case that stream is currently paused?
Sorry for the lack of clarity. I'm saying that I believe there's never any point in firing the drain event -- not on Dicer or any other WritableStream. My reading of https://nodejs.org/dist/latest-v8.x/docs/api/stream.html#stream_event_drain is that the drain event is a notification emitted by the WritableStream, not a request to the WritableStream. In other words, the existing code was grossly mistaken in firing this event.
OK, that's how I understand it, too. I will remove the firing of the "drain" event in the chunked case; I'd overlooked that. And since we do a direct pipe of data in the chunked case, and pipe handles backpressure, we should be OK there.
My only significant question is whether to test the hasEnded state before firing pause() in the object stream case. More generally, good to see this cleaner implementation.
lib/responder.js
Outdated
// Manage backpressure
if (writeResult === false) {
  // Only pause stream if not all parsed
  if (hasParsed && !partReaderQueue.hasItem()) {
Maybe test whether the HTTP ReadableStream has ended? In which case there's nothing to pause?
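The suggested guard might look like this. A sketch with assumed names: `hasEnded` is the flag the dispatcher already tracks, and `partReaderQueue` mirrors the queue in the diff above.

```javascript
// Sketch: only pause the HTTP response stream on backpressure if it
// hasn't already ended. All names approximate the dispatcher's state.
function maybePause(response, writeResult, state) {
  if (writeResult !== false) return false;  // output accepted the write
  if (state.hasEnded) return false;         // response ended; nothing to pause
  if (state.hasParsed && !state.partReaderQueue.hasItem()) {
    response.pause();
    return true;
  }
  return false;
}
```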
Backpressure handling in responder's object stream dispatch
lib/responder.js
Outdated
}

var drainListener = function objectDrainListener() {
  response.resume();
Two belated questions
- Should the drain listener resume only if the response has not ended?
- Should the drain listener start a concat stream on the next part?
- To be more resilient regarding the previous bullet, should the dispatcher have an additional flag in its state to keep track of whether the first part is waiting or not?
Thanks so much for the comments. I addressed the first two bullets with fdde417. For the third bullet, seems like existing queue methods are enough to keep track. By "first part," do you mean the first part initially added to the queue or just a part in the queue during a resume step?
My concern is whether the drain event might ever be fired when the first part is already concatenating. In principle, that shouldn't happen, but sometimes it's useful to guard against things that shouldn't happen. If an existing flag already records whether the first part is concatenating, then nothing else is needed.
Does setting an isConcatenating flag here address your concerns? ea415d5
Most definitely. Thanks.
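The guard being discussed could be sketched as follows (assumed names, not the actual code from ea415d5): the flag prevents a stray 'drain' from starting a second concat stream while the first part is still being concatenated.

```javascript
// Sketch of the isConcatenating guard (hypothetical names): skip
// starting a new concat stream if one is already in progress when
// a 'drain' event fires.
function onOutputDrain(state, startConcat) {
  if (state.isConcatenating) return;  // guard: already concatenating
  if (state.partReaderQueue.hasItem()) {
    state.isConcatenating = true;     // cleared when the concat finishes
    startConcat();                    // begin concatenating the next part
  }
}
```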
- Check that response hasn't ended before calling resume()
- After resuming, kick off part processing if queue has item
Also comment out FifoQueue methods that are no longer used.
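A sketch of the drain listener after those two fixes (assumed names approximating the dispatcher state in lib/responder.js, not the commit's exact code):

```javascript
// Sketch only: resume the response on 'drain' unless it has ended,
// then kick off processing if a part queued up while output was full.
function makeDrainListener(state) {
  return function objectDrainListener() {
    if (state.hasEnded) return;            // response ended; nothing to resume
    state.response.resume();
    if (state.partReaderQueue.hasItem()) { // part arrived during backpressure
      state.processNextPart();
    }
  };
}
```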