feat: Queue extra packets in the adapter #1491
Conversation
The send queue is meant to be used to store individual messages and read them back in batched chunks (to be sent on the network). It's also meant to act as a sort of overflow storage when UTP internal queues are full. The receive queue is meant to store batched messages (batched by the send queue) and read them out one at a time.
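To make that batching contract concrete, here's a minimal plain-C# sketch (the `BatchingSketch` type and its methods are illustrative, not the actual `BatchedSendQueue`/`BatchedReceiveQueue` code): messages go in one at a time with a length prefix, come out as one contiguous batch on the send side, and are peeled off one at a time on the receive side.

```csharp
using System;
using System.IO;

// Illustrative sketch of the batching contract, not the adapter's real code.
class BatchingSketch
{
    readonly MemoryStream m_SendQueue = new MemoryStream();

    // Send side: enqueue a single message, prefixed with its length.
    public void PushMessage(byte[] message)
    {
        var header = BitConverter.GetBytes(message.Length);
        m_SendQueue.Write(header, 0, header.Length);
        m_SendQueue.Write(message, 0, message.Length);
    }

    // Send side: dequeue everything currently stored as one batched chunk
    // (this is what would be handed to the network as a single send).
    public byte[] PopBatch()
    {
        var batch = m_SendQueue.ToArray();
        m_SendQueue.SetLength(0);
        return batch;
    }

    // Receive side: read the next length-prefixed message out of a batch.
    public static byte[] PopMessage(byte[] batch, ref int offset)
    {
        int length = BitConverter.ToInt32(batch, offset);
        offset += sizeof(int);
        var message = new byte[length];
        Array.Copy(batch, offset, message, 0, length);
        offset += length;
        return message;
    }
}
```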
Correctly handle ArraySegment not at beginning of array (when handling incoming messages from the transport):
```diff
 {
     var batchReader =
-        new FastBufferReader(nativeData, Allocator.None, data.Count, data.Offset);
+        new FastBufferReader(nativeData + data.Offset, Allocator.None, data.Count);
```
Might have missed something, but why are we moving the `data.Offset` out of the offset argument?
Yeah, I should have added a comment about that change. Basically, the previous behavior was wrong when the `ArraySegment` returned by the transport is not for a segment starting at the beginning of its backing array (`data.Offset` greater than 0).

`FastBufferReader` is written such that the length you pass as the third argument here is the full length of the buffer pointed to by the first argument. But here that's not what we're providing; `data.Count` is the length of the segment of the array we're handling, and we were passing a pointer to the backing array of that segment as the buffer.

It might be tempting instead to keep the offset and pass the length of the backing array (`data.Array.Length`) instead of `data.Count`, but that's also wrong since the backing array could go on after the end of the segment. The correct value to pass as the length in this case would be `data.Offset + data.Count`, but that's kinda confusing. It's simpler to just make it look to `FastBufferReader` like it's dealing with a clean buffer without any starting offset.
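To make the three options concrete, here's a small illustration (the `ReadMessage` wrapper and the `ReadValueSafe` call are hypothetical; the constructor calls mirror the diff above):

```csharp
using System;
using Unity.Collections;
using Unity.Netcode;

static class ReaderIllustration
{
    public static unsafe void ReadMessage(ArraySegment<byte> data)
    {
        fixed (byte* nativeData = data.Array)
        {
            // Wrong (the previous behavior): data.Count is the segment's
            // length, not the length of the buffer starting at nativeData,
            // so the reader's bounds end data.Offset bytes too early.
            // new FastBufferReader(nativeData, Allocator.None, data.Count, data.Offset);

            // Also wrong: data.Array.Length can extend past the segment's
            // end. Keeping the offset would require passing
            // data.Offset + data.Count as the length, which reads badly.
            // new FastBufferReader(nativeData, Allocator.None, data.Array.Length, data.Offset);

            // The fix: advance the pointer to the segment's start and present
            // a clean buffer of exactly data.Count bytes, with no offset.
            var batchReader = new FastBufferReader(nativeData + data.Offset, Allocator.None, data.Count);
            batchReader.ReadValueSafe(out int messageLength); // illustrative read
        }
    }
}
```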
Is this a change that should be applied to `FastBufferReader` itself? When I was writing that part of `FastBufferReader` I debated with myself on whether the third argument should be the total buffer size or the size to use past the offset. I'm willing to accept I may have chosen wrong here.
I don't have any strong opinion on whether that length parameter should be the total length of the buffer or the length past the offset. My concern is more that the documentation of the constructors seems to indicate it's the latter, while internally the code appears to use the former definition.

For the current change, I went with the simplest and smallest targeted fix I could find, considering I'm not familiar with the `FastBufferReader` code and the implications of modifying it.
This is an immediate point of concern because it means we'll be fixing this for the people who are currently having this issue by introducing new issues for other users. I don't think we should change the meaning of an existing setting in a way that makes existing configurations invalid. IMO, we should deprecate this setting and make a new one with a sensible default so that users who upgrade don't find themselves suddenly broken.
Fair point. Will deprecate that setting and introduce a new one.
`m_SendQueueBatchSize` was effectively the maximum payload size and this is what it will stay as. The new maximum payload size setting has been removed. The size of the send queue is now controlled by `m_MaxSendQueueSize`.
Following the comment from @ShadauxCat, I changed the behavior to avoid breaking existing user configurations. Instead of reusing the 'Send Queue Batch Size' setting to control the size of the send queue, a new one is introduced ('Max Send Queue Size'). The 'Send Queue Batch Size' setting remains the maximum size of a single batch of sends, which is also effectively the maximum payload size (I dropped the new setting for this). This is exactly what that setting meant before (even though the naming is a tad unfortunate). The description of the PR has been edited accordingly.
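For reference, a rough sketch of how the settings could end up looking (field names follow the discussion above; tooltips and default values here are assumptions, not the adapter's actual defaults):

```csharp
using UnityEngine;

// Illustrative settings layout; not the adapter's actual component.
public class TransportSettingsSketch : MonoBehaviour
{
    // Kept with its original meaning: the maximum size of a single batch of
    // sends, which is effectively the maximum payload size.
    [Tooltip("Maximum size in bytes of a single batch of sends (effectively the maximum payload size).")]
    [SerializeField] int m_SendQueueBatchSize = 6 * 1024; // assumed default

    // New setting: how much data the adapter will queue when UTP's internal
    // queues are full, before reporting an error.
    [Tooltip("Maximum size in bytes of the send queue that buffers packets when UTP can't accept more sends.")]
    [SerializeField] int m_MaxSendQueueSize = 16 * 6 * 1024; // assumed default
}
```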
* Add batched send/receive queues: The send queue is meant to be used to store individual messages and read them back in batched chunks (to be sent on the network). It's also meant to act as a sort of overflow storage when UTP internal queues are full. The receive queue is meant to store batched messages (batched by the send queue) and read them out one at a time.
* Allow tests to access internals
* Remove dummy tests
* Update UnityTransport to use new batched queues
* Update transport tests
* Add latest changes to CHANGELOG
* Minor stylistic improvements for consistency
* Fix whitespace issue
* Ignore test until fix for UTP is released
* Correctly handle ArraySegment not at beginning of array: When handling incoming messages from the transport
* Don't change how existing settings behave: m_SendQueueBatchSize was effectively the maximum payload size and this is what it will stay as. The new maximum payload size setting has been removed. The size of the send queue is now controlled by m_MaxSendQueueSize.
* Improve "too many in-flight" error message
* Had left conflict markers in .meta files...
* Stylistic improvements

Co-authored-by: Andrew Spiering <[email protected]>
This addresses MTT-1887.
UTP is limited to 32 in-flight packets on reliable pipelines. When that limit is reached, the error about having "too many in-flight packets" is produced. Many users have reported seeing this error and it's a frustrating one because there's nothing they can do about it (except reducing the scope of their game, which is probably not acceptable).
Increasing the limit in UTP would impose additional overhead on reliable packets, something the DOTS NetCode team feels strongly against. Increasing the limit without adding this extra overhead would pose a significant engineering challenge. Instead, the chosen solution is to queue packets in the adapter when UTP's limit is reached, which mirrors what DOTS NetCode does in the same situation.
A new configuration setting ('Max Send Queue Size') is introduced to control the size of that queue. The existing 'Send Queue Batch Size' setting remains and controls the maximum size of a single batch of sends (which is effectively the maximum payload size). The naming is a bit unfortunate, but it preserves the previous behavior.
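As a sketch of the queueing strategy (all names here are hypothetical, not the adapter's actual API): each update flushes as many queued batches as UTP will accept, and anything refused simply stays queued for the next update instead of producing an error.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the overflow strategy described above.
class OverflowSendSketch
{
    readonly Queue<byte[]> m_SendQueue = new Queue<byte[]>();
    readonly Func<byte[], bool> m_TrySend; // stand-in for handing a batch to UTP

    public OverflowSendSketch(Func<byte[], bool> trySend) => m_TrySend = trySend;

    public void Enqueue(byte[] batch) => m_SendQueue.Enqueue(batch);

    // Called once per update. When UTP hits its in-flight limit, TrySend
    // fails and the remaining batches stay queued for the next update,
    // instead of surfacing a "too many in-flight packets" error.
    public void Flush()
    {
        while (m_SendQueue.Count > 0 && m_TrySend(m_SendQueue.Peek()))
        {
            m_SendQueue.Dequeue();
        }
    }
}
```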
A note on the implementation: the new queue (`BatchedSendQueue`) is written in such a way that it could be passed to a Burst-compiled job. My intention for a future change is to put the batched sending of packets into its own job. This is also why all that code has been moved into the `Update` method.

Once/If approved, I intend to backport these changes to 1.0.0 and 1.1.0 since they resolve an issue that hits many users (the "too many in-flight packets" error).
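To illustrate why that matters (the real `BatchedSendQueue` layout isn't shown here, and `SendBatchesJob` is a made-up example), a struct that holds only unmanaged native collections can be copied into a Burst-compiled job, so the batched sending could later run off the main thread:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Hypothetical job over the queue's native storage; not the adapter's code.
[BurstCompile]
struct SendBatchesJob : IJob
{
    public NativeList<byte> QueueData; // stand-in for the queue's native storage

    public void Execute()
    {
        // The batched send work would happen here; for the sketch we just
        // drain the storage.
        QueueData.Clear();
    }
}
```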
Changelog
com.unity.netcode.adapter.utp
Testing and Documentation