Improvements to IIS IO #32570
Conversation
What are the before/after numbers?
I still need to measure. I'm blocked on updating the SDK to a reasonable baseline right now, since there's a breaking change in it that breaks VS. Before this change, though, I hacked something together that did similar things and saw a 3x improvement in upload latency (with a single client). I've yet to write an automated benchmark for this scenario; I've been using the customer's application to test.
To answer your question, though: it went from 9 seconds to upload 700MB down to 2.5 seconds.
- Increase the IIS default pause threshold to 1MB. This matches Kestrel. Today IIS only allows 65K of slack for incoming request bodies, which limits the maximum read size possible when doing large uploads.
- Left the setting internal for now, with the intent of exposing a public API in the future.
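For comparison, Kestrel already exposes its analogous limit publicly. A minimal sketch of setting it, assuming a standard minimal-hosting app (the 1MB value mirrors what this PR adopts for IIS):

```csharp
// Sketch: Kestrel's equivalent buffering limit, for comparison.
// MaxRequestBufferSize caps how much request body Kestrel buffers
// before pausing reads from the client; setting it to null removes
// the cap entirely.
var builder = WebApplication.CreateBuilder(args);
builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.MaxRequestBufferSize = 1024 * 1024; // 1MB
});
```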
Force-pushed from 06983af to 77b7453.
So it went from about 0.6 Gbps to about 2.2 Gbps on a single loopback connection? Do you have any instructions to try it ourselves? The downside of this is the increased per-request memory consumption when the app isn't quickly reading the body. Kestrel has a similar limit, but in HTTP/2 it's shared across the connection, so you could argue it's up to 100x less in that case.
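The throughput figures follow directly from the numbers quoted earlier (700MB in 9s before, 2.5s after); a quick back-of-the-envelope check:

```csharp
// Sanity check of the quoted throughput figures.
// 700 MB * 8 bits/byte = 5600 Mbit = 5.6 Gbit transferred.
const double gigabits = 700 * 8 / 1000.0;
double beforeGbps = gigabits / 9.0;  // ~0.62 Gbps
double afterGbps  = gigabits / 2.5;  // ~2.24 Gbps
Console.WriteLine($"{beforeGbps:F2} Gbps -> {afterGbps:F2} Gbps");
```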
Server

```csharp
using System.Buffers;
using Microsoft.AspNetCore.Http.Features;

var targetPath = ""; // Destination file goes here.

app.MapPost("/", async (context) =>
{
    // Remove the per-request body size limit for this large-upload test.
    context.Features.Get<IHttpMaxRequestBodySizeFeature>().MaxRequestBodySize = null;
    Console.WriteLine(context.Request.ContentLength);
    using var fs = new FileStream(targetPath, FileMode.Create);
    var reader = context.Request.BodyReader;
    while (true)
    {
        var read = await reader.ReadAsync();
        var buffer = read.Buffer;
        if (!buffer.IsEmpty)
        {
            // Copy the (possibly multi-segment) buffer into a pooled
            // array, write it to disk, then return the array.
            var length = (int)buffer.Length;
            var data = ArrayPool<byte>.Shared.Rent(length);
            buffer.CopyTo(data);
            await fs.WriteAsync(data.AsMemory(0, length));
            ArrayPool<byte>.Shared.Return(data);
        }
        // Mark everything as consumed so the pipe can refill.
        reader.AdvanceTo(buffer.End);
        if (read.IsCompleted) break;
    }
});
```

Client

```csharp
using System.Diagnostics;
using System.IO;
using System.Net.Http;
using static System.Console;

string url = "http://localhost:5000";
var path = ""; // Big file goes here.
using var file = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 81920);
var length = file.Length / (1024 * 1024); // size in MB, for reporting
var request = new HttpRequestMessage(HttpMethod.Post, url)
{
    Content = new StreamContent(file, 81920)
};
var sw = Stopwatch.StartNew();
var response = await new HttpClient().SendAsync(request);
WriteLine(response);
WriteLine($"Uploaded {length}MB in {sw.ElapsedMilliseconds}ms");
ReadLine();
```
I'd be curious what the difference is for HTTPS.
And HTTP/2 as well.
Big-file upload speed with HttpClient over HTTP/2 is limited by the client's window size, so you'd need to test with something else 😬
All we're doing here is changing the buffer size so the app can read chunks bigger than 65K. I chose 1MB to match Kestrel's transport buffer, though it's a bit different, so I don't mind tweaking the number. At the end of the day it needs to be configurable. I'm not going to spend too much time right now finding the right number for all scenarios; for the specific scenario filed (HTTP/1.1 upload), this improves the situation. I can spend time exposing the option and leaving the default at 65K if that's preferable, but I'm not going to set up multiple environments to run this in before merging. I'll make sure we have a decent file-upload benchmark for 6.0, though.
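A sketch of what the configurable option could look like, following the pattern of other `IISServerOptions` settings (the property name `MaxRequestBodyBufferSize` is an assumption here, not an API confirmed by this PR):

```csharp
// Hypothetical sketch only: exposing the IIS read-buffer limit as a
// public option. The name MaxRequestBodyBufferSize is assumed for
// illustration; the PR discussion only says the setting will be exposed.
builder.Services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodyBufferSize = 1024 * 1024; // 1MB, the default proposed here
});
```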
/azp run aspnetcore-ci
Azure Pipelines successfully started running 1 pipeline(s). |
👍 for adding the option. As you say, a similar setting is exposed in Kestrel. This should make benchmarking various configurations easier down the road.
@halter73 OK, how about I expose it in this PR and it goes to API review on Monday? |
Thank you for submitting this for API review. This will be reviewed by @dotnet/aspnet-api-review at the next meeting of the ASP.NET Core API Review group. Please ensure you take a look at the API review process documentation and ensure that:
Co-authored-by: Pranav K <[email protected]>
- Added a public API to configure the buffer size (the setting was initially left internal, with the intent of exposing a public API in the future).
- Remove double thread pool dispatch. Today we dispatch continuations from the IIS thread to a thread pool thread, which resulted in 2 dispatches per read. These continuations are within our own managed code implementation, so we can safely assume they don't block, and one of the dispatches can be removed. This was causing issues, so we will revisit it later.
- Improves #32467
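One common mechanism behind the double-dispatch item (an illustration of the general technique, not necessarily the exact code path in IIS): a `TaskCompletionSource` created with `RunContinuationsAsynchronously` forces every awaiting continuation onto the thread pool, while omitting the flag runs the continuation inline on the completing thread, saving one dispatch per operation.

```csharp
// Illustrative only: the flag below is what forces a thread pool
// dispatch for each continuation. Dropping it runs the continuation
// inline on the thread that calls TrySetResult (here, the I/O thread),
// which is safe only when the continuation is known not to block.
var dispatched = new TaskCompletionSource<int>(
    TaskCreationOptions.RunContinuationsAsynchronously);

var inline = new TaskCompletionSource<int>(); // continuation runs inline
```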
Anecdotal performance improvement: ~3x for large uploads.