
Conversation

davidfowl
Member

@davidfowl davidfowl commented May 11, 2021

  • Increase the IIS default pause threshold to 1MB. This matches Kestrel. Today IIS only allows 65K of slack for incoming request bodies, which limits the maximum read size possible when doing large uploads.
  • ~~Left the setting internal for now with the intent of exposing a public API in the future.~~ Added a public API to configure the buffer size.
  • ~~Remove the double thread pool dispatch. Today we dispatch continuations from the IIS thread to a thread pool thread, resulting in 2 dispatches per read. These continuations are within our own managed code implementation, so we can safely assume they don't block; get rid of one of them.~~ This was causing issues, so will revisit it later.

Improves #32467

Anecdotal performance improvement: ~3x for large uploads.
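For reference, once the public option lands, opting into a larger (or smaller) buffer might look like the sketch below. This assumes the setting surfaces on `IISServerOptions` as `MaxRequestBodyBufferSize`; the exact name and shape are subject to API review.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Hypothetical: raise the IIS request-body buffer (pause threshold) from the
// old 64K default to 1MB. Property name assumes the API proposed in this PR.
builder.Services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodyBufferSize = 1024 * 1024;
});

var app = builder.Build();
app.Run();
```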

@ghost ghost added the area-runtime label May 11, 2021
@JamesNK
Member

JamesNK commented May 12, 2021

What are the before/after numbers?

@davidfowl
Member Author

I still need to measure. I'm blocked on updating the SDK to a reasonable baseline right now, since there's a breaking change in there that breaks VS. Before this change, though, I hacked together something that did similar things and saw a 3x improvement in upload latency (with a single client). I've yet to write an automated benchmark for this scenario; I've been using the customer's application to test.

@davidfowl
Member Author

To answer your question, though: it went from 9 seconds to 2.5 seconds to upload 700MB.

@davidfowl davidfowl force-pushed the davidfowl/bump-iis-buffer branch from 06983af to 77b7453 Compare May 13, 2021 04:17
@halter73
Member

To answer your question, though: it went from 9 seconds to 2.5 seconds to upload 700MB.

So it went from about 0.6 Gbps to about 2.2 Gbps on a single loopback connection? Do you have any instructions to try it ourselves?

The downside of this is the increased per-request memory consumption when the app isn't quickly reading the body. Kestrel has a similar limit, but in HTTP/2 it's shared across the connection so you could argue it's up to 100x less in that case.
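As a sanity check on those figures (using the 700MB before/after timings quoted above, and treating MB as 10^6 bytes):

```csharp
// Back-of-envelope throughput for a 700MB upload in 9s vs 2.5s.
const double megabytes = 700;
const double bitsPerByte = 8;

double GbpsFor(double seconds) => megabytes * bitsPerByte / 1000 / seconds;

Console.WriteLine($"Before: {GbpsFor(9):F2} Gbps");   // ~0.62 Gbps
Console.WriteLine($"After:  {GbpsFor(2.5):F2} Gbps"); // ~2.24 Gbps
```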

@davidfowl
Member Author

Server

```csharp
app.MapPost("/", async (context) =>
{
    // Remove the request body size limit for this large-upload test.
    context.Features.Get<IHttpMaxRequestBodySizeFeature>().MaxRequestBodySize = null;

    Console.WriteLine(context.Request.ContentLength);
    using var fs = new FileStream(targetPath, FileMode.Create);

    var reader = context.Request.BodyReader;

    while (true)
    {
        var read = await reader.ReadAsync();
        var buffer = read.Buffer;

        if (!buffer.IsEmpty)
        {
            // Copy the sequence into a rented array and write it to disk,
            // returning the array even if the write throws.
            var length = (int)buffer.Length;
            var data = ArrayPool<byte>.Shared.Rent(length);
            try
            {
                buffer.CopyTo(data);
                await fs.WriteAsync(data.AsMemory(0, length));
            }
            finally
            {
                ArrayPool<byte>.Shared.Return(data);
            }
        }

        reader.AdvanceTo(buffer.End);

        if (read.IsCompleted) break;
    }
});
```
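A variant of the read loop above that skips the intermediate ArrayPool copy by writing each segment of the `ReadOnlySequence<byte>` directly to the file (a sketch, not part of this PR):

```csharp
while (true)
{
    var read = await reader.ReadAsync();
    var buffer = read.Buffer;

    // Write each segment of the sequence straight to the FileStream,
    // avoiding the rented-array copy used in the version above.
    foreach (var segment in buffer)
    {
        await fs.WriteAsync(segment);
    }

    reader.AdvanceTo(buffer.End);

    if (read.IsCompleted) break;
}
```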

Client

```csharp
using System.Diagnostics;
using System.IO;
using System.Net.Http;
using static System.Console;

string url = "http://localhost:5000";
var path = ""; // Big file goes here.
using var file = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 81920);
var length = file.Length / (1024 * 1024);
var request = new HttpRequestMessage(HttpMethod.Post, url)
{
    Content = new StreamContent(file, 81920)
};
var sw = Stopwatch.StartNew();
var response = await new HttpClient().SendAsync(request);
WriteLine(response);
WriteLine($"Uploaded {length}MB in {sw.ElapsedMilliseconds}ms");
ReadLine();
```

@jkotalik
Contributor

I'd be curious what the difference is for HTTPS.

@halter73
Member

And HTTP/2 as well.

@JamesNK
Member

JamesNK commented May 13, 2021

Big-file upload speed with HttpClient over HTTP/2 is limited by the client's flow-control window size. You'd need to test with something else 😬

@davidfowl
Member Author

All we're doing here is changing the buffer size so the app can read chunks bigger than 65K. I chose 1MB to match the Kestrel transport buffer, but the two are a bit different, so I don't mind tweaking the numbers. At the end of the day it needs to be configurable. I'm not going to spend too much time right now finding the right number for all scenarios; for the specific scenario filed (HTTP/1.1 upload), this improves the situation.

I can spend time exposing the option and leaving the default at 65K if that's preferable, but I'm not going to set up multiple environments to run this in before merging.

I'll make sure we have a decent file upload benchmark for 6.0, though.

@davidfowl
Member Author

/azp run aspnetcore-ci

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@halter73
Member

👍 for adding the option. As you say, a similar setting is exposed in Kestrel. This should make benchmarking various configurations easier down the road.

@davidfowl
Member Author

@halter73 OK, how about I expose it in this PR and it goes to API review on Monday?

@davidfowl davidfowl added this to the 6.0-preview5 milestone May 14, 2021
@davidfowl davidfowl added feature-iis Includes: IIS, ANCM api-ready-for-review API is ready for formal API review - https://github.com/dotnet/apireviews labels May 14, 2021
@ghost

ghost commented May 14, 2021

Thank you for submitting this for API review. This will be reviewed by @dotnet/aspnet-api-review at the next meeting of the ASP.NET Core API Review group. Please ensure you take a look at the API review process documentation and ensure that:

  • The PR contains changes to the reference-assembly that describe the API change. Or, you have included a snippet of reference-assembly-style code that illustrates the API change.
  • The PR describes the impact to users, both positive (useful new APIs) and negative (breaking changes).
  • Someone is assigned to "champion" this change in the meeting, and they understand the impact and design of the change.

@pranavkm pranavkm added api-approved API was approved in API review, it can be implemented and removed api-ready-for-review API is ready for formal API review - https://github.com/dotnet/apireviews labels May 17, 2021
@davidfowl davidfowl merged commit 7b0f2b6 into main May 17, 2021
@davidfowl davidfowl deleted the davidfowl/bump-iis-buffer branch May 17, 2021 19:28
@amcasey amcasey added area-networking Includes servers, yarp, json patch, bedrock, websockets, http client factory, and http abstractions and removed area-runtime labels Jun 2, 2023