
Revert the ExecutionContext after processing requests #14027


Closed
davidfowl wants to merge 3 commits from davidfowl/new-executioncontext

Conversation

davidfowl
Member

  • If user code modifies the ExecutionContext it could leak to the next HTTP request. This happens because nothing reverts it to the original on the way out. We piggyback on the async state machine logic to do this (see the sketch below).
  • Added a test

PS: I'm guessing this could potentially regress performance, so this solution might not be the final one (depending on results).

Fixes #13991
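
For context, here's a minimal sketch (not Kestrel code; the names are made up for illustration) of the mechanism this piggybacks on: an async method captures the ExecutionContext when it starts and restores it when it completes, so AsyncLocal changes made inside it don't flow back to the caller, while changes made by a plain synchronous call do.

using System;
using System.Threading;
using System.Threading.Tasks;

class ExecutionContextDemo
{
    static readonly AsyncLocal<int> Local = new AsyncLocal<int>();

    // A synchronous call has no ExecutionContext boundary, so the change
    // stays visible to the caller afterwards.
    static void SyncMutate() => Local.Value = 1;

    // An async method is an ExecutionContext boundary: the state machine
    // restores the caller's context when the method completes.
    static async Task AsyncMutateAsync()
    {
        Local.Value = 2;
        await Task.Yield();
    }

    static async Task Main()
    {
        SyncMutate();
        Console.WriteLine(Local.Value); // 1 - the synchronous change leaked to the caller

        Local.Value = 0;
        await AsyncMutateAsync();
        Console.WriteLine(Local.Value); // 0 - the async state machine reverted it
    }
}

A non-async application delegate behaves like SyncMutate above from Kestrel's point of view, which is why its AsyncLocal changes can leak into the next request unless something reverts the context.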

@davidfowl davidfowl added this to the 5.0.0-preview1 milestone Sep 16, 2019
@halter73
Member

Are you planning to rewrite this to manually capture and restore the execution context before merging, @davidfowl?

@davidfowl
Member Author

Yes, but I want to measure this first and try out the new API I proposed here: https://github.com/dotnet/corefx/issues/41131
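
For reference, a rough sketch of what the manual capture/restore approach could look like (ExecutionContext.Restore is the API proposed in that issue and didn't exist at the time; _application and httpContext are placeholder names, not the actual Kestrel fields):

var initialExecutionContext = ExecutionContext.Capture();
try
{
    await _application.ProcessRequestAsync(httpContext);
}
finally
{
    if (initialExecutionContext != null)
    {
        // Put the captured context back so nothing the application changed
        // via AsyncLocal flows into the next request on this connection.
        ExecutionContext.Restore(initialExecutionContext);
    }
}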


Member


I've been playing with this test in this branch to see how these changes affect the ExecutionContext, but even after these changes the behavior is kinda weird IMHO. I was taking a look at the actual value of local.Value in the OnStarting and OnCompleted callbacks, and it doesn't make much sense to me.

The following version of ExecuteApplication still passes the test with these changes:

Task ExecuteApplication(HttpContext context)
{
    var value = local.Value;
    Assert.Equal(0, value);
    local.Value++;

    context.Response.OnStarting(() =>
    {
        Assert.Equal(1, local.Value);
        local.Value++;
        return Task.CompletedTask;
    });

    context.Response.OnCompleted(() =>
    {
        Assert.Equal(0, local.Value);
        local.Value++;
        return Task.CompletedTask;
    });

    context.Response.ContentLength = 1;
    return context.Response.WriteAsync(value.ToString());
}

Notice that local.Value is 1 at the start of the OnStarting callback, but has reverted to 0 by the time the OnCompleted callback is called.

I think that if local.Value was reverted to 0 in the OnStarting callback or if local.Value was not reverted in either callback and remained 2 at the start of the OnCompleted callback, it would be more consistent/logical.

Member Author


One other option that might be less weird would be to reset after the entire request. I looked at that but it felt overly complicated

Member


@anurse and I talked about this. We think this behavior is the result of OnStarting being invoked inside of application.ProcessRequestAsync via Response.WriteAsync, whereas OnCompleted gets called after application.ProcessRequestAsync exits (ignoring a possible call to Response.PipeWriter.CompleteAsync() or something like that).

I don't think there's much we can do to fix that.
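
To make the ordering concrete, a simplified sketch (the method and field names here are illustrative, not the actual Kestrel members):

async Task ProcessRequestSketchAsync(HttpContext context)
{
    // The application's first Response.WriteAsync fires the OnStarting
    // callbacks, so they run inside the application's flow and still see
    // its AsyncLocal changes (local.Value == 1 in the test above).
    await application.ProcessRequestAsync(context);

    // By this point the application's state machine has completed and its
    // ExecutionContext changes have been discarded, so the OnCompleted
    // callbacks observe the original value again (local.Value == 0).
    await FireOnCompletedCallbacksAsync();
}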

Member


It might still be worth adding those extra asserts in the OnStarting/OnCompleted callbacks so we don't unintentionally change/regress behavior around this though.

Member Author


Hmmm, this might actually be problematic if you expect to access async locals in OnStarting without having written anything (though that might be rare). I need to run some tests with the IHttpContextAccessor to see how it affects those callbacks. This is basically a breaking change.
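
For context on why IHttpContextAccessor is interesting here: a simplified sketch of how an accessor like it is typically backed by an AsyncLocal (the real implementation differs in details), which is why anything that reverts the ExecutionContext can also change what the accessor returns in later callbacks.

using System.Threading;
using Microsoft.AspNetCore.Http;

// Simplified sketch, not the real HttpContextAccessor.
public class SimplifiedHttpContextAccessor
{
    private static readonly AsyncLocal<HttpContext> _current = new AsyncLocal<HttpContext>();

    // Whatever ExecutionContext is current when this getter runs determines
    // which HttpContext (if any) comes back.
    public HttpContext HttpContext
    {
        get => _current.Value;
        set => _current.Value = value;
    }
}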

Member


One other option that might be less weird would be to reset after the entire request. I looked at that but it felt overly complicated

Wasn't sure if that would be correct behaviour, but it should be simpler. Give me a few mins; will make a PR so you can see what it looks like.

Member Author


That adds more per-request allocation in other cases. I was trying to avoid that.

"0");
}
}
}
Member


The following is the HTTP/2 version of the test, to catch possible regressions. It should be able to go into any Http2TestBase-derived test class.

[Fact]
public async Task Frame_MultipleStreams_DoesNotLeakAcrossRequests()
{
    var local = new AsyncLocal<int>();

    // It's important this method isn't async as that will revert the ExecutionContext
    Task ExecuteApplication(HttpContext context)
    {
        var value = local.Value;
        Assert.Equal(0, value);
        local.Value++;

        context.Response.OnStarting(() =>
        {
            Assert.Equal(1, local.Value);
            local.Value++;
            return Task.CompletedTask;
        });

        context.Response.OnCompleted(() =>
        {
            Assert.Equal(0, local.Value);
            local.Value++;
            return Task.CompletedTask;
        });

        context.Response.ContentLength = 1;
        return context.Response.WriteAsync(value.ToString());
    }

    await InitializeConnectionAsync(ExecuteApplication);

    await StartStreamAsync(1, _browserRequestHeaders, endStream: true);
    await ExpectAsync(Http2FrameType.HEADERS,
        withLength: 55,
        withFlags: (byte)Http2HeadersFrameFlags.END_HEADERS,
        withStreamId: 1);

    var dataFrame1 = await ExpectAsync(Http2FrameType.DATA,
        withLength: 1,
        withFlags: (byte)Http2DataFrameFlags.NONE,
        withStreamId: 1);

    await ExpectAsync(Http2FrameType.DATA,
        withLength: 0,
        withFlags: (byte)Http2DataFrameFlags.END_STREAM,
        withStreamId: 1);

    Assert.Equal((byte)'0', dataFrame1.Payload.Span[0]);

    await StartStreamAsync(3, _browserRequestHeaders, endStream: true);
    await ExpectAsync(Http2FrameType.HEADERS,
        withLength: 55,
        withFlags: (byte)Http2HeadersFrameFlags.END_HEADERS,
        withStreamId: 3);

    var dataFrame2 = await ExpectAsync(Http2FrameType.DATA,
        withLength: 1,
        withFlags: (byte)Http2DataFrameFlags.NONE,
        withStreamId: 3);

    await ExpectAsync(Http2FrameType.DATA,
        withLength: 0,
        withFlags: (byte)Http2DataFrameFlags.END_STREAM,
        withStreamId: 3);

    Assert.Equal((byte)'0', dataFrame2.Payload.Span[0]);

    await StopConnectionAsync(expectedLastStreamId: 3, ignoreNonGoAwayFrames: false);
}

@analogrelay
Contributor

@davidfowl What are we doing here? There are two PRs, this one and @benaadams's (#14146). Seems like the sentiment in the triage room is that we like @benaadams's better.

@davidfowl
Member Author

@davidfowl What are we doing here? There are two PRs, this one and @benaadams's (#14146). Seems like the sentiment in the triage room is that we like @benaadams's better.

Agree, waiting for performance runs to test the changes. I'll close this one.

@davidfowl davidfowl closed this Nov 6, 2019
@dougbu dougbu deleted the davidfowl/new-executioncontext branch May 18, 2020 19:43
@amcasey amcasey added the area-networking label and removed the area-runtime label Jun 6, 2023
Successfully merging this pull request may close these issues.

Kestrel is causing AsyncLocal to bleed context data across requests on same HTTP/1.1 connection