Request limits middleware PoC #32886
Conversation
Force-pushed from 7130ee3 to 9d02c46
namespace RequestLimiterSample
{
    [RequestLimit(requestPerSecond: 10)]
It's unlikely this would ever be hardcoded; I wonder how useful this is as an attribute. Maybe we need it for completeness.
What do you mean hardcoded? I think it's probably useful to provide a default implementation that takes common parameters such as "x requests per second", since it's super easy to use. Otherwise, requiring policies to be configured in ConfigureServices and a policy name to be specified here may be overly complicated?
I mean read from configuration and set based on the environment
Ah yes, config-driven limits. I haven't implemented that part yet.
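To make the config-driven idea concrete, a minimal sketch of binding the limit from configuration and registering it as a named policy. The AddRequestLimiter/AddPolicy registration shape and the RequestPerSecond property are illustrative assumptions, not the PoC's actual API; only the idea of the attribute referencing a policy name comes from the discussion above.

// appsettings.json (hypothetical section):
// { "RequestLimits": { "Default": { "RequestPerSecond": 10 } } }

// In Startup.ConfigureServices. AddRequestLimiter, AddPolicy and RequestPerSecond
// are illustrative names, not the PoC's API surface.
var requestsPerSecond = Configuration.GetValue<int>("RequestLimits:Default:RequestPerSecond");

services.AddRequestLimiter(options =>
{
    // A named policy the attribute could reference instead of a hardcoded value,
    // e.g. [RequestLimit("default")] on the controller.
    options.AddPolicy("default", policy => policy.RequestPerSecond = requestsPerSecond);
});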
private readonly long _newResourcePerSecond;

private Timer _renewTimer;
// This is racy
yes
Force-pushed from 41d5bb4 to 52e0b9c
if (attributes == null)
{
    // TODO: Apply default policy
Suggestion: add another piece of metadata to skip rate limiting.
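Roughly what that could look like; SkipRequestLimitAttribute and the middleware's _next field are hypothetical names used only for this sketch.

// Hypothetical marker metadata that opts an endpoint out of rate limiting entirely.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class SkipRequestLimitAttribute : Attribute
{
}

// In the middleware, before resolving any policy:
var endpoint = context.GetEndpoint();
if (endpoint?.Metadata.GetMetadata<SkipRequestLimitAttribute>() != null)
{
    await _next(context);
    return;
}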
state: null,
onDispose: resource => Release(requestedCount));
Could this code be used to remove the closure?
Suggested change:
-    state: null,
-    onDispose: resource => Release(requestedCount));
+    state: (this, requestedCount),
+    onDispose: resource =>
+    {
+        // assumes the lease exposes the passed-in state via a State property
+        var (concurrencyLimiter, count) = ((ConcurrencyLimiter, long))resource.State!;
+        concurrencyLimiter.Release(count);
+    });
My concern with that approach was that State is exposed to the user. I'm not sure exposing these internal details is a good idea. But I agree that there needs to be a way to not capture the closure.
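One possible middle ground, sketched here rather than taken from the PoC: let the lease type itself carry the limiter and the count, so no closure is captured and nothing internal needs to surface through a public State property. Only ConcurrencyLimiter.Release(requestedCount) is taken from the snippet above; everything else is illustrative.

using System;
using System.Threading;

// Illustrative only: an internal lease that releases its limiter on dispose.
internal sealed class ConcurrencyLease : IDisposable
{
    private readonly ConcurrencyLimiter _limiter;
    private readonly long _requestedCount;
    private int _disposed;

    public ConcurrencyLease(ConcurrencyLimiter limiter, long requestedCount)
    {
        _limiter = limiter;
        _requestedCount = requestedCount;
    }

    public void Dispose()
    {
        // Guard against double-dispose so the count is only released once.
        if (Interlocked.Exchange(ref _disposed, 1) == 0)
        {
            _limiter.Release(_requestedCount);
        }
    }
}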
Force-pushed from dc44609 to 191575d
{
    await Task.Delay(5000);
    await context.Response.WriteAsync("Default!");
}).EnforceLimit();
EnforceLimit gives no context here for what kind of limit. LimitRequests? LimitConcurrency? LimitRequestRate (as opposed to body rate)?
This one enforces the default limit by adding a blank attribute. I can change the name to be more explicit.
Yeah, we want to distinguish between this and other limits like request sizes, data rates, etc.
{
    while (resourceLeases.TryPop(out var resource))
    {
        _logger.LogInformation("Releasing resource");
Debug, and we want some way to identify the resource being released, like the policy name.
Oops, this is meant to be placeholder logging.
if (_options.ResolveDefaultRequestLimit != null)
{
    if (!await ApplyLimitAsync(_options.ResolveDefaultRequestLimit(context.RequestServices), context, resourceLeases))
The default only applies if there's an attribute but no values? There's no way to apply a default to requests without an attribute? AuthZ added a FallbackPolicy that applies to all requests without an attribute.
public AuthorizationPolicy? FallbackPolicy { get; set; }
Great question, I'm not sure which behaviour we want. Or maybe even both?
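If both behaviours turn out to be useful, the options could expose them separately, mirroring AuthorizationOptions. The default delegate's shape is inferred from how it's invoked in the snippet above; the fallback counterpart and the options class name are hypothetical.

public class RequestLimiterOptions
{
    // PoC: applied when an endpoint has a limit attribute without explicit values.
    public Func<IServiceProvider, AggregatedResourceLimiter<HttpContext>>? ResolveDefaultRequestLimit { get; set; }

    // Hypothetical: applied to endpoints that have no limit attribute at all,
    // analogous to AuthorizationOptions.FallbackPolicy.
    public Func<IServiceProvider, AggregatedResourceLimiter<HttpContext>>? ResolveFallbackRequestLimit { get; set; }
}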
    return false;
}

_logger.LogInformation("Resource obtained");
Debug. Identify the resource?
And never emit these log messages as plain strings, but this is a draft.
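To make those two comments concrete, a sketch of what the placeholder line might become, assuming a policyName is in scope at that point (it isn't in the PoC):

// Debug level, structured message template instead of an ad-hoc string, and the
// policy name to identify which limit the resource belongs to. In aspnetcore this
// would normally be wrapped in a LoggerMessage helper rather than called inline.
_logger.LogDebug("Resource obtained for request limit policy {PolicyName}.", policyName);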
var policy = new RequestLimiterPolicy();
configurePolicy(policy);

PolicyMap[name] = policy;
Check for conflicts? This overwrites. Otherwise it should be SetPolicy.
What does authZ do?
The same 😢
public void AddPolicy(string name, Action<AuthorizationPolicyBuilder> configurePolicy)
{
    if (name == null)
    {
        throw new ArgumentNullException(nameof(name));
    }

    if (configurePolicy == null)
    {
        throw new ArgumentNullException(nameof(configurePolicy));
    }

    var policyBuilder = new AuthorizationPolicyBuilder();
    configurePolicy(policyBuilder);
    PolicyMap[name] = policyBuilder.Build();
}
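For comparison, a conflict-checking variant of AddPolicy sketched against the PoC snippet further up, assuming PolicyMap is a Dictionary<string, RequestLimiterPolicy>:

public void AddPolicy(string name, Action<RequestLimiterPolicy> configurePolicy)
{
    if (name == null)
    {
        throw new ArgumentNullException(nameof(name));
    }

    if (configurePolicy == null)
    {
        throw new ArgumentNullException(nameof(configurePolicy));
    }

    var policy = new RequestLimiterPolicy();
    configurePolicy(policy);

    // Unlike the authZ version, refuse to silently overwrite an existing policy.
    if (!PolicyMap.TryAdd(name, policy))
    {
        throw new InvalidOperationException($"A request limiter policy named '{name}' has already been added.");
    }
}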
namespace Microsoft.AspNetCore.RequestLimiter
{
    // TODO: update implementation with WaitAsync and use MemoryCache instead of ConcurrentDictionary
Why?
It would be closer to a "remote store" based limiter example. Something like a RedisRateLimiter we envision. It's for illustration purposes.
I don't think we should think about that. We need to do what's best. I worry that we'll lose access to APIs that IMemoryCache hides.
Fair enough. Though I'm curious what you think we lose by using IMemoryCache APIs. I think it depends on how we use it.
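To ground that trade-off, roughly what a per-key limiter lookup looks like on IMemoryCache instead of a ConcurrentDictionary: expiration of idle keys is the main gain, while direct access to the underlying collection (enumeration, counts, manual eviction) is the main thing hidden. CreateLimiter and the key shape are illustrative; the cache is Microsoft.Extensions.Caching.Memory.IMemoryCache.

// Sketch only: resolve (or lazily create) the limiter for a given client key.
private readonly IMemoryCache _cache;

private ConcurrencyLimiter GetLimiterForKey(string key)
{
    return _cache.GetOrCreate(key, entry =>
    {
        // Idle keys expire and fall out of the cache, which a plain
        // ConcurrentDictionary would keep forever unless cleaned up by hand.
        entry.SlidingExpiration = TimeSpan.FromMinutes(5);
        return CreateLimiter();
    })!;
}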
private Task<bool> ApplyLimitAsync(AggregatedResourceLimiter<HttpContext> limiter, HttpContext context, Stack<ResourceLease> obtainedResources)
{
    _logger.LogInformation("Resource count: " + limiter.EstimatedCount(context));
    var resourceLeaseTask = limiter.WaitAsync(context);
Pass in the RequestAborted token
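i.e., assuming WaitAsync gets (or has) an overload that accepts a CancellationToken:

// Stop waiting for the resource if the client disconnects.
var resourceLeaseTask = limiter.WaitAsync(context, context.RequestAborted);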
var resourceLeases = new Stack<ResourceLease>();
try
{
    foreach (var attribute in attributes)
Discussed in review, but I'll leave my original comment here for others:
Applying multiple limits in sequence can cause some strange side effects. E.g. if the first limit hits a soft cap, I get queued until that's released. If the second hits another soft cap, I'm stuck in line again, likely behind other requests that arrived later and didn't acquire the first limit. If the third resource hits a hard cap and I get refused immediately, why did I bother waiting for the other two? Would it make sense to wait for all limits in parallel and abort if any are refused? It would also make multiple soft limits more fair.
Could you also get into a deadlock condition? Two requests acquiring resources in the opposite order?
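A rough sketch of the parallel acquisition idea, not the PoC's implementation. It assumes a hypothetical ResolveLimiter that maps an attribute to its limiter, that WaitAsync returns Task<ResourceLease>, and that a lease is disposable and exposes an IsAcquired flag:

// Start every wait up front instead of acquiring limits one after another.
var leaseTasks = attributes
    .Select(attribute => ResolveLimiter(attribute).WaitAsync(context))
    .ToArray();

var leases = await Task.WhenAll(leaseTasks);

if (leases.Any(lease => !lease.IsAcquired))
{
    // Abort: give back anything that was acquired so later requests aren't blocked.
    foreach (var lease in leases)
    {
        lease.Dispose();
    }

    context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
    return;
}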
Force-pushed from b544026 to 5d30a80
Force-pushed from f04fb9f to 838149d
Force-pushed from 838149d to e33467d
Moving to asplabs.
TODO: