Is your feature request related to a problem? Please describe.
I want to be able to block a known bad query from reaching my database and taking it down. A bad query example could be something that's too expensive to run at scale, e.g. a sequential scan.
Describe the solution you'd like
I want to be able to identify the most expensive queries, e.g. SHOW EXPENSIVE QUERIES;, and block any query based on some kind of identifier, like a hash, e.g. BLOCK QUERY a5dc34a;. The pooler should return an error to the client every time the client tries to run a query matching that hash, e.g. FATAL: query blocked by pooler.
A simple measure of cost can be how long a query took to run, in milliseconds.
The hash could be the same pg_stat_statements uses, to keep things consistent.
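As a rough illustration of the idea, here is a minimal sketch of pooler-side blocking. Note that pg_stat_statements fingerprints the parse tree, not the query text, so the regex-based normalization below is only an approximation, and the `Pooler` class and its methods are hypothetical names, not part of any existing pooler.

```python
import hashlib
import re

def fingerprint(query: str) -> str:
    """Return a short hash identifying a query's shape (literals stripped)."""
    normalized = query.strip().rstrip(";").lower()
    normalized = re.sub(r"'[^']*'", "?", normalized)  # mask string literals
    normalized = re.sub(r"\b\d+\b", "?", normalized)  # mask numeric literals
    normalized = re.sub(r"\s+", " ", normalized)      # collapse whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()[:7]

class Pooler:
    """Hypothetical pooler that refuses queries whose hash is blocked."""

    def __init__(self) -> None:
        self.blocked: set[str] = set()

    def block_query(self, query_hash: str) -> None:
        # What `BLOCK QUERY a5dc34a;` might do internally.
        self.blocked.add(query_hash)

    def execute(self, query: str) -> str:
        if fingerprint(query) in self.blocked:
            raise RuntimeError("FATAL: query blocked by pooler")
        return "ok"  # a real pooler would forward to the database here
```

Because literals are masked before hashing, `SELECT * FROM users WHERE id = 1` and `SELECT * FROM users WHERE id = 42` share one fingerprint, so blocking the hash once blocks the whole query family.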
Describe alternatives you've considered
I don't think anything like this currently exists in the Postgres ecosystem.
Additional context
Large monoliths have issues with releasing bad queries that affect multiple products. This feature can help block such queries and restore the app to a healthy state more quickly than a traditional rollback.
@blisabda thank you for linking me to that! That makes query blocking way easier. I've been using that library in a new PR I'm working on, #525, and can confirm it works great. I'm glad it uses the internal Postgres parser code too, so I have very high confidence its hashes will match what we see in pg_stat_statements.