Description
Is your feature request related to a problem? Please describe.
Long callbacks run for all Figures on all of my application's pages. When I switch between pages quickly, the tasks in the backend Celery queue execute in order even though the older requests are outdated.
For example: if I'm on page 1 and navigate to page 2, then quickly to page 3 and then page 4, the long callbacks for the Figures on pages 2, 3, and 4 run in order, delaying the delivery of the Figures relevant to page 4.
However, this isn't the case if I refresh a single page multiple times. If I'm on page 2 and I refresh the page 5 times, the long callbacks for the Figures associated with page 2 don't execute 5 times in order: the first 4 'render requests' are 'revoked' in the Celery queue, and only the final request completes.
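For concreteness, here's a minimal sketch of the kind of setup involved, using Dash's Celery-backed long callback manager. The Redis URLs, component IDs, and the sleep-based figure builder are placeholders standing in for my real app:

```python
import time

from celery import Celery
from dash import Dash, dcc, html
from dash.dependencies import Input, Output
from dash.long_callback import CeleryLongCallbackManager

celery_app = Celery(
    __name__,
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)
manager = CeleryLongCallbackManager(celery_app)

app = Dash(__name__, long_callback_manager=manager)
app.layout = html.Div(
    [
        dcc.Location(id="url"),
        # One Graph per "page"; in the real app these live in per-page layouts.
        dcc.Graph(id="page-figure"),
    ]
)


@app.long_callback(
    Output("page-figure", "figure"),
    Input("url", "pathname"),
)
def render_page_figure(pathname):
    # Stand-in for an expensive query/plot; each page switch queues one of these.
    time.sleep(30)
    return {"data": [], "layout": {"title": f"Figure for {pathname}"}}


if __name__ == "__main__":
    app.run_server(debug=True)
```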
Describe the solution you'd like
I'd like a page switch to trigger the termination of the now-outdated long callbacks in the backend.
Describe alternatives you've considered
Within the universe of Dash objects, I considered implementing a 'Sentry' that watches the URL and triggers a cancel signal bound to all long callbacks, cancelling them via the long callbacks' 'cancel' interface. However, I don't think the Dash execution graph can guarantee that this callback would always run before the new, up-to-date callbacks are scheduled, so I don't think this is a reliable solution.
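For reference, a rough sketch of what I had in mind, expressed with long_callback's cancel argument against the setup above (binding the URL directly as the cancel trigger; the component IDs are illustrative):

```python
@app.long_callback(
    Output("page-figure", "figure"),
    Input("page-refresh", "n_clicks"),
    # Any pathname change fires the cancel for this job, 'Sentry'-style.
    cancel=[Input("url", "pathname")],
)
def render_page_figure(n_clicks):
    ...
```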
Additional context
Here's an idea for a patch:
The dash_renderer frontend knows when it is about to execute a callback for a job that is still running. If a to-be-called callback's output list matches the output list of a job the frontend is still waiting on, it sends that job's ID to the backend as an 'oldJob' value in the request headers. Source
In the backend, receiving these 'oldJob' IDs triggers the termination of those jobs in the Celery backend. Source
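As I understand it, the Celery-side effect of that termination is essentially a revoke of the old task. A sketch of the underlying mechanism (celery_app and job_id are placeholders here, not Dash's exact internals):

```python
def terminate_old_job(job_id: str) -> None:
    """Revoke a queued task, or kill it if it's already executing."""
    # celery_app is the same Celery instance handed to the long callback manager.
    celery_app.control.revoke(job_id, terminate=True)
```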
It's evident that the frontend is already doing job bookkeeping, tracking the 'output' list of jobs that have been scheduled in the backend but haven't yet returned to the frontend. If the frontend could also track the page each job was issued for, and compare that to window.location.pathname during the existing job cleanup, the 'oldJob' param could be set and the backend could clean up any currently running long callbacks for the previous page.
A parameter could be exposed on the Dash Python object to enable or disable this feature. It generally would NOT be mutually exclusive with memoization, because cancellations wouldn't always happen; it would simply be more conservative, and memoization wouldn't always get the opportunity to kick in.
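For example, the opt-in could look something like this; the parameter name is invented purely to illustrate the proposal and does not exist today:

```python
app = Dash(
    __name__,
    long_callback_manager=manager,
    # Hypothetical flag: cancel long callbacks whose originating page
    # has been navigated away from.
    cancel_stale_page_callbacks=True,
)
```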