feat: fetch immediately after IDLE #4974
Conversation
Force-pushed from 21cbe7e to 9e48f99
Nice cleanup, as far as I can tell.
Great that you dug this old issue up! My gut feeling is that we shouldn't merge a PR touching such a sensitive area right before the release (plus, I did find one blocker for this PR, so we can't merge it right away), but even if we don't, it will be nice to get something like this in after the release!
    .context("idle")
{
    Ok(session) => {
        connection.session = Some(session);
    }
    Err(err) => {
        connection.trigger_reconnect(ctx);
        warn!(ctx, "{:#}", err);
    }
}

// Fetch the watched folder
// immediately after IDLE to reduce message delivery latency.
if let Err(err) = connection
    .fetch_move_delete(ctx, &watch_folder, folder_meaning)
    .await
    .context("fetch_move_delete")
{
    connection.trigger_reconnect(ctx);
    warn!(ctx, "{:#}", err);
}
A few lines above it says:
// Fetch the watched folder again in case scanning other folder moved messages
// there.
//
// In most cases this will select the watched folder and return because there are
// no new messages. We want to select the watched folder anyway before going IDLE
// there, so this does not take additional protocol round-trip.
[...] fetch_move_delete(...)
This can probably be removed if we fetch here.
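To make the quoted reasoning concrete, here is a rough, hypothetical illustration of the IMAP exchange it describes, not a trace from this PR or from deltachat-core: the watched folder has to be selected before idling on it anyway, so the pre-IDLE fetch reuses that SELECT; with the fetch-after-IDLE added by this PR, the re-fetch is the part that could be dropped.

```rust
// Hypothetical command flow, for illustration only; the FETCH arguments are
// placeholders and do not reproduce deltachat-core's actual requests.
fn main() {
    let exchange = [
        ("C", "a1 SELECT INBOX"),        // needed before idling on this folder anyway
        ("C", "a2 UID FETCH <new> ..."), // pre-IDLE re-fetch; usually finds nothing
        ("C", "a3 IDLE"),                // wait for the server to report new mail
        ("S", "* 124 EXISTS"),           // new message arrives
        ("C", "DONE"),                   // leave IDLE
        ("C", "a4 UID FETCH <new> ..."), // the fetch this PR adds right after IDLE
    ];
    for (who, line) in exchange {
        println!("{who}: {line}");
    }
}
```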
src/scheduler.rs (outdated)
// Fetch the watched folder.
if let Err(err) = connection
    .fetch_move_delete(ctx, &watch_folder, folder_meaning)
    .await
    .context("fetch_move_delete")
{
    connection.trigger_reconnect(ctx);
    warn!(ctx, "{:#}", err);
    return InterruptInfo::new(false);
}
Blocker: Some lines further down, it says:
// Scan additional folders only after finishing fetching the watched folder.
//
// On iOS the application has strictly limited time to work in background, so we may not
// be able to scan all folders before time is up if there are many of them.
Now that fetch_move_delete() was moved down, it will be done after scanning, which will then be a problem for iOS. Probably we need another fetch_move_delete() before the loops. In inbox_loop we then need to make sure that fetch_existing_msgs() (and possibly resync_folders(), I didn't look into it) is executed before fetch_move_delete(), which will need some additional logic.
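A minimal, self-contained sketch of the ordering the reviewer asks for, under the constraints quoted above; the `Ctx` type, the stub bodies, and `inbox_loop_sketch` itself are illustrative stand-ins rather than the real scheduler code:

```rust
// Illustrative stand-ins only: the bodies are stubs, and `inbox_loop_sketch`
// models control flow, not the real inbox_loop in src/scheduler.rs.
struct Ctx;

fn fetch_existing_msgs(_ctx: &Ctx) { println!("fetch_existing_msgs (one-time)"); }
fn resync_folders(_ctx: &Ctx) { println!("resync_folders (if requested)"); }
fn fetch_move_delete(_ctx: &Ctx, folder: &str) { println!("fetch_move_delete({folder})"); }
fn scan_additional_folders(_ctx: &Ctx) { println!("scan additional folders"); }
fn idle_wait(_ctx: &Ctx) { println!("IDLE until the server reports new mail"); }

fn inbox_loop_sketch(ctx: &Ctx, watch_folder: &str) {
    // One-time / on-demand work that has to run before the first regular fetch.
    fetch_existing_msgs(ctx);
    resync_folders(ctx);

    // Fetch the watched folder once before the loop, so it is handled even if
    // iOS ends the limited background slot before a full iteration completes.
    fetch_move_delete(ctx, watch_folder);

    for _ in 0..2 {
        // Scan additional folders only after the watched folder is done.
        scan_additional_folders(ctx);

        // Wait for new mail, then fetch immediately (the change in this PR).
        idle_wait(ctx);
        fetch_move_delete(ctx, watch_folder);
    }
}

fn main() {
    inbox_loop_sketch(&Ctx, "INBOX");
}
```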
Force-pushed from 9e1128b to 5000b5a
Force-pushed from 5000b5a to 2ca0ec2
Force-pushed from ca190eb to d7aecab
Force-pushed from 2ca0ec2 to 243317b
I measured with https://github.com/deltachat/pingpong; this does not have any measurable impact when running on a fast network. TL;DR: 90% of the time is spent waiting between sending a message over SMTP and the server waking up the IDLE client. We should optimize the server, not the client. With this PR (round-trip time in seconds, sending a message to the bot and receiving a reply back, no encryption, no moving to the DeltaChat folder): [...]
With 1.130.0 core: [...]
This is with <1ms RTT when running [...]. This is how it goes in the logs: [...]
The most interesting part is: [...]
It took 47.116 - 46.600 = 0.516 seconds from the time [...]
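The log excerpt is truncated here, so it is not visible which two events the timestamps mark; the snippet below only reproduces the quoted arithmetic, treating both values as seconds-within-the-minute from the same log.

```rust
// Reproduces the subtraction quoted above; which events the two timestamps
// belong to is not shown in the truncated excerpt, so the names are generic.
fn main() {
    let later = 47.116_f64; // seconds-within-minute of the later log line
    let earlier = 46.600;   // seconds-within-minute of the earlier log line
    println!("delta: {:.3} s", later - earlier); // prints "delta: 0.516 s"
}
```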
Force-pushed from 1abb12e to 2af9ff1
Closing, see #3783 (comment)
Closes #3783