Document the 'wal_cleanup_delay' option #3350
Conversation
Hi! Thanks for the patch!
Please see my comments below.
.. confval:: wal_cleanup_delay

    Since version :doc:`2.8.1 </release/2.8.1>`.
The option already appeared in 2.6.3 (2.6.2-148-gd129729d9) and 2.7.2 (2.7.1-152-ge8c094644). We had rolling releases back then, so 2.8.1 was released together with 2.7.2 and 2.6.3. This was considered a bugfix, so it was cherry-picked from 2.8.1 to the older versions.
    Since version :doc:`2.8.1 </release/2.8.1>`.
    The delay (in seconds) used to prevent the :ref:`Tarantool garbage collector <cfg_checkpoint_daemon-garbage-collector>`
    from immediate removing :ref:`write-ahead log<internals-wal>` files after a node restart.
    This delay helps :ref:`replicas <replication-roles>` sync with a master faster after its restart and
I wouldn't say it helps to sync faster: it eliminates the risk that the master deletes WALs needed by replicas after a restart. It fixes a possible erroneous situation rather than improving anything.
Although, yes, this means replicas do not risk downloading the data all over again, and thus can sync faster.
Thanks for the fixes! LGTM.
Nice. Just one question:
    from immediately removing :ref:`write-ahead log<internals-wal>` files after a node restart.
    This delay eliminates possible erroneous situations when the master deletes WALs
    needed by :ref:`replicas <replication-roles>` after restart.
    As a consequence, replicas sync with a master faster after its restart and
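To make the option concrete for readers of the reference, it can be set like any other ``box.cfg`` parameter. A minimal sketch; the value 3600 is purely illustrative, not a recommended or default value:

```lua
-- Sketch: keep WAL files around for an hour after restart so that
-- replicas have time to catch up before the garbage collector
-- removes the files. The value 3600 is an illustrative choice.
box.cfg{
    wal_cleanup_delay = 3600,
}
```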
the master?
Sure, thanks.
    .. NOTE::

        The ``wal_cleanup_delay`` option is not in effect if a node is running as an
If I get it right, this wording will be more precise:

    - The ``wal_cleanup_delay`` option is not in effect if a node is running as an
    + The ``wal_cleanup_delay`` option has no effect on nodes running as

I mean, it still has an effect on the other nodes in the replica set except anonymous replicas, right?
Thanks, sounds better.
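For context on the note above: an anonymous replica is a node started with ``replication_anon`` enabled, and that is the case where ``wal_cleanup_delay`` has no effect. A hedged sketch; the replication URI is a placeholder, not a real endpoint:

```lua
-- Sketch of an anonymous replica, on which wal_cleanup_delay has no effect.
-- Anonymous replicas must also be read-only.
box.cfg{
    replication = 'replicator:password@master.example:3301',  -- placeholder URI
    replication_anon = true,
    read_only = true,
}
```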
Document the 'wal_cleanup_delay' option. Resolves #2022

Add the ``wal_cleanup_delay`` option to the reference. Resolves #2022