Tarantool 3.3
=============

Release date: November 29, 2024

Releases on GitHub: :tarantool-release:`3.3.0`

The 3.3 release of Tarantool adds the following main product features and improvements for the Community and Enterprise editions:

* **Community Edition (CE)**

  * Improvements around queries with offsets.
  * Improvement in Raft implementation.
  * Persistent replication state.
  * New C API for sending work to the TX thread from user threads.
  * JSON cluster configuration schema.
  * New ``on_event`` callback in application roles.
  * API for user-defined alerts.
  * Isolated instance mode.
  * Automatic instance expulsion.
  * New configuration option for Lua memory size.

* **Enterprise Edition (EE)**

  * Offset-related improvements in read views.
  * Supervised failover improvements.

.. _3-3-features-for-developers:

Developing applications
-----------------------

.. _3-3-offset:

Improved offset processing
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tarantool 3.3 brings a number of improvements around queries with offsets.

- The performance of the tree index :ref:`select() <box_index-select>` with offset and
  :ref:`count() <box_index-count>` methods was improved.
  Previously, the algorithm complexity depended linearly on the provided offset
  or on the number of tuples to count (``O(offset)``). Now the complexity is
  ``O(log(size))``, where ``size`` is the number of tuples in the index, and it
  no longer depends on the offset value or the number of tuples to count.
- The :ref:`index <box_index>` and :ref:`space <box_space>` entities get a new
  ``offset_of`` method that returns the position of the tuple matching the given
  key, relative to the given iterator direction.

  .. code-block:: lua
|
      -- The index contains two tuples: {1} and {3}.
      index:offset_of({3}, {iterator = 'eq'}) -- returns 1: [1, <3>]
      index:offset_of({3}, {iterator = 'req'}) -- returns 0: [<3>, 1]
|
- The ``offset`` parameter has been added to the :ref:`index:pairs() <box_index-pairs>` method,
  allowing you to skip the first tuples in the iterator, as shown below.

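The following is a minimal sketch. It assumes that the new ``offset`` option is
passed in the same options table as ``iterator``, and that ``box.space.bands``
is a space defined in your schema:

.. code-block:: lua

    -- Iterate over the primary index, skipping the first ten tuples.
    local index = box.space.bands.index.primary
    for _, tuple in index:pairs(nil, {iterator = 'all', offset = 10}) do
        print(tuple)
    end
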
The same improvements are also introduced to :ref:`read views <read_views>` in the Enterprise Edition:

- Improved performance of the tree index read view ``select()`` with offset.
- A new ``offset_of()`` method of index read views.
- A new ``offset`` parameter in the ``index_read_view:pairs()`` method.

.. _3-3-sync-no-timeout:

No rollback on timeout for synchronous transactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To better match the canonical Raft algorithm design, Tarantool no longer rolls
back synchronous transactions on timeout (upon reaching :ref:`replication.synchro_timeout <cfg_replication-replication_synchro_timeout>`).
In the new implementation, transactions can be rolled back only by a new leader after it is elected.
Otherwise, they wait for a quorum indefinitely.

Given this change in behavior, a new ``replication_synchro_timeout`` :ref:`compat <compat-module>` option is introduced.
To try the new behavior, set this option to ``new``:

- In YAML configuration:

  .. code-block:: yaml
|
      compat:
        replication_synchro_timeout: new
|
- In Lua code:

  .. code-block:: tarantoolsession
|
      tarantool> require('compat').replication_synchro_timeout = 'new'
      ---
      ...
|
There is also a new ``replication.synchro_queue_max_size`` configuration option
that limits the total size of transactions in the master synchronous queue. The default
value is 16 megabytes.

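For example, a minimal configuration sketch that raises the limit to 32 MB,
assuming the value is specified in bytes:

.. code-block:: yaml

    replication:
      synchro_queue_max_size: 33554432
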
.. _3-3-c-api-tx-thread:

C API for sending work to TX thread
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

New public C API functions ``tnt_tx_push()`` and ``tnt_tx_flush()``
allow sending work to the :ref:`TX thread <thread_model>` from any other thread:

- ``tnt_tx_push()`` schedules the given callback to be executed with the provided
  arguments.

- ``tnt_tx_flush()`` sends all pending callbacks for execution in the TX thread.
  Callbacks are executed in the same order as they were pushed.

.. _3-3-json-config-schema:

JSON schema of the cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Tarantool cluster configuration schema is now available in JSON format.
Each schema lists the configuration options of a specific Tarantool version with their descriptions.
As of the Tarantool 3.3 release date, the following versions are available:

- `3.0.0 <https://download.tarantool.org/tarantool/schema/config.schema.3.0.0.json>`__
- `3.0.1 <https://download.tarantool.org/tarantool/schema/config.schema.3.0.1.json>`__
- `3.0.2 <https://download.tarantool.org/tarantool/schema/config.schema.3.0.2.json>`__
- `3.1.0 <https://download.tarantool.org/tarantool/schema/config.schema.3.1.0.json>`__
- `3.1.1 <https://download.tarantool.org/tarantool/schema/config.schema.3.1.1.json>`__
- `3.1.2 <https://download.tarantool.org/tarantool/schema/config.schema.3.1.2.json>`__
- `3.2.0 <https://download.tarantool.org/tarantool/schema/config.schema.3.2.0.json>`__
- `3.2.1 <https://download.tarantool.org/tarantool/schema/config.schema.3.2.1.json>`__
- `3.3.0 <https://download.tarantool.org/tarantool/schema/config.schema.3.3.0.json>`__

Additionally, there is the `latest <https://download.tarantool.org/tarantool/schema/config.schema.json>`__
schema that reflects the configuration schema currently in development (master branch).

Use these schemas to add code completion for YAML configuration files and get
hints with option descriptions in your IDE, or to validate your configurations,
for example, with `check-jsonschema <https://pypi.org/project/check-jsonschema/>`__:

.. code-block:: console
|
    $ check-jsonschema --schemafile https://download.tarantool.org/tarantool/schema/config.schema.3.3.0.json config.yaml
|
There is also a new API for generating the JSON configuration schema as a Lua table --
the ``config:jsonschema()`` function.

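For example, the following sketch retrieves the schema as a Lua table and prints it
as a JSON string:

.. code-block:: lua

    local config = require('config')
    local json = require('json')

    -- Get the configuration schema and encode it as JSON.
    print(json.encode(config:jsonschema()))
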
.. _3-3-roles-on-event:

on_event callbacks in roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~

:ref:`Application roles <application_roles>` can now have ``on_event`` callbacks.
They are executed every time a ``box.status`` :ref:`system event <system-events>` is
broadcast or the configuration is updated. The callback has three arguments:

- ``config`` -- the current configuration.
- ``key`` -- the event that triggered the callback: ``config.apply`` or ``box.status``.
- ``value`` -- the value of the ``box.status`` :ref:`system event <system-events>`.

Example:

.. code-block:: lua
|
    return {
        name = 'my_role',
        validate = function() end,
        apply = function() end,
        stop = function() end,
        on_event = function(config, key, value)
            local log = require('log')
|
            log.info('on_event is triggered by ' .. key)
            log.info('is_ro: ' .. tostring(value.is_ro))
            log.info('roles_cfg.my_role.foo: ' .. config.foo)
        end,
    }
|
.. _3-3-alert-api:

API for raising alerts
~~~~~~~~~~~~~~~~~~~~~~

Developers can now raise their own alerts from an application or application roles.
For this purpose, a new API is introduced into the ``config`` module.

The ``config:new_alerts_namespace()`` function creates a new
*alerts namespace* -- a named container for user-defined alerts:

.. code-block:: lua
|
    local config = require('config')
    local alerts = config:new_alerts_namespace('my_alerts')
|
Alerts namespaces provide methods for managing alerts within them. All user-defined
alerts raised in all namespaces are shown in ``box.info.config.alerts``.

To raise an alert, use the namespace methods ``add()`` or ``set()``.
The difference between them is that ``set()`` accepts a key that can be used to refer
to the alert later, for example, to overwrite or discard it. An alert is a table with
one mandatory field ``message`` (its value is logged) and arbitrary user-defined fields.

.. code-block:: lua
|
    -- Raise a new alert.
    alerts:add({
        message = 'Test alert',
        my_field = 'my_value',
    })
|
    -- Raise a new alert with a key.
    alerts:set('my_alert', {
        message = 'Test alert',
        my_field = 'my_value',
    })
|
You can discard alerts individually by keys using the ``unset()`` method, or
all at once using ``clear()``:

.. code-block:: lua
|
    alerts:unset('my_alert')
    alerts:clear()
|
.. _3-3-administration-and-maintenance:

Administration and maintenance
------------------------------

.. _3-3-upgrade-ddl:

DDL before upgrade
~~~~~~~~~~~~~~~~~~

Since version 3.3, Tarantool allows DDL operations before calling ``box.schema.upgrade()``
during an upgrade if the source schema version is 2.11.1 or later. This allows,
for example, granting execute access to user-defined functions in the cluster configuration
before the schema is upgraded.

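For example, here is a sketch of such a grant in the cluster configuration. It assumes
the standard ``credentials`` section layout; the user, password, and function names
are hypothetical:

.. code-block:: yaml

    credentials:
      users:
        app_user:
          password: 'secret'
          privileges:
          - permissions: [ execute ]
            functions: [ my_func ]
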
.. _3-3-isolated-instances:

Isolated instances
~~~~~~~~~~~~~~~~~~

A new instance-level configuration option ``isolated`` puts an instance into
*isolated* mode. In this mode, an instance doesn't accept updates from other members
of its replica set and doesn't accept iproto requests. It also performs no background
data modifications and remains in read-only mode.

.. code-block:: yaml
|
    groups:
      group-001:
        replicasets:
          replicaset-001:
            instances:
              instance-001: {}
              instance-002: {}
              instance-003:
                isolated: true
|
Use the isolated mode to temporarily isolate instances for maintenance, debugging,
or other actions that should not affect other cluster instances.

.. _3-3-autoexpel:

Automatic expulsion of removed instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A new configuration section ``replication.autoexpel`` lets you automatically expel
instances after they are removed from the YAML configuration.

.. code-block:: yaml
|
    replication:
      autoexpel:
        enabled: true
        by: prefix
        prefix: '{{ replicaset_name }}'
|
The section includes three options:

- ``enabled``: whether the automatic expulsion logic is enabled in the cluster.
- ``by``: a criterion for selecting instances that can be expelled automatically.
  In version 3.3, the only available criterion is ``prefix``.
- ``prefix``: a prefix that an instance name must start with for the instance to be expelled automatically.

.. _3-3-lua-memory-size:

Lua memory size
~~~~~~~~~~~~~~~

A new configuration option ``lua.memory`` specifies the maximum amount of memory
for executing Lua scripts, in bytes. For example, this configuration sets the Lua memory
limit to 4 GB:

.. code-block:: yaml
|
    lua:
      memory: 4294967296
|
The default limit is 2 GB.

.. _3-3-supervised-failover-improvements:

Supervised failover improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tarantool 3.3 brings a number of supervised failover improvements:

* Support for a Tarantool-based :ref:`stateboard <supervised_failover_overview_fault_tolerance>`
  as an alternative to etcd.
* Instance priority configuration: a new ``failover.priority`` configuration section.
  This section specifies the order in which a coordinator appoints instances:
  bigger values mean higher priority.

  .. code-block:: yaml
|
      failover:
        replicasets:
          replicaset-001:
            priority:
              instance-001: 5
              instance-002: -5
              instance-003: 4
|
  Additionally, there is a ``failover.learners`` section that lists instances
  that should never be appointed as replica set leaders:

  .. code-block:: yaml
|
      failover:
        replicasets:
          replicaset-001:
            learners:
            - instance-004
            - instance-005
|
* Automatic failover configuration update.
* Failover logging configuration with the new options ``failover.log.to``
  and ``failover.log.file``:

  .. code-block:: yaml
|
      failover:
        log:
          to: file # or stderr
          file: var/log/tarantool/failover.log
|
Learn more about supervised failover in :ref:`repl_supervised_failover`.

.. _3-3-persistent-wal-gc:

Persistent replication state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Tarantool :ref:`persistence mechanism <concepts-data_model-persistence>` uses
two types of files: snapshots and write-ahead log (WAL) files. These files are also used
for replication: read-only replicas receive data changes from the replica set leader
by reading these files.

The :ref:`garbage collector <configuration_persistence_garbage_collector>`
cleans up obsolete snapshots and WAL files, but it doesn't remove files that
are still in use for replication. To make this check possible, replica set leaders
keep track of the replication state associated with these files. However, this information
was not persisted, which could lead to issues after a leader restart:
the garbage collector could delete WAL files that some replicas were still reading.
The :ref:`wal.cleanup_delay <configuration_reference_wal_cleanup_delay>`
configuration option was used to prevent such situations.

Since version 3.3, leader instances persist the information about WAL files in use
in a new system space ``_gc_consumers``. After a restart, the replication state
is restored, and WAL files needed for replication are protected from garbage collection.
This eliminates the need to keep all WAL files after a restart, so the ``wal.cleanup_delay``
option is now deprecated.
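You can inspect the persisted state by reading the new space directly, for example
(a minimal sketch; the exact tuple format is not described here):

.. code-block:: lua

    -- List the persisted garbage-collection consumers on the leader.
    for _, consumer in box.space._gc_consumers:pairs() do
        print(consumer)
    end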