diff --git a/doc/book/admin/admin_instances_dev.png b/doc/book/admin/admin_instances_dev.png new file mode 100644 index 0000000000..7461ae40f0 Binary files /dev/null and b/doc/book/admin/admin_instances_dev.png differ diff --git a/doc/book/admin/admin_instances_prod.png b/doc/book/admin/admin_instances_prod.png new file mode 100644 index 0000000000..143cd9f832 Binary files /dev/null and b/doc/book/admin/admin_instances_prod.png differ diff --git a/doc/book/admin/instance_config.rst b/doc/book/admin/instance_config.rst index 2e4df21b38..35bacc21a2 100644 --- a/doc/book/admin/instance_config.rst +++ b/doc/book/admin/instance_config.rst @@ -1,177 +1,204 @@ .. _admin-instance_config: +.. _admin-instance-environment-overview: +.. _admin-tt_config_file: -Instance configuration -====================== +Application environment +======================= -For each Tarantool instance, you need two files: +This section provides a high-level overview on how to prepare a Tarantool application for deployment +and how the application's environment and layout might look. +This information is helpful for understanding how to administer Tarantool instances using :ref:`tt CLI ` in both development and production environments. -* [Optional] An :ref:`application file ` with - instance-specific logic. Put this file into the ``/usr/share/tarantool/`` - directory. +The main steps of creating and preparing the application for deployment are: - For example, ``/usr/share/tarantool/my_app.lua`` (here we implement it as a - :ref:`Lua module ` that bootstraps the database and - exports ``start()`` function for API calls): +1. :ref:`admin-instance_config-init-environment`. - .. code-block:: lua +2. :ref:`admin-instance_config-develop-app`. - local function start() - box.schema.space.create("somedata") - box.space.somedata:create_index("primary") - <...> - end +3. :ref:`admin-instance_config-package-app`. 
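Assuming a ``tt`` binary is installed, the three steps above can be sketched as a terminal session. This transcript is illustrative only: the template name passed to ``tt create`` and the abbreviated output are assumptions, not exact tool output.

```console
~/myapp$ tt init
   • Environment config is written to 'tt.yaml'
~/myapp$ tt create vshard_cluster --name sharded_cluster
~/myapp$ tt pack tgz
```

Here ``tt init`` prepares the environment, ``tt create`` (or a manually prepared layout under ``instances.enabled``) provides the application, and ``tt pack tgz`` produces a distributable archive.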
- return { - start = start; - } +In this section, a `sharded_cluster `_ application is used as an example. +This cluster includes 5 instances: one router and 4 storages, which constitute two replica sets. -* An :ref:`instance file ` with - instance-specific initialization logic and parameters. Put this file, or a - symlink to it, into the **instance directory** - (see ``instances_enabled`` parameter in :ref:`tt configuration file `). +.. image:: admin_instances_dev.png + :align: left + :width: 700 + :alt: Cluster topology - For example, ``/etc/tarantool/instances.enabled/my_app.lua`` (here we load - ``my_app.lua`` module and make a call to ``start()`` function from that - module): - .. code-block:: lua - #!/usr/bin/env tarantool +.. _admin-instance_config-init-environment: +.. _admin-start_stop_instance-running_locally: - box.cfg { - listen = 3301; - } +Initializing a local environment +-------------------------------- - -- load my_app module and call start() function - -- with some app options controlled by sysadmins - local m = require('my_app').start({...}) +Before creating an application, you need to set up a local environment for ``tt``: -.. _admin-instance_file: +1. Create a home directory for the environment. -Instance file -------------- +2. Run ``tt init`` in this directory: + + .. code-block:: console -After this short introduction, you may wonder what an instance file is, what it -is for, and how ``tt`` uses it. After all, Tarantool is an application -server, so why not start the application stored in ``/usr/share/tarantool`` -directly? + ~/myapp$ tt init + • Environment config is written to 'tt.yaml' -A typical Tarantool application is not a script, but a daemon running in -background mode and processing requests, usually sent to it over a TCP/IP -socket. This daemon needs to be started automatically when the operating system -starts, and managed with the operating system standard tools for service -management -- such as ``systemd`` or ``init.d``. 
To serve this very purpose, we -created **instance files**. +This command creates a default ``tt`` configuration file ``tt.yaml`` for a local +environment and the directories for applications, control sockets, logs, and other +artifacts: -You can have more than one instance file. For example, a single application in -``/usr/share/tarantool`` can run in multiple instances, each of them having its -own instance file. Or you can have multiple applications in -``/usr/share/tarantool`` -- again, each of them having its own instance file. +.. code-block:: console -An instance file is typically created by a system administrator. An application -file is often provided by a developer, in a Lua rock or an rpm/deb package. + ~/myapp$ ls + bin distfiles include instances.enabled modules templates tt.yaml -An instance file is designed to not differ in any way from a Lua application. -It must, however, configure the database, i.e. contain a call to -:doc:`box.cfg{} ` somewhere in it, because it’s the -only way to turn a Tarantool script into a background process, and -``tt`` is a tool to manage background processes. Other than that, an -instance file may contain arbitrary Lua code, and, in theory, even include the -entire application business logic in it. We, however, do not recommend this, -since it clutters the instance file and leads to unnecessary copy-paste when -you need to run multiple instances of an application. +Find detailed information about the ``tt`` configuration parameters and launch modes +on the :ref:`tt configuration page `. -.. _admin-tt-preload: -Preloading Lua scripts and modules ----------------------------------- -Tarantool supports loading and running chunks of Lua code before the loading instance file. -To load or run Lua code immediately upon Tarantool startup, specify the ``TT_PRELOAD`` -environment variable. Its value can be either a path to a Lua script or a Lua module name: +.. _admin-instance_config-develop-app: +.. 
_admin-start_stop_instance-multi-instance: .. _admin-start_stop_instance-multi-instance-layout: -Creating and developing an application -------------------------------------- - .. code-block:: console You can create an application in two ways: - $ TT_PRELOAD=/preload/path/script.lua tarantool main.lua - Manually by preparing its layout in a directory inside ``instances_enabled``. The directory name is used as the application identifier. - Tarantool runs the ``script.lua`` code, waits for it to complete, and - then starts running ``main.lua``. -From a template by using the :ref:`tt create ` command. -* To load the ``preload.module`` into the Tarantool Lua interpreter - executing ``main.lua``, set ``TT_PRELOAD`` as follows: +In this example, the application's layout is prepared manually and looks as follows. - .. code-block:: console +.. code-block:: console + + ~/myapp$ tree + . + ├── bin + ├── distfiles + ├── include + ├── instances.enabled + │ └── sharded_cluster + │ ├── config.yaml + │ ├── instances.yaml + │ ├── router.lua + │ ├── sharded_cluster-scm-1.rockspec + │ └── storage.lua + ├── modules + ├── templates + └── tt.yaml + + +The ``sharded_cluster`` directory contains the following files: - $ TT_PRELOAD=preload.module tarantool main.lua +- ``config.yaml``: contains the :ref:`configuration ` of the cluster. This file might include the entire cluster topology or provide connection settings to a centralized configuration storage. +- ``instances.yaml``: specifies instances to run in the current environment. For example, on the developer's machine, this file might include all the instances defined in the cluster configuration. In the production environment, this file includes :ref:`instances to run on the specific machine `. +- ``router.lua``: includes code specific to a :ref:`router `. 
+- ``sharded_cluster-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard``). +- ``storage.lua``: includes code specific to :ref:`storages `. - Tarantool loads the ``preload.module`` code into the interpreter and - starts running ``main.lua`` as if its first statement was ``require('preload.module')``. +You can find the full example here: +`sharded_cluster `_. + + + +.. _admin-instance_config-package-app: +.. _admin-instance-app-layout: +.. _admin-instance_file: - .. warning:: +Packaging the application +------------------------- - ``TT_PRELOAD`` values that end with ``.lua`` are considered scripts, - so avoid module names with this ending. +To package the application once it is ready, use the :ref:`tt pack ` command. +This command can create an installable DEB/RPM package or generate a ``.tgz`` archive. -To load several scripts or modules, pass them in a single quoted string, separated -by semicolons: +The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster `_ application: .. code-block:: console - $ TT_PRELOAD="/preload/path/script.lua;preload.module" tarantool main.lua + ~/myapp$ tree -a + . + ├── bin + │ ├── tarantool + │ └── tt + ├── include + ├── instances.enabled + │ └── sharded_cluster -> ../sharded_cluster + ├── modules + ├── sharded_cluster + │ ├── .rocks + │ │ └── share + │ │ └── ... + │ ├── config.yaml + │ ├── instances.yaml + │ ├── router.lua + │ ├── sharded_cluster-scm-1.rockspec + │ └── storage.lua + └── tt.yaml -In the preload script, the three dots (``...``) value contains the module name -if you're preloading a module or the path to the script if you're running a script. -The :ref:`arg ` value from the main script is visible in -the preload script or module. 
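For reference, the external dependencies mentioned above are declared in the rockspec. A minimal sketch might look like the following; the exact ``vshard`` version pin and the ``/dev/null`` source placeholder are assumptions, so consult the linked ``sharded_cluster`` example for the real file:

```lua
-- sharded_cluster-scm-1.rockspec: an illustrative sketch, not the exact file
package = 'sharded_cluster'
version = 'scm-1'
source = {
    url = '/dev/null',  -- placeholder: the rocks are built from the local directory
}
dependencies = {
    'vshard == 0.1.25',  -- assumed version; installed into the .rocks directory
}
build = {
    type = 'none',
}
```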
+The application's layout looks similar to the one defined when :ref:`developing the application `, with some differences: -For example, when preloading this script: +- ``bin``: contains the ``tarantool`` and ``tt`` binaries packed with the application bundle. -.. code-block:: lua +- ``instances.enabled``: contains a symlink to the packed ``sharded_cluster`` application. - -- preload.lua -- - print("Preloading:") - print("... arg is:", ...) - print("Passed args:", arg[1], arg[2]) +- ``sharded_cluster``: the packed application. In addition to the files created during development, it includes the ``.rocks`` directory containing the application dependencies (for example, ``vshard``). -You get the following output: +- ``tt.yaml``: the ``tt`` configuration file. -.. code-block:: console - $ TT_PRELOAD=preload.lua tarantool main.lua arg1 arg2 - Preloading: - ... arg is: preload.lua - Passed args: arg1 arg2 - 'strip_core' is set but unsupported - ... main/103/main.lua I> Tarantool 2.11.0-0-g247a9a4 Darwin-x86_64-Release - ... main/103/main.lua I> log level 5 - ... main/103/main.lua I> wal/engine cleanup is paused - < ... > -If an error happens during the execution of the preload script or module, Tarantool -reports the problem and exits. +.. _admin-instances_to_run: -.. _admin-tt_config_file: +Instances to run +~~~~~~~~~~~~~~~~ -tt configuration file ---------------------- +One more difference for a deployed application is the content of the ``instances.yaml`` file that specifies instances to run in the current environment. -While instance files contain instance configuration, the :ref:`tt ` configuration file -contains the configuration that ``tt`` uses to set up the application environment. -This includes the path to instance files, various working directories, and other -parameters that connect the application to the system. +- On the developer's machine, this file might include all the instances defined in the cluster configuration. 
-To create a default ``tt`` configuration, run ``tt init``. This creates a ``tt.yaml`` -configuration file. Its location depends on the :ref:`tt launch mode ` -(system or local). + .. image:: admin_instances_dev.png + :align: left + :width: 700 + :alt: Cluster topology -Some ``tt`` configuration parameters are similar to those used by -:doc:`box.cfg{} `, for example, ``memxt_dir`` -or ``wal_dir``. Other parameters define the ``tt`` environment, for example, -paths to installation files used by ``tt`` or to connected :ref:`external modules `. + ``instances.yaml``: -Find the detailed information about the ``tt`` configuration parameters and launch modes -on the :ref:`tt configuration page `. + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml + :language: yaml + :dedent: + +- In the production environment, this file includes instances to run on the specific machine. + + .. image:: admin_instances_prod.png + :align: left + :width: 700 + :alt: Cluster topology + + ``instances.yaml`` (Server-001): + + .. code-block:: yaml + + router-a-001: + + ``instances.yaml`` (Server-002): + + .. code-block:: yaml + + storage-a-001: + storage-b-001: + + ``instances.yaml`` (Server-003): + + .. code-block:: yaml + + storage-a-002: + storage-b-002: + + +The :ref:`Starting and stopping instances ` section describes how to start and stop Tarantool instances. diff --git a/doc/book/admin/start_stop_instance.rst b/doc/book/admin/start_stop_instance.rst index c85416a9ef..e84ef23cdc 100644 --- a/doc/book/admin/start_stop_instance.rst +++ b/doc/book/admin/start_stop_instance.rst @@ -3,335 +3,387 @@ Starting and stopping instances =============================== -To start a Tarantool instance from an :ref:`instance file ` -using the :ref:`tt ` utility: +This section describes how to manage instances in a Tarantool cluster using the :ref:`tt ` utility. +A cluster can include multiple instances that run different code. 
A typical example is a cluster application that includes router and storage instances. +In particular, you can perform the following actions: -1. Place the instance file (for example, ``my_app.lua``) into ``/etc/tarantool/instances.enabled/``. - This is the default location where ``tt`` searches for instance files. +* start all instances in a cluster or only specific ones +* check the status of instances +* connect to a specific instance +* stop all instances or only specific ones -2. Run ``tt start``: +To get more context on how the application's environment might look, refer to :ref:`Application environment `. - .. code-block:: console +.. NOTE:: - $ tt start - • Starting an instance [my_app]... + In this section, a `sharded_cluster `_ application is used to demonstrate how to start, stop, and manage instances in a cluster. -In this case, ``tt`` starts an instance from any ``*.lua`` file it finds in ``/etc/tarantool/instances.enabled/``. -Starting instances ------------------- +.. _configuration_run_instance: -All the instance files or directories placed in the ``instances_enabled`` directory -specified in :ref:`tt configuration ` are called *enabled instances*. -If there are several enabled instances, ``tt start`` starts a separate Tarantool -instance for each of them. +Starting Tarantool instances +---------------------------- -Learn more about working with multiple Tarantool instances in -:ref:`Multi-instance applications `. +.. _configuration_run_instance_tt: -To start a specific enabled instance, specify its name in the ``tt start`` argument: +Starting instances using the tt utility +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. code-block:: console +The :ref:`tt ` utility is the recommended way to start Tarantool instances. - $ tt start my_app - • Starting an instance [my_app]... +.. code-block:: console -When starting an instance, ``tt`` uses its :ref:`configuration file ` -``tt.yaml`` to set up a :ref:`tt environment ` in which the instance runs. 
-The default ``tt`` configuration file is created automatically in ``/etc/tarantool/``. -Learn how to set up a ``tt`` environment in a directory of your choice in -:ref:`Running Tarantool locally `. + $ tt start sharded_cluster + • Starting an instance [sharded_cluster:storage-a-001]... + • Starting an instance [sharded_cluster:storage-a-002]... + • Starting an instance [sharded_cluster:storage-b-001]... + • Starting an instance [sharded_cluster:storage-b-002]... + • Starting an instance [sharded_cluster:router-a-001]... -After the instance has started and worked for some time, you can find its artifacts +After the cluster has started and worked for some time, you can find its artifacts in the directories specified in the ``tt`` configuration. These are the default -locations: +locations in the local :ref:`launch mode `: + +* ``sharded_cluster/var/log//`` -- instance :ref:`logs `. +* ``sharded_cluster/var/lib//`` -- :ref:`snapshots and write-ahead logs `. +* ``sharded_cluster/var/run//`` -- control sockets and PID files. + +In the system launch mode, artifacts are created in these locations: + +* ``/var/log/tarantool//`` +* ``/var/lib/tarantool//`` +* ``/var/run/tarantool//`` + + +.. _configuration_run_instance_tarantool: + +Starting an instance using the tarantool command +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``tarantool`` command provides additional :ref:`options ` that might be helpful for development purposes. +Below is the syntax for starting a Tarantool instance configured in a file: + +.. code-block:: console -* ``/var/log/tarantool/.log`` -- instance :ref:`logs `. -* ``/var/lib/tarantool//`` -- snapshots and write-ahead logs. -* ``/var/run/tarantool/.control`` -- a control socket. This is - a Unix socket with the Lua console attached to it. This file is used to connect - to the instance console. -* ``/var/run/tarantool/.pid`` -- a PID file that ``tt`` uses to - check the instance status and send control commands. 
+ $ tarantool [OPTION ...] --name INSTANCE_NAME --config CONFIG_FILE_PATH + +The command below starts ``router-a-001`` configured in the ``config.yaml`` file: + +.. code-block:: console + + $ tarantool --name router-a-001 --config config.yaml + + + +.. _admin-start_stop_instance_management: Basic instance management ------------------------- -.. note:: +Most of the commands described in this section can be called with or without an instance name. +Without the instance name, they are executed for all instances defined in ``instances.yaml``. - These commands can be called without an instance name. In this case, they are - executed for all enabled instances. -``tt`` provides a set of commands for performing basic operations over instances: +.. _admin-start_stop_instance_check_status: -* ``tt check`` -- check the instance file for syntax errors: +Checking an instance's status +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - .. code-block:: console +To check the status of instances, execute :ref:`tt status `: - $ tt check my_app - • Result of check: syntax of file '/etc/tarantool/instances.enabled/my_app.lua' is OK +.. code-block:: console -* ``tt status`` -- check the instance status: + $ tt status sharded_cluster + INSTANCE STATUS PID + sharded_cluster:storage-a-001 RUNNING 2023 + sharded_cluster:storage-a-002 RUNNING 2026 + sharded_cluster:storage-b-001 RUNNING 2020 + sharded_cluster:storage-b-002 RUNNING 2021 + sharded_cluster:router-a-001 RUNNING 2022 - .. code-block:: console +To check the status of a specific instance, you need to specify its name: - $ tt status my_app - INSTANCE STATUS PID - my_app NOT RUNNING +.. code-block:: console -* ``tt restart`` -- restart the instance: + $ tt status sharded_cluster:storage-a-001 + INSTANCE STATUS PID + sharded_cluster:storage-a-001 RUNNING 2023 - .. code-block:: console - $ tt restart my_app -y - • The Instance my_app (PID = 729) has been terminated. - • Starting an instance [my_app]... +.. 
_admin-start_stop_instance_connect: - The ``-y`` option responds "yes" to the confirmation prompt automatically. +Connecting to an instance +~~~~~~~~~~~~~~~~~~~~~~~~~ -* ``tt stop`` -- stop the instance: +To connect to the instance, use the :ref:`tt connect ` command: - .. code-block:: console +.. code-block:: console - $ tt stop my_app - • The Instance my_app (PID = 639) has been terminated. + $ tt connect sharded_cluster:storage-a-001 + • Connecting to the instance... + • Connected to sharded_cluster:storage-a-001 -* ``tt clean`` -- remove instance artifacts: logs, snapshots, and other files. + sharded_cluster:storage-a-001> - .. code-block:: console +In the instance's console, you can execute commands provided by the :ref:`box ` module. +For example, :ref:`box.info ` can be used to get various information about a running instance: + +.. code-block:: console + + sharded_cluster:storage-a-001> box.info.ro + --- + - false + ... - $ tt clean my_app -f - • List of files to delete: - • /var/log/tarantool/my_app.log - • /var/lib/tarantool/my_app/00000000000000000000.snap - • /var/lib/tarantool/my_app/00000000000000000000.xlog - The ``-f`` option removes the files without confirmation. +.. _admin-start_stop_instance_restart: -.. _admin-start_stop_instance-multi-instance: +Restarting instances +~~~~~~~~~~~~~~~~~~~~ -Multi-instance applications ---------------------------- +To restart an instance, use :ref:`tt restart `: -Tarantool applications can include multiple instances that run different code. -A typical example is a cluster application that includes router and storage -instances. The ``tt`` utility enables managing such applications. -With a single ``tt`` call, you can: +.. 
code-block:: console + + $ tt restart sharded_cluster:storage-a-002 -* start an application on multiple instances -* check the status of application instances -* connect to a specific instance of an application -* stop a specific instance of an application or all its instances +After executing ``tt restart``, you need to confirm this operation: -.. _admin-start_stop_instance-multi-instance-layout: +.. code-block:: console -Application layout + Confirm restart of 'sharded_cluster:storage-a-002' [y/n]: y + • The Instance sharded_cluster:storage-a-002 (PID = 2026) has been terminated. + • Starting an instance [sharded_cluster:storage-a-002]... + + +.. _admin-start_stop_instance_stop: + +Stopping instances ~~~~~~~~~~~~~~~~~~ -To create a multi-instance application, prepare its layout -in a directory inside ``instances_enabled``. The directory name is used as -the application identifier. +To stop a specific instance, use :ref:`tt stop ` as follows: -This directory should contain the following files: +.. code-block:: console -* The default instance file named ``init.lua``. This file is used for all - instances of the application unless there are specific instance files (see below). -* The instances configuration file ``instances.yml`` with instance names followed by colons: + $ tt stop sharded_cluster:storage-a-002 - .. code-block:: yaml +You can also stop all the instances at once as follows: - : - : - ... +.. code-block:: console - .. note:: + $ tt stop sharded_cluster + • The Instance sharded_cluster:storage-b-001 (PID = 2020) has been terminated. + • The Instance sharded_cluster:storage-b-002 (PID = 2021) has been terminated. + • The Instance sharded_cluster:router-a-001 (PID = 2022) has been terminated. + • The Instance sharded_cluster:storage-a-001 (PID = 2023) has been terminated. + • can't "stat" the PID file. 
Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster/var/run/storage-a-002/tt.pid: no such file or directory" - Do not use the dot (``.``) and dash (``-``) characters in the instance names. - They are reserved for system use. +.. note:: -* (Optional) Specific instances files. - These files should have names ``.init.lua``, where ```` - is the name specified in ``instances.yml``. - For example, if your application has separate source files for the ``router`` and ``storage`` - instances, place the router code in the ``router.init.lua`` file. + The error message indicates that ``storage-a-002`` is already not running. -For example, take a ``demo`` application that has three instances:``storage1``, -``storage2``, and ``router``. Storage instances share the same code, and ``router`` has its own. -The application directory ``demo`` inside ``instances_enabled`` must contain the following files: -* ``instances.yml`` -- the instances configuration: +.. _admin-start_stop_instance_remove_artifacts: - .. code-block:: yaml +Removing instance artifacts +~~~~~~~~~~~~~~~~~~~~~~~~~~~ - storage1: - storage2: - router: +The :ref:`tt clean ` command removes instance artifacts (such as logs or snapshots): -* ``init.lua`` -- the code of ``storage1`` and ``storage2`` -* ``router.init.lua`` -- the code of ``router`` +.. code-block:: console + $ tt clean sharded_cluster + • List of files to delete: -Identifying instances in code -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + • /home/testuser/myapp/instances.enabled/sharded_cluster/var/log/storage-a-001/tt.log + • /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.snap + • /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.xlog + • ... -When the application is working, each instance has associated environment variables -``TARANTOOL_INSTANCE_NAME`` and ``TARANTOOL_APP_NAME``. 
You can use them in the application -code to identify the instance on which the code runs. + Confirm [y/n]: -To obtain the instance and application names, use the following code: +Enter ``y`` and press ``Enter`` to confirm the removal of artifacts for each instance. -.. code:: lua +.. note:: - local inst_name = os.getenv('TARANTOOL_INSTANCE_NAME') - local app_name = os.getenv('TARANTOOL_APP_NAME') + The ``-f`` option of the ``tt clean`` command can be used to remove the files without confirmation. -Managing multi-instance applications -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Start all three instances of the ``demo`` application: -.. code-block:: console - $ tt start demo - • Starting an instance [demo:router]... - • Starting an instance [demo:storage1]... - • Starting an instance [demo:storage2]... +.. _admin-tt-preload: -Check the status of ``demo`` instances: +Preloading Lua scripts and modules +---------------------------------- -.. code-block:: console +Tarantool supports loading and running chunks of Lua code before starting instances. +To load or run Lua code immediately upon Tarantool startup, specify the ``TT_PRELOAD`` +environment variable. Its value can be either a path to a Lua script or a Lua module name: - $ tt status demo - INSTANCE STATUS PID - demo:router RUNNING 55 - demo:storage1 RUNNING 56 - demo:storage2 RUNNING 57 +* To run the Lua script ``preload_script.lua`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows: -Check the status of a specific instance: + .. code-block:: console -.. code-block:: console + $ TT_PRELOAD=preload_script.lua tt start sharded_cluster - $ tt status demo:router - INSTANCE STATUS PID - demo:router RUNNING 55 + Tarantool runs the ``preload_script.lua`` code, waits for it to complete, and + then starts instances. -Connect to an instance: +* To load the ``preload_module`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows: -.. code-block:: console + .. 
code-block:: console - $ tt connect demo:router - • Connecting to the instance... - • Connected to /var/run/tarantool/demo/router/router.control + $ TT_PRELOAD=preload_module tt start sharded_cluster - /var/run/tarantool/demo/router/router.control> + .. note:: -Stop a specific instance: + ``TT_PRELOAD`` values that end with ``.lua`` are considered scripts, + so avoid module names with this ending. -.. code-block:: console +To load several scripts or modules, pass them in a single quoted string, separated +by semicolons: - $ tt stop demo:storage1 - • The Instance demo:storage1 (PID = 56) has been terminated. +.. code-block:: console -Stop all running instances of the ``demo`` application: + $ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster -.. code-block:: console +If an error happens during the execution of the preload script or module, Tarantool +reports the problem and exits. - $ tt stop demo - • The Instance demo:router (PID = 55) has been terminated. - • can't "stat" the PID file. Error: "stat /var/run/tarantool/demo/storage1/storage1.pid: no such file or directory" - • The Instance demo:storage2 (PID = 57) has been terminated. -.. note:: - The error message indicates that ``storage1`` is already not running. -.. _admin-start_stop_instance-running_locally: +.. _configuration_command_options: -Running Tarantool locally -------------------------- +tarantool command-line options +------------------------------ -Sometimes you may need to run a Tarantool instance locally, for example, for test -purposes. ``tt`` runs in a local environment if it finds a ``tt.yaml`` configuration -file in the current directory or any of its enclosing directories. +Options that can be passed when :ref:`starting a Tarantool instance `: -To set up a local environment for ``tt``: +.. option:: -h, --help -1. Create a home directory for the environment. + Print an annotated list of all available options and exit. -2. Run ``tt init`` in this directory: +.. 
option:: --help-env-list - .. code-block:: console + **Since:** :doc:`3.0.0 `. - $ tt init - • Environment config is written to 'tt.yaml' + Show a list of :ref:`environment variables ` that can be used to configure Tarantool. -This command creates a default ``tt`` configuration file ``tt.yaml`` for a local -environment and the directories for instance files, control sockets, logs, and other -artifacts: +.. _index-tarantool_version: -.. code-block:: console +.. option:: -v, -V, --version - $ ls - bin distfiles include instances.enabled modules templates tt.yaml + Print the product name and version. -To run a Tarantool instance in the local environment: + **Example** -1. Place the instance file into the ``instances.enabled/`` directory inside the - current directory. + .. code-block:: console -2. Run ``tt start``: + $ tarantool --version + Tarantool Enterprise 3.0.0-beta1-2-gcbb569b4c-r607-gc64 + Target: Linux-x86_64-RelWithDebInfo + ... - .. code-block:: console + In this example: - $ tt start + * ``3.0.0`` is a Tarantool version. + Tarantool follows semantic versioning, which is described in the :ref:`Tarantool release policy ` section. -After the instance is started, you can find its artifacts in their locations inside -the current directory: + * ``Target`` is the platform Tarantool is built on. + Platform-specific details may follow this line. -* logs in ``var/log/`` -* snapshots and write-ahead logs in ``var/lib/`` -* control sockets and PID files in ``var/run/`` -To work with a local environment from a directory outside it, issue ``tt`` calls with -the ``-L`` or ``--local`` argument with the path to this environment as its value: +.. option:: -c, --config PATH -.. code-block:: console + **Since:** :doc:`3.0.0 `. - $ tt --local=/usr/tt/env/ start + Set a path to a :ref:`YAML configuration file `. + You can also configure this value using the ``TT_CONFIG`` environment variable. -.. 
_admin-start_stop_instance-systemd: + See also: :ref:`Starting an instance using the tarantool command ` -Using systemd tools -------------------- +.. option:: -n, --name INSTANCE -If you start an instance using ``systemd`` tools, like this (the instance name -is ``my_app``): + **Since:** :doc:`3.0.0 `. -.. code-block:: console + Set the name of an instance to run. + You can also configure this value using the ``TT_INSTANCE_NAME`` environment variable. - $ systemctl start tarantool@my_app - $ ps axuf|grep my_app - taranto+ 5350 1.3 0.3 1448872 7736 ? Ssl 20:05 0:28 tarantool my_app.lua + See also: :ref:`Starting an instance using the tarantool command ` -This actually calls ``tarantoolctl`` like in case of -``tarantoolctl start my_app``. -To enable ``my_app`` instance for auto-load during system startup, say: +.. option:: -i -.. code-block:: console + Enter an :ref:`interactive mode `. - $ systemctl enable tarantool@my_app + **Example** -To stop a running ``my_app`` instance with ``systemctl``, run: + .. code-block:: console -.. code-block:: console + $ tarantool -i - $ systemctl stop tarantool@my_app -To restart a running ``my_app`` instance with ``systemctl``, run: +.. option:: -e EXPR -.. code-block:: console + Execute the 'EXPR' string. See also: `lua man page `_. + + **Example** + + .. code-block:: console + + $ tarantool -e 'print("Hello, world!")' + Hello, world! + +.. option:: -l NAME + + Require the 'NAME' library. See also: `lua man page `_. + + **Example** + + .. code-block:: console + + $ tarantool -l luatest.coverage script.lua + +.. option:: -j cmd + + Perform a LuaJIT control command. See also: `Command Line Options `_. + + **Example** + + .. code-block:: console + + $ tarantool -j off app.lua + +.. option:: -b ... + + Save or list bytecode. See also: `Command Line Options `_. + + **Example** + + .. code-block:: console + + $ tarantool -b test.lua test.out + +.. option:: -d SCRIPT + + Activate a debugging session for 'SCRIPT'. 
See also: `luadebug.lua `_. + + **Example** + + .. code-block:: console + + $ tarantool -d app.lua + + +.. option:: -- + + Stop handling options. See also: `lua man page `_. + + +.. option:: - - $ systemctl restart tarantool@my_app + Stop handling options and execute the standard input as a file. See also: `lua man page `_. \ No newline at end of file diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml index 4bb62dba8e..3615996871 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml @@ -34,7 +34,7 @@ groups: iproto: listen: 127.0.0.1:3302 storage-b: - leader: storage-b-002 + leader: storage-b-001 instances: storage-b-001: iproto: diff --git a/doc/code_snippets/snippets/sharding/templates/basic/MANIFEST.yaml b/doc/code_snippets/snippets/sharding/templates/basic/MANIFEST.yaml new file mode 100644 index 0000000000..10f9d4792f --- /dev/null +++ b/doc/code_snippets/snippets/sharding/templates/basic/MANIFEST.yaml @@ -0,0 +1,30 @@ +description: Basic template +vars: + - prompt: A name of the user for replication + name: replicator_user_name + default: replicator + + - prompt: A password for a replicator user + name: replicator_user_password + re: ^\w+$ + + - prompt: A name of the user for sharding + name: sharding_user_name + default: storage + + - prompt: A password for a sharding user + name: sharding_user_password + re: ^\w+$ + + - prompt: The number of buckets in a cluster + name: sharding_bucket_count + default: '1000' + + - prompt: A listen URI + name: listen_uri + default: '127.0.0.1' +include: + - config.yaml + - instances.yaml + - router.lua + - storage.lua diff --git a/doc/code_snippets/snippets/sharding/templates/basic/config.yaml.tt.template 
b/doc/code_snippets/snippets/sharding/templates/basic/config.yaml.tt.template new file mode 100644 index 0000000000..12a5043ea9 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/templates/basic/config.yaml.tt.template @@ -0,0 +1,55 @@ +credentials: + users: + {{.replicator_user_name}}: + password: '{{.replicator_user_password}}' + roles: [replication] + {{.sharding_user_name}}: + password: '{{.sharding_user_password}}' + roles: [super] + +iproto: + advertise: + peer: {{.replicator_user_name}}@ + sharding: {{.sharding_user_name}}@ + +sharding: + bucket_count: {{.sharding_bucket_count}} + +groups: + storages: + app: + module: storage + sharding: + roles: [storage] + replication: + failover: manual + replicasets: + storage-a: + leader: storage-a-001 + instances: + storage-a-001: + iproto: + listen: {{.listen_uri}}:3301 + storage-a-002: + iproto: + listen: {{.listen_uri}}:3302 + storage-b: + leader: storage-b-001 + instances: + storage-b-001: + iproto: + listen: {{.listen_uri}}:3303 + storage-b-002: + iproto: + listen: {{.listen_uri}}:3304 + routers: + app: + module: router + sharding: + roles: [router] + replicasets: + router-a: + instances: + router-a-001: + iproto: + listen: {{.listen_uri}}:3300 diff --git a/doc/code_snippets/snippets/sharding/templates/basic/instances.yaml b/doc/code_snippets/snippets/sharding/templates/basic/instances.yaml new file mode 100644 index 0000000000..368bc16cb6 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/templates/basic/instances.yaml @@ -0,0 +1,5 @@ +storage-a-001: +storage-a-002: +storage-b-001: +storage-b-002: +router-a-001: \ No newline at end of file diff --git a/doc/code_snippets/snippets/sharding/templates/basic/router.lua b/doc/code_snippets/snippets/sharding/templates/basic/router.lua new file mode 100644 index 0000000000..ee21da0e5f --- /dev/null +++ b/doc/code_snippets/snippets/sharding/templates/basic/router.lua @@ -0,0 +1,5 @@ +local vshard = require('vshard') + +vshard.router.bootstrap() + +-- Router code 
-- diff --git a/doc/code_snippets/snippets/sharding/templates/basic/storage.lua b/doc/code_snippets/snippets/sharding/templates/basic/storage.lua new file mode 100644 index 0000000000..87f063f033 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/templates/basic/storage.lua @@ -0,0 +1 @@ +-- Storage code -- diff --git a/doc/concepts/configuration.rst b/doc/concepts/configuration.rst index 9f59a65d4b..e235c58710 100644 --- a/doc/concepts/configuration.rst +++ b/doc/concepts/configuration.rst @@ -468,205 +468,6 @@ For example, you can place snapshots and write-ahead logs on different hard driv To learn more about the persistence mechanism in Tarantool, see the :ref:`Persistence ` section. -.. _configuration_run_instance: - -Starting Tarantool instances ----------------------------- - -.. _configuration_run_instance_tt: - -Starting instances using the tt utility -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The :ref:`tt ` utility is the recommended way to start Tarantool instances. - -Instance files or directories are organized into applications in the ``instances_enabled`` directory. -The example below shows how a :ref:`layout ` of the application called ``app`` might look: - -.. code-block:: none - - ├── tt.yaml - └── instances.enabled - └── app - ├── config.yaml - ├── myapp.lua - └── instances.yml - -* ``config.yaml`` is a :ref:`configuration file `. -* ``myapp.lua`` is a Lua script containing an :ref:`application to load `. -* ``instances.yml`` specifies :ref:`instances ` to run in the current environment. - This file might look as follows: - - .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/manual_leader/instances.yml - :language: yaml - :dedent: - -To start all instances, use the ``tt start app`` command: - - .. code-block:: console - - $ tt start app - • Starting an instance [app:instance001]... - • Starting an instance [app:instance002]... - • Starting an instance [app:instance003]... 
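The application layout described above can be reproduced from scratch with plain shell commands. This is a minimal sketch using stub files — the application name ``app`` and the instance names follow the example ``instances.yml``; in a real environment these files would carry actual configuration and application code:

```shell
# Sketch: recreate the example tt environment layout with placeholder files.
# Names (app, instance001..instance003) are taken from the example above.
mkdir -p instances.enabled/app
touch tt.yaml                             # tt environment config (stub)
touch instances.enabled/app/config.yaml   # cluster configuration (stub)
touch instances.enabled/app/myapp.lua     # application code (stub)
cat > instances.enabled/app/instances.yml <<'EOF'
instance001:
instance002:
instance003:
EOF
find . -type f | sort
```

Running the sketch lists the four files in the same layout the section shows, with all instance files grouped under ``instances.enabled/app``.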
- -Then, you can connect to Tarantool instances by its names using the ``tt connect`` command. -You can learn more from the :ref:`Starting and stopping instances ` section. - - - -.. _configuration_run_instance_tarantool: - -Starting an instance using the tarantool command -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``tarantool`` command provides additional :ref:`options ` that might be helpful for development purposes. -Below is the syntax for starting a Tarantool instance configured in a file: - -.. code-block:: console - - $ tarantool [OPTION ...] --name INSTANCE_NAME --config CONFIG_FILE_PATH - -The command below starts ``instance001`` configured in the ``config.yaml`` file: - -.. code-block:: console - - $ tarantool --name instance001 --config config.yaml - - -.. _configuration_command_options: - -Command-line options -******************** - -Options that can be passed when :ref:`starting a Tarantool instance `: - -.. option:: -h, --help - - Print an annotated list of all available options and exit. - -.. option:: --help-env-list - - **Since:** :doc:`3.0.0 `. - - Show a list of :ref:`environment variables ` that can be used to configure Tarantool. - -.. _index-tarantool_version: - -.. option:: -v, -V, --version - - Print the product name and version. - - **Example** - - .. code-block:: console - - % tarantool --version - Tarantool 3.0.0-entrypoint-746-g36ef3fb43 - Target: Darwin-arm64-Release - ... - - In this example: - - * ``3.0.0`` is a Tarantool version. - Tarantool follows semantic versioning, which is described in the :ref:`Tarantool release policy ` section. - - * ``Target`` is the platform Tarantool is built on. - Platform-specific details may follow this line. - - -.. option:: -c, --config PATH - - **Since:** :doc:`3.0.0 `. - - Set a path to a :ref:`YAML configuration file `. - You can also configure this value using the ``TT_CONFIG`` environment variable. - - See also: :ref:`Starting an instance using the tarantool command ` - -.. 
option:: -n, --name INSTANCE - - **Since:** :doc:`3.0.0 `. - - Set the name of an instance to run. - You can also configure this value using the ``TT_INSTANCE_NAME`` environment variable. - - See also: :ref:`Starting an instance using the tarantool command ` - - -.. option:: -i - - Enter an :ref:`interactive mode `. - - **Example** - - .. code-block:: console - - % tarantool -i - - -.. option:: -e EXPR - - Execute the 'EXPR' string. See also: `lua man page `_. - - **Example** - - .. code-block:: console - - % tarantool -e "print('Hello, world!')" - Hello, world! - -.. option:: -l NAME - - Require the 'NAME' library. See also: `lua man page `_. - - **Example** - - .. code-block:: console - - % tarantool -l luatest.coverage script.lua - -.. option:: -j cmd - - Perform a LuaJIT control command. See also: `Command Line Options `_. - - **Example** - - .. code-block:: console - - % tarantool -j off app.lua - -.. option:: -b ... - - Save or list bytecode. See also: `Command Line Options `_. - - **Example** - - .. code-block:: console - - % tarantool -b test.lua test.out - -.. option:: -d SCRIPT - - Activate a debugging session for 'SCRIPT'. See also: `luadebug.lua `_. - - **Example** - - .. code-block:: console - - % tarantool -d app.lua - - -.. option:: -- - - Stop handling options. See also: `lua man page `_. - - -.. option:: - - - Stop handling options and execute the standard input as a file. See also: `lua man page `_. - - .. toctree:: diff --git a/doc/concepts/configuration/configuration_etcd.rst b/doc/concepts/configuration/configuration_etcd.rst index 664afb7545..2afc2217ba 100644 --- a/doc/concepts/configuration/configuration_etcd.rst +++ b/doc/concepts/configuration/configuration_etcd.rst @@ -112,7 +112,7 @@ Starting Tarantool instances ---------------------------- The :ref:`tt ` utility is the recommended way to start Tarantool instances. -You can learn how to do this from the :ref:`Starting instances using the tt utility ` section. 
+You can learn how to do this from the :ref:`Starting and stopping instances ` section. You can also use the ``tarantool`` command to :ref:`start a Tarantool instance `. In this case, you can skip creating a :ref:`local etcd configuration ` and provide etcd connection settings using the ``TT_CONFIG_ETCD_ENDPOINTS`` and ``TT_CONFIG_ETCD_PREFIX`` :ref:`environment variables `.
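As a sketch of the environment-variable route just described: the two variables below are the ones named in the text, while the endpoint URL, prefix, and instance name are placeholder assumptions, and the ``tarantool`` invocation is left as a comment so the sketch stays self-contained:

```shell
# Sketch: supply etcd connection settings via environment variables
# (placeholder values -- substitute your own endpoints and prefix).
export TT_CONFIG_ETCD_ENDPOINTS="http://localhost:2379"
export TT_CONFIG_ETCD_PREFIX="/myapp"
# The instance could then be started without a local config file, e.g.:
#   tarantool --name instance001
printf '%s\n' "$TT_CONFIG_ETCD_ENDPOINTS" "$TT_CONFIG_ETCD_PREFIX"
```

With these variables exported, the instance reads its configuration from etcd instead of a local ``config.yaml``.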