Commit 26d193f

Add nixl to PATH
Signed-off-by: Batsheva Black <[email protected]>
1 parent 16e9d11 commit 26d193f

4 files changed: +205 -243 lines changed

docker/common/install_nixl.sh

Lines changed: 1 addition & 0 deletions
@@ -42,3 +42,4 @@ rm -rf nixl* # Remove NIXL source tree to save space
 export LD_LIBRARY_PATH=$OLD_LD_LIBRARY_PATH
 
 echo "export LD_LIBRARY_PATH=/opt/nvidia/nvda_nixl/lib/${ARCH_NAME}:/opt/nvidia/nvda_nixl/lib64:\$LD_LIBRARY_PATH" >> "${ENV}"
+echo "export PATH=/opt/nvidia/nvda_nixl/bin:\$PATH" >> "${ENV}"

examples/disaggregated/README.md

Lines changed: 39 additions & 239 deletions
@@ -1,81 +1,34 @@
-# Disaggregated Serving
+# TRT-LLM Disaggregated Serving
 
-The execution method of disaggregated serving relies on the `trtllm-serve` command. Specifically, compared to the standard usage of `trtllm-serve`, serving requires running this command multiple times to separately start the router and workers (including context and generation) serving components. This document focuses on this approach and provides a detailed guide on how to use it.
+To run TRT-LLM in disaggregated mode, you must first launch context (prefill) and generation (decode) servers using `trtllm-serve`.
+Depending on your deployment environment, this can be done in different ways.
 
-Please note that disaggregated serving is currently an experimental feature, so the usage described in this document may change in the future.
+## Launching context and generation servers using multiple independent `trtllm-serve` commands
 
-## Startup Procedure
+You can use multiple `trtllm-serve` commands to launch the context and generation servers that will be used
+for disaggregated serving. For example, you could launch two context servers and one generation servers as follows:
 
-### Configuration File
-
-The `trtllm-serve` command supports the `extra-llm-config.yaml` parameter. In the extra LLM configuration file, the `cache_transceiver_config` field is specifically used for disaggregated service. It is mainly used to specify additional parameters required for the KV cache transmission process.
-
-```yaml
-cache_transceiver_config:
-  # KV cache transmission backend. Valid options include `DEFAULT` (i.e., UCX), `UCX`, `NIXL`.
-  backend: <str>
-  # KV cache buffer size. Set it ≥ the maximum ISL (Input Sequence Length) for best performance.
-  max_tokens_in_buffer: <int>
 ```
+echo -e "disable_overlap_scheduler: True\ncache_transceiver_config:\n max_num_tokens: 2048" > context_extra-llm-api-config.yml
+echo -e "cache_transceiver_config:\n max_num_tokens: 2048" > gen_extra-llm-api-config.yml
 
-The following is an example, consisting of the `ctx_extra-llm-api-config.yaml` and `gen_extra-llm-api-config.yaml` files needed in the sections below.
-
-```yaml
-# ctx_extra-llm-api-config.yaml
-
-# The overlap scheduler for context servers is currently disabled, as it is
-# not yet supported in disaggregated context server architectures.
-disable_overlap_scheduler: True
-cache_transceiver_config:
-  backend: UCX
-  max_tokens_in_buffer: 2048
+export TRTLLM_USE_UCX_KVCACHE=1
+#Context servers
+CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8001 --backend pytorch --extra_llm_api_options ./context_extra-llm-api-config.yml &> log_ctx_0 &
+CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8002 --backend pytorch --extra_llm_api_options ./context_extra-llm-api-config.yml &> log_ctx_1 &
+#Generation servers
+CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8003 --backend pytorch --extra_llm_api_options ./gen_extra-llm-api-config.yml &> log_gen_0 &
 ```
-
-```yaml
-# gen_extra-llm-api-config.yaml
-
-cache_transceiver_config:
-  backend: UCX
-  max_tokens_in_buffer: 2048
-```
-
-### Basic Usage
-
-For non-SLURM clusters - particularly in single-node, multi-GPU setups, it is recommended to use standard mode. In such cases, the system does not enforce limits on process creation or termination.
-
-Suppose we have three CUDA devices on the same machine. The first two devices are used to launch one context model each, and the third device is used to launch one generation model. In this case, the following commands need to be executed.
-
-```bash
-# Start context servers
-CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8001 \
-    --extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_0 &
-
-CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8002 \
-    --extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_1 &
-
-# Start generation server
-CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8003 \
-    --extra_llm_api_options ./gen_extra-llm-api-config.yaml &> log_gen_0 &
-```
-
 Once the context and generation servers are launched, you can launch the disaggregated
 server, which will accept requests from clients and do the orchestration between context
 and generation servers. The disaggregated server can be launched with:
 
-```bash
-# Start proxy
+```
 trtllm-serve disaggregated -c disagg_config.yaml
 ```
-
 where `disagg_config.yaml` contains information about the context and generation servers. For the current example,
 it would look like:
-
-```yaml
-# disagg_config.yaml
-
+```
 hostname: localhost
 port: 8000
 backend: pytorch
@@ -90,215 +43,62 @@ generation_servers:
     - "localhost:8003"
 ```
 
-Clients can then send requests to the disaggregated server at `localhost:8000`, which is an OpenAI API compatible endpoint.
-
-
-#### Sending requests to the disaggregated server
-
-Once the context, generation and disaggregated servers are launched, you can send requests to the disaggregated server using curl:
-
-```bash
-curl http://localhost:8000/v1/completions \
-    -H "Content-Type: application/json" \
-    -d '{
-        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
-        "prompt": "NVIDIA is a great company because",
-        "max_tokens": 16,
-        "temperature": 0
-    }' -w "\n"
-```
-
-Or using the provided client parsing the prompts from a file and sending request to the disaggregated server specified in the `disagg_config.yaml` file at the `chat` endpoint:
-
-```
-python3 ./clients/disagg_client.py -c disagg_config.yaml -p ./clients/prompts.json -e chat
-```
+Clients can then send requests to the disaggregated server at `localhost:8000`, which is an OpenAI compatible endpoint.
 
-### Launching disaggregated servers on SLURM clusters
+## Launching context and generation servers using MPI
 
-To simplify usage, TensorRT-LLM internally relies on MPI spawning processes. However, some clusters do not offer such process flexibility. In these cases, we provide the `trtllm-llmapi-launch` tool to launch all processes at once. Therefore, when using TensorRT-LLM on a Slurm cluster, please refer to the following method.
-
-#### Single-Node Execution
-
-After starting the node and entering interactive mode, you can run the following command to prevent process spawning.
-
-```bash
-# Start context servers
-CUDA_VISIBLE_DEVICES=0 trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8001 \
-    --extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_0 &
-
-CUDA_VISIBLE_DEVICES=1 trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8002 \
-    --extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_1 &
-
-# Start generation server
-CUDA_VISIBLE_DEVICES=2 trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8003 \
-    --extra_llm_api_options ./gen_extra-llm-api-config.yaml &> log_gen_0 &
-
-# Start proxy
-trtllm-llmapi-launch trtllm-serve disaggregated -c disagg_config.yaml
+One can also launch all context and generation servers using MPI. This can be done by issuing the following command:
 ```
-
-#### Multi-Node Execution
-
-If the model you are running cannot fit within a single node and requires multiple nodes,
-we introduce the startup method using [srun](https://slurm.schedmd.com/srun.html) to run parallel jobs.
-
-```bash
-srun -A <account> -p <partition> -t <time> -N <num_nodes> --ntasks-per-node=<tasks_per_node> \
-    --container-image=<container_image> \
-    --container-mounts=<mount_paths> \
-    --mpi=<mpi_type> \
-    bash -c '<your_command>'
-```
-
-When using `srun`, the `-N` and `--ntasks-per-node` options are two critical parameters that
-determine how your job is distributed across the cluster.
-
-- `-N <num_nodes>`: Specifies how many physical nodes to use.
-- `--ntasks-per-node=<num_tasks>`: Specifies how many tasks to run on each node.
-
-Together, they define the total number of tasks your job will run:
-
-$$
-\text{Total tasks} = N \times \text{ntasks-per-node}
-$$
-
-Therefore, the command can be written as follows:
-
-```bash
-# The `container_image` must have the TensorRT-LLM wheel package pre-installed.
-# Once the task is successfully launched, an API service will be available externally at http://host_ip:PORT.
-# Launch a context with `tp_size=8` using two 4-GPU nodes.
-srun -A <account> -p <partition> -t <time> \
-    -N 2 --ntasks-per-node=4 \
-    --container-image=<container_image> \
-    --container-mounts=<mount_paths> \
-    --mpi=pmix \
-    bash -c "trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --tp_size 8 --host 0.0.0.0 --port $PORT --extra_llm_api_options $WORK/ctx_extra-llm-api-config.yaml"
-
-# Launch a generation with `tp_size=4` using one 4-GPU node.
-srun -A <account> -p <partition> -t <time> \
-    -N 1 --ntasks-per-node=4 \
-    --container-image=<container_image> \
-    --container-mounts=<mount_paths> \
-    --mpi=pmix \
-    bash -c "trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --tp_size 4 --host 0.0.0.0 --port $PORT --extra_llm_api_options $WORK/gen_extra-llm-api-config.yaml"
-
-# Launch a proxy.
-# The above-mentioned value needs to be replaced with the IP address of the host machine accessible to external
-# clients, and filled in the `disagg_config.yaml` file.
-srun -A <account> -p <partition> -t <time> \
-    -N 1 --ntasks-per-node=1 \
-    --container-image=<container_image> \
-    --container-mounts=<mount_paths> \
-    --mpi=pmix \
-    bash -c "trtllm-llmapi-launch trtllm-serve disaggregated -c $WORK/disagg_config.yaml"
-```
-
-Additionally, we offer a fully executable script—please refer to [Disaggregated SLURM Scripts](./slurm/simple_example/).
-
-
-## Dynamic scaling (Prototype)
-
-Currently, trtllm supports dynamic addition and removal of servers by leveraging ETCD. To enable this feature, you should start the context and generation servers with an additional flag ```--metadata_server_config_file``` and ```--server_role```.
-Before launching the context and generation servers, you should first start the ETCD server. By default, the ETCD server listens for client requests at ```localhost:2379```.
-
-```bash
-etcd
-```
-
-After this, you can enable the dynamic scaling feature for the use case above as follows:
-
-```bash
-# Context servers
-CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8001 --server_role CONTEXT --extra_llm_api_options ./ctx_extra-llm-api-config.yaml --metadata_server_config_file ./metadata_config.yaml &> log_ctx_0 &
-CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8002 --server_role CONTEXT --extra_llm_api_options ./ctx_extra-llm-api-config.yaml --metadata_server_config_file ./metadata_config.yaml &> log_ctx_1 &
-
-# Generation servers
-CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8003 --server_role GENERATION --extra_llm_api_options ./gen_extra-llm-api-config.yaml --metadata_server_config_file ./metadata_config.yaml &> log_gen_0 &
-```
-
-As for the disaggregated server, you should also specify the --metadata_server_config_file like the following
-
-```bash
-trtllm-serve disaggregated -c disagg_config.yaml -m ./metadata_config.yaml
-```
-
-The metadata_config file looks like
-```yaml
-hostname: "localhost"
-port: 2379
-health_check_timeout: 5.0
-refersh_interval: 10.0
-```
-
-The ```hostname``` and ```port``` must match those used when starting the ETCD server. The ```health_check_timeout``` parameter specifies how long a server will be considered dead if no healthy response is received. By default, trtllm will perform two checks before marking a server as dead. The ```refresh_interval``` parameter determines how often the latest server list is fetched from the ETCD server.
-
-### Dynamically adding servers
-
-Users can add servers by directly launching them with trtllm-serve. For example, you can start an additional generation server as follows:
-
-```bash
-CUDA_VISIBLE_DEVICES=3 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
-    --host localhost --port 8004 \
-    --server_role GENERATION \
-    --extra_llm_api_options ./gen_extra-llm-api-config.yaml \
-    --metadata_server_config_file ./metadata_config.yaml &> log_gen_0 &
-```
-
-TensorRT-LLM will automatically register any newly launched server with the ETCD server, allowing the router to send new requests to the added server.
-
-### Dynamically removing servers
-
-When removing servers, special attention is required in the current version. You need to first remove the corresponding key from the ETCD server. After you see the log message "Server xxxx is removed," you can then safely shut down the server. This part will be improved soon.
-
-## Startup Procedure with MPI Worker (Deprecated)
-
-In the past, we used `disaggregated_mpi_worker` to allow context nodes and generation nodes to operate within the same MPI world. However, this approach conflicts with the dynamic node addition and removal functionality. As a result, disaggregated_mpi_worker has been marked as deprecated, and the corresponding examples will be gradually removed.
-
-```bash
+export TRTLLM_USE_MPI_KVCACHE=1
 mpirun -n <total_num_ranks> trtllm-serve disaggregated_mpi_worker -c disagg_config.yaml
 ```
-where `total_num_ranks` is the sum of `TP*PP` for all context and generation servers. For the example above, `total_num_ranks` is 3
+where `<total_num_ranks>` is the sum of `TP*PP` for all context and generation servers. For the example above, `total_num_ranks` is 3
 since `TP` and `PP` is 1 for the two context and one generation server.
 
 The `disagg_config.yaml` file must now contain the configuration parameters of the context and generation servers. For example,
 it could look like:
 
-```yaml
+```
 hostname: localhost
 port: 8000
 model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
 backend: "pytorch"
+use_cuda_graph: False
 disable_overlap_scheduler: True
 context_servers:
   num_instances: 2
   tensor_parallel_size: 1
   pipeline_parallel_size: 1
   kv_cache_config:
     free_gpu_memory_fraction: 0.9
-  cache_transceiver_config:
-    backend: UCX
   urls:
     - "localhost:8001"
     - "localhost:8002"
 generation_servers:
   num_instances: 1
   tensor_parallel_size: 1
   pipeline_parallel_size: 1
-  cache_transceiver_config:
-    backend: UCX
   urls:
     - "localhost:8003"
 ```
 
 Once the context and generation servers are launched, you can again launch the disaggregated server with
-
-```bash
+```
 trtllm-serve disaggregated -c disagg_config.yaml
 ```
 
-The MPI communication backend for KV cache transfer has been deprecated and may not be supported in the future. When using the MPI backend, the environment variable `TRTLLM_USE_MPI_KVCACHE=1` should be set to avoid conflicts between mpi4py and KV cache transfer.
+## Sending requests to the disaggregated server
+
+Once the context, generation and disaggregated servers are launched, you can send requests to the disaggregated server using curl:
+```
+curl http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{
+    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
+    "prompt": "NVIDIA is a great company because",
+    "max_tokens": 16,
+    "temperature": 0
+}' -w "\n"
+```
+Or using the provided client parsing the prompts from a file and sending request to the disaggregated server specified in the `disagg_config.yaml` file at the `chat` endpoint:
+```
+python3 ./clients/disagg_client.py -c disagg_config.yaml -p ./clients/prompts.json -e chat
+```
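As a side note, the `echo -e` one-liners in the new README text above generate two small YAML files. A quick way to double-check them before launching the servers is sketched below; it only assumes the commands above were run in the current working directory, and the file and log names are taken verbatim from those commands.

```bash
# Sketch: inspect the extra-LLM-API config files produced by the echo -e commands above.
cat context_extra-llm-api-config.yml
# disable_overlap_scheduler: True
# cache_transceiver_config:
#  max_num_tokens: 2048
cat gen_extra-llm-api-config.yml
# cache_transceiver_config:
#  max_num_tokens: 2048

# Optionally confirm the workers started before launching the disaggregated server
# (log file names as used in the launch commands above).
tail -n 3 log_ctx_0 log_ctx_1 log_gen_0
```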

examples/disaggregated/disagg_config.yaml

Lines changed: 1 addition & 4 deletions
@@ -3,22 +3,19 @@ port: 8000
 model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
 free_gpu_memory_fraction: 0.25
 backend: "pytorch"
+use_cuda_graph: False
 disable_overlap_scheduler: True
 context_servers:
   num_instances: 1
   tensor_parallel_size: 1
   pipeline_parallel_size: 1
   kv_cache_config:
     free_gpu_memory_fraction: 0.2
-  cache_transceiver_config:
-    backend: "DEFAULT"
   urls:
     - "localhost:8001"
 generation_servers:
   num_instances: 1
   tensor_parallel_size: 1
   pipeline_parallel_size: 1
-  cache_transceiver_config:
-    backend: "DEFAULT"
   urls:
     - "localhost:8002"
