
Commit f897282 (1 parent: bc2fe15)

[pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

1 file changed: +6 −6 lines

AudioQnA/docker_compose/intel/cpu/xeon/README.md
````diff
@@ -134,16 +134,16 @@ docker compose -f compose.yaml down
 
 In the context of deploying an AudioQnA pipeline on an Intel® Xeon® platform, we can pick and choose different large language model serving frameworks, or single English TTS/multi-language TTS component. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).
 
-| File | Description |
-| -------------------------------------------------- | ----------------------------------------------------------------------------------------- |
-| [compose.yaml](./compose.yaml) | Default compose file using vllm as serving framework and redis as vector database |
-| [compose_tgi.yaml](./compose_tgi.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default |
-| [compose_multilang.yaml](./compose_multilang.yaml) | The TTS component is GPT-SoVITS. All other configurations remain the same as the default |
+| File | Description |
+| -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [compose.yaml](./compose.yaml) | Default compose file using vllm as serving framework and redis as vector database |
+| [compose_tgi.yaml](./compose_tgi.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default |
+| [compose_multilang.yaml](./compose_multilang.yaml) | The TTS component is GPT-SoVITS. All other configurations remain the same as the default |
 | [compose_remote.yaml](./compose_remote.yaml) | The LLM used is hosted on a remote server and an endpoint is used to access this model. Additional environment variables need to be set before running. See [instructions](#running-llm-models-deployed-on-remote-servers-with-compose_remoteyaml) below. |
 
 ### Running LLM models deployed on remote servers with `compose_remote.yaml`
 
-To run the LLM model on a remote server, the environment variable `LLM_MODEL_ID` may need to be overwritten, and two new environment variables `REMOTE_ENDPOINT` and `OPENAI_API_KEY` need to be set. An example endpoint is https://api.inference.example.com, but the actual value will depend on how it is set up on the remote server. The key is used to access the remote server.
+To run the LLM model on a remote server, the environment variable `LLM_MODEL_ID` may need to be overwritten, and two new environment variables `REMOTE_ENDPOINT` and `OPENAI_API_KEY` need to be set. An example endpoint is https://api.inference.example.com, but the actual value will depend on how it is set up on the remote server. The key is used to access the remote server.
 
 ```bash
 export LLM_MODEL_ID=<name-of-llm-model-card>
````
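The diff is truncated mid-snippet above. For context, a minimal sketch of the setup the updated section describes: the variable names and the example endpoint come from the README text itself, while the model card name and API key are placeholders to replace, and the final `docker compose` invocation is the standard way to launch a pipeline with an alternate compose file rather than a command quoted from this diff.

```bash
# Sketch of the remote-LLM setup described in the README section above.
# Placeholder values: substitute the model card and key issued by your
# remote inference provider.
export LLM_MODEL_ID=<name-of-llm-model-card>
export REMOTE_ENDPOINT=https://api.inference.example.com  # example endpoint from the README
export OPENAI_API_KEY=<your-api-key>                      # hypothetical placeholder for your access key

# Launch the pipeline with the remote compose file (standard docker compose usage).
docker compose -f compose_remote.yaml up -d
```

The same `-f` pattern selects any of the other compose files listed in the table, such as `compose_tgi.yaml` or `compose_multilang.yaml`.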
