
Commit 83bdcb6

add FAQ doc under 'serving' (#5946)
1 parent: 12a5995

File tree

2 files changed: +13 −0 lines


docs/source/index.rst

Lines changed: 1 addition & 0 deletions
@@ -84,6 +84,7 @@ Documentation
    serving/usage_stats
    serving/integrations
    serving/tensorizer
+   serving/faq
 
 .. toctree::
    :maxdepth: 1

docs/source/serving/faq.rst

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+Frequently Asked Questions
+==========================
+
+Q: How can I serve multiple models on a single port using the OpenAI API?
+
+A: Assuming you are referring to the OpenAI-compatible server, serving multiple models at once is not currently supported. Instead, you can run multiple instances of the server, each serving a different model, and put a routing layer in front that dispatches each incoming request to the correct server (see the routing sketch after this diff).
+
+----------------------------------------
+
+Q: Which model should I use for offline inference embedding?
+
+A: If you want to use an embedding model, try https://huggingface.co/intfloat/e5-mistral-7b-instruct (see the usage sketch after this diff). In contrast, models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.
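The first answer describes running one OpenAI-compatible server per model with a routing layer on top. Below is a minimal sketch of such a router, not part of this commit: it assumes FastAPI and httpx are installed, and the model names and ports in `BACKENDS` are illustrative placeholders for whatever backends you actually launch.

```python
# Minimal routing-layer sketch (illustrative, not part of this commit).
# Each backend is a separate vLLM OpenAI-compatible server, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model <model-a> --port 8000
#   python -m vllm.entrypoints.openai.api_server --model <model-b> --port 8001
import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

# Hypothetical mapping from a request's "model" field to the backend
# server that actually hosts that model.
BACKENDS = {
    "meta-llama/Meta-Llama-3-8B-Instruct": "http://localhost:8000",
    "mistralai/Mistral-7B-Instruct-v0.3": "http://localhost:8001",
}

app = FastAPI()

@app.post("/v1/chat/completions")
async def route_chat_completions(request: Request):
    body = await request.json()
    backend = BACKENDS.get(body.get("model"))
    if backend is None:
        return JSONResponse({"error": "unknown model"}, status_code=400)
    # Forward the request unchanged and relay the backend's response.
    async with httpx.AsyncClient(timeout=None) as client:
        resp = await client.post(f"{backend}/v1/chat/completions", json=body)
    return JSONResponse(resp.json(), status_code=resp.status_code)
```

Run the router with, e.g., `uvicorn router:app --port 80`, so clients see a single port while each model stays in its own server process.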
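For the second answer, here is a minimal offline-embedding sketch, assuming a vLLM version with embedding support (the `LLM.encode` entry point) and the model recommended above; the prompt string is just an example.

```python
from vllm import LLM

# Load the embedding model recommended in the FAQ answer above.
llm = LLM(model="intfloat/e5-mistral-7b-instruct")

# encode() runs embedding inference; generation models use generate() instead.
outputs = llm.encode(["Hello, my name is vLLM."])
for output in outputs:
    embedding = output.outputs.embedding  # a list of floats
    print(f"embedding dimension: {len(embedding)}")
```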
