[Question] Purpose of completion ID field #13

Closed
MillionthOdin16 opened this issue Apr 3, 2023 · 1 comment

Comments

@MillionthOdin16
Contributor

I have a question about the id field in the data returned from the completions endpoint. I see that there's a unique ID identifying which completion a message is part of, and I'm wondering whether this is purely informational for the client or whether it has additional functionality.

Eventually I'm hoping to have a couple of different models running on my server, and I'm trying to figure out if there's an existing mechanism for a sort of chat functionality with unique contexts. Llama.cpp recently gained the ability to run multiple instances at once without much overhead, so I'm looking for a way to keep a unique context for each of a few conversation 'threads'.

Is there any mechanism, or is there a plan for one? Just want to make sure I'm not missing something if it's built already xD

{
  "id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "object": "text_completion",
  "created": 1679561337,
  "model": "models/7B/...",
  "choices": [
    {
      "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
      "index": 0,
      "logprobs": None,
      "finish_reason": "stop"
    }
  ],
...
}
@abetlen
Owner

abetlen commented Apr 3, 2023

It's currently just for the client and not saved anywhere.

We could include some kind of instance id as part of the unique id scheme. I don't think it would affect interoperability with services that expect OpenAI response objects, since the id scheme isn't part of their public API (AFAIK).
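
For reference, here's a minimal sketch of what that could look like, assuming the current ids are just a "cmpl-" prefix plus a UUID (as in the example above). The instance_id below is hypothetical and not part of the actual scheme; since clients treat the id as an opaque string, the extra structure shouldn't matter:

import uuid

# Hypothetical tag for this server instance; not part of the current
# llama-cpp-python id scheme, purely an illustration.
INSTANCE_ID = "llama-7b-a"

def make_completion_id(instance_id: str = INSTANCE_ID) -> str:
    # Embed the instance tag between the "cmpl-" prefix and the UUID.
    return f"cmpl-{instance_id}-{uuid.uuid4()}"

print(make_completion_id())
# e.g. cmpl-llama-7b-a-1b2c3d4e-xxxx-xxxx-xxxx-xxxxxxxxxxxx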

As for chat mode and continuing completions, it looks like this is on the horizon for llama.cpp but still isn't fully supported by the API.
This is something I'm following, so hopefully once the full API for managing state is complete we can integrate chat sessions.
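
As a very rough sketch of the direction this could take (none of this exists in the library yet, and ChatSessions is a hypothetical name), each conversation 'thread' would keep its own accumulated prompt, independent of the others:

class ChatSessions:
    """Hypothetical registry keeping a separate running context per
    chat session; nothing here is part of the current API."""

    def __init__(self) -> None:
        # session id -> accumulated prompt text
        self._histories: dict[str, str] = {}

    def append(self, session_id: str, text: str) -> None:
        # Extend this session's context without touching the others.
        self._histories[session_id] = self._histories.get(session_id, "") + text

    def prompt_for(self, session_id: str) -> str:
        return self._histories.get(session_id, "")

sessions = ChatSessions()
sessions.append("thread-1", "Q: Name the planets in the solar system?\n")
sessions.append("thread-2", "Q: What is the capital of France?\n")

# Each thread's context stays independent:
assert "planets" not in sessions.prompt_for("thread-2")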

abetlen closed this as completed Apr 6, 2023