
Conversation

samyxdev
Contributor

@samyxdev commented Jan 18, 2024

Hi everyone,

I noticed that LiteLLM doesn't support the pl_tags argument of the PromptLayer Python API; only metadata is supported for now. In this PR, I propose adding support for such tags by passing them through the metadata argument.

Here is a working example with tags and metadata:

    from litellm import completion

    response = completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hi 👋 - i'm ai21"}],
        temperature=0.2,
        max_tokens=20,
        metadata={"model": "ai21", "pl_tags": ["env:dev"]},
    )

I also added a test to ensure the support of tags.

Note:
I considered adding pl_tags as a standalone argument (i.e. not inside metadata), but LiteLLM forwards every extra argument through extra_body for OpenAI and Azure, which then leads to errors since those providers don't recognize an extra pl_tags argument: https://github.com/BerriAI/litellm/blob/main/litellm/utils.py#L4072
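To make the mechanism concrete, here is a minimal sketch of how the integration can split the tags out of metadata; the function name and shape are illustrative, not the exact code in this PR:

    # Illustrative sketch: separate PromptLayer tags from the rest of the
    # litellm metadata. "pl_tags" is popped out and forwarded as PromptLayer
    # tags, while the remaining keys are still tracked as plain metadata.
    def split_promptlayer_fields(metadata: dict) -> tuple[list, dict]:
        metadata = dict(metadata)           # avoid mutating the caller's dict
        tags = metadata.pop("pl_tags", [])  # e.g. ["env:dev"]
        return tags, metadata               # e.g. (["env:dev"], {"model": "ai21"})

Keeping the tags inside metadata means no unknown top-level kwarg ever reaches OpenAI or Azure, which sidesteps the extra_body issue linked above.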


Contributor

@ishaan-jaff left a comment

Requested 1 change. Can you send a screenshot of this logging working on PromptLayer?

@samyxdev
Contributor Author

samyxdev commented Jan 21, 2024

Thanks for the feedback @ishaan-jaff. I just noticed a few issues in the test and in the PromptLayer integration:

  • Tests are not handled properly here: pytest treats these tests as passing even when they actually fail, because the outer try/except block never re-raises. I would recommend avoiding the double try/except and going for either a plain assertion or a pytest.fail(f"Error occurred: {e}"), as seen in some of the other tests, instead of a print(e) (see the sketch after this list). Is there any structure you would like me to follow?
  • The PromptLayer integration treats the call as succeeding even when it didn't. Here is a self-explanatory example of logging output (I had forgotten to set the API key in this case): Prompt Layer Logging: success - final response object: {"message":"No PromptLayer API key provided","success":false} (a fix is sketched below, after the screenshot) > Fixed
  • openai-python >= 1.0.0 returns Pydantic objects instead of JSON dicts. Because of that, we cannot simply reuse what is done here. > Fixed
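For reference, a minimal sketch of the test structure I have in mind; the model, message, and tag values are placeholders rather than the exact test in this PR:

    import pytest
    from litellm import completion

    def test_promptlayer_logging_with_tags():
        try:
            response = completion(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": "Hi"}],
                max_tokens=20,
                metadata={"pl_tags": ["test"]},
            )
            # openai-python >= 1.0.0 returns a Pydantic model; use
            # response.model_dump() wherever a plain dict is still expected.
            assert response is not None
        except Exception as e:
            # Fail loudly instead of print(e), so pytest reports the failure.
            pytest.fail(f"Error occurred: {e}")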

Here is a screenshot of the working example:
[screenshot of the working example in PromptLayer]
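Regarding the second bullet above, here is a minimal sketch of the kind of success check that fixes it, assuming the integration has PromptLayer's parsed JSON body at hand; the helper name is hypothetical:

    # Hypothetical helper: PromptLayer's JSON body carries a "success" flag
    # (e.g. {"message": "No PromptLayer API key provided", "success": false}),
    # so raise when that flag is false instead of unconditionally logging
    # "success".
    def check_promptlayer_response(response_json: dict) -> None:
        if not response_json.get("success", False):
            raise Exception(f"PromptLayer logging failed: {response_json}")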

@samyxdev
Contributor Author

samyxdev commented Feb 20, 2024

Hello @ishaan-jaff, do you see any other blockers? Looking forward to getting this merged! :)

@ishaan-jaff
Contributor

Hi @samyxdev, will review this today.
