Accessing openai.ChatCompletion, no longer supported in openai>=1.0.0 - how to update the code to use the new API? (AZURE) #497

Open
Marcelfoxion opened this issue Feb 25, 2025 · 8 comments

Comments

@Marcelfoxion

I'm using Azure API keys.

The error in the backend logs indicates that the openai.ChatCompletion API has been removed in openai>=1.0.0.

You need to update your code to use the new API.

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

A detailed migration guide is available here: openai/openai-python#742
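For context, the breaking change is the move from the module-level call to a client object. The before/after is shown in comments below; the small version-check helper is my own sketch (not part of the openai package) for code that has to support both:

```python
# Old (openai<1.0.0) vs new (openai>=1.0.0) chat call, for reference:
#
#   # openai==0.28 style (removed in 1.0.0):
#   # openai.api_key = "..."
#   # resp = openai.ChatCompletion.create(model="gpt-4o", messages=msgs)
#
#   # openai>=1.0.0 style:
#   # from openai import OpenAI
#   # client = OpenAI(api_key="...")
#   # resp = client.chat.completions.create(model="gpt-4o", messages=msgs)

def uses_v1_interface(version: str) -> bool:
    """Return True if the given openai version string is >= 1.0.0."""
    major = int(version.split(".")[0])
    return major >= 1
```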

@abi
Owner

abi commented Feb 25, 2025

We don't plan to officially support Azure in this repo at the moment. But will leave this issue open for others to chime in.

@Marcelfoxion
Author

From my experience it's hard to make it work on Azure. Would someone be able to suggest how to make it work locally on Ollama? Is there a working PR for Ollama available?

@abi
Owner

abi commented Feb 27, 2025

See #354 (comment)
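For anyone who lands here looking for the Ollama route: Ollama exposes an OpenAI-compatible API under /v1 on its default port, so in principle the existing OpenAI client path can be pointed at it. A minimal sketch of the client settings (the model name and usage below are assumptions; Ollama requires an API key argument but ignores its value):

```python
def ollama_client_kwargs() -> dict:
    """Settings for pointing an openai>=1.0.0 client at a local Ollama server.

    Ollama serves an OpenAI-compatible API under /v1 on its default
    port 11434; the api_key is required by the client but ignored by Ollama.
    """
    return {"base_url": "http://localhost:11434/v1", "api_key": "ollama"}

# Usage (assumes the openai package is installed and `ollama serve` is running):
#   from openai import OpenAI
#   client = OpenAI(**ollama_client_kwargs())
#   resp = client.chat.completions.create(
#       model="llama3",  # any model you've pulled with `ollama pull`
#       messages=[{"role": "user", "content": "hello"}],
#   )
```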

@Marcelfoxion
Author

Does that mean no one else has made this work recently on Azure with the new OpenAI API?

@ashsmith88

@Marcelfoxion - I have just wired up Azure on a fork this morning. If you are still looking, I can push it and share the link.

@Marcelfoxion
Author

@ashsmith88 Yes please, that would be awesome. Thank you!

@Marcelfoxion
Author

Hey @ashsmith88, is it this Azure branch? https://github.com/ashsmith88/screenshot-to-code/tree/main

If you have a sec, please let me know if this is what you were about to share, as I haven't heard from you since.

Cheers

@ashsmith88

Apologies @Marcelfoxion - been really busy and forgot to reply!

I have just pushed my (rather hacky) changes here: https://github.com/ashsmith88/screenshot-to-code/tree/quick-branch

The branch works with Azure, but it also has additional changes: I've made it do a second request with OpenAI to get the code in React, which I use as a template in my development. So you are probably only interested in some of the backend changes.

From the Azure point of view, I added an env var USE_AZURE=True and then put my Azure credentials into the two existing vars:

OPENAI_API_KEY=
OPENAI_BASE_URL=

in config.py

USE_AZURE = os.environ.get("USE_AZURE", False)
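One caveat with that line: os.environ.get returns a string, so setting USE_AZURE=False in the env would still be truthy ("False" is a non-empty string). A safer parse, sketched as a standalone helper (the helper name is my own):

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Read a boolean environment variable; unset falls back to `default`."""
    value = os.environ.get(name)
    if value is None:
        return default
    # Treat common truthy spellings as True; anything else (incl. "False") as False.
    return value.strip().lower() in ("1", "true", "yes", "on")

# USE_AZURE = env_flag("USE_AZURE")  # instead of os.environ.get("USE_AZURE", False)
```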

Then I updated this function to look like this:

async def stream_openai_response(
    messages: List[ChatCompletionMessageParam],
    api_key: str,
    base_url: str | None,
    callback: Callable[[str], Awaitable[None]],
    model: Llm,
) -> Completion:
    start_time = time.time()
    if USE_AZURE:  # <--- Imported from config
        client = AsyncAzureOpenAI(api_key=api_key, azure_endpoint=base_url, api_version="2024-08-01-preview")
        params = {
            "model": "gpt-4o",  # hard-coded to my Azure deployment name
            "messages": messages,
            "timeout": 600,
        }
    else:
        client = AsyncOpenAI(api_key=api_key, base_url=base_url)
        params = {
            "model": model.value,
            "messages": messages,
            "timeout": 600,
        }
    # ... (the rest of the function, i.e. the streaming loop, is unchanged)

You will obviously need to make sure you have an Azure deployment and update anything above to match (e.g. I have hard-coded my model since I wasn't worried about that).

This isn't a production-ready fix at all, but it works perfectly for me when running locally.
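Factoring the branch above into a pure helper makes the difference between the two paths easier to see (and to test). This is just a sketch mirroring the snippet, with the hard-coded Azure model and API version copied from it; the helper name is my own:

```python
from typing import Any

AZURE_API_VERSION = "2024-08-01-preview"  # copied from the snippet above
AZURE_MODEL = "gpt-4o"                    # hard-coded there as well

def build_chat_params(
    use_azure: bool, model: str, messages: list[dict[str, Any]]
) -> dict[str, Any]:
    """Request params for client.chat.completions.create on either path.

    On the Azure path the model is pinned to the deployment name; otherwise
    the caller's model is passed through. Everything else is identical.
    """
    return {
        "model": AZURE_MODEL if use_azure else model,
        "messages": messages,
        "timeout": 600,
    }

# The client construction then differs only in class and endpoint kwarg:
#   AsyncAzureOpenAI(api_key=key, azure_endpoint=base_url, api_version=AZURE_API_VERSION)
#   AsyncOpenAI(api_key=key, base_url=base_url)
```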
