Describe the bug
I have configured all of the variables in the config.yaml file:
```yaml
system_message: |
  You are Open Interpret...capable of **any** task.
# local: false
temperature: 1
context_window: 31000
max_tokens: 3000
OPENAI_API_KEY: xxxxxxxxxxxxxxxxxxx
AZURE_API_BASE: https://xxxxxxxx.openai.azure.com/
AZURE_API_KEY: "6xxxxxxxxxxxxxxxb9c"
API_TYPE: "azure"
MODEL: "azure/model-name-from-deployment"
AZURE_API_VERSION: "2023-08-01-preview"
```
AZURE_API_VERSION isn't picked up from the config file, so I run `export AZURE_API_VERSION=2023-08-01-preview` before starting interpreter. These values work in other contexts, although when I've connected to an Azure endpoint in the past, the URL continued beyond the api_base with deployments/xxxx-deploymentName-xxxx/chat/completions?api-version=2023-08-01-preview. From what I've been able to find here, though, the api_base is supposed to end at the resource root (https://xxxxxxxx.openai.azure.com/).
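Concretely, the workaround I run before starting interpreter (key and deployment name redacted) is just:

```shell
# Work around AZURE_API_VERSION not being read from config.yaml
export AZURE_API_VERSION=2023-08-01-preview
interpreter
```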
The output of interpreter:
(base) simon@MacBookM1Pro7 ~ % interpreter
▌ Model set to ``
Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.
> i
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Python Version: 3.11.5
Pip Version: 23.3.1
Open-interpreter Version: cmd:Interpreter, pkg: 0.1.15
OS Version and Architecture: macOS-14.1.1-arm64-arm-64bit
CPU Info: arm
RAM Info: 32.00 GB, used: 15.22, free: 1.65
Interpreter Info
Vision: False
Model:
Function calling: False
Context window: 31000
Max tokens: 3000
Auto run: False
API base: None
Local: False
Curl output: Not local
Traceback (most recent call last):
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/main.py", line 300, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/utils.py", line 1821, in get_llm_provider
raise e
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/utils.py", line 1818, in get_llm_provider
raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/',..)` Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/simon/anaconda3/bin/interpreter", line 8, in <module>
sys.exit(cli())
^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/core/core.py", line 24, in cli
cli(self)
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/cli/cli.py", line 268, in cli
interpreter.chat()
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/core/core.py", line 86, in chat
for _ in self._streaming_chat(message=message, display=display):
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/core/core.py", line 106, in _streaming_chat
yield from terminal_interface(self, message)
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/terminal_interface/terminal_interface.py", line 115, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/core/core.py", line 127, in _streaming_chat
yield from self._respond()
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/core/core.py", line 162, in _respond
yield from respond(self)
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/core/respond.py", line 49, in respond
for chunk in interpreter._llm(messages_for_llm):
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/llm/convert_to_coding_llm.py", line 65, in coding_llm
for chunk in text_llm(messages):
^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/interpreter/llm/setup_text_llm.py", line 144, in base_llm
return litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/utils.py", line 962, in wrapper
raise e
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/utils.py", line 899, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/timeout.py", line 42, in async_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/main.py", line 1403, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/utils.py", line 3574, in exception_type
raise e
File "/Users/simon/anaconda3/lib/python3.11/site-packages/litellm/utils.py", line 3556, in exception_type
raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/',..)` Learn more: https://docs.litellm.ai/docs/providers
It would be great to see a (redacted) functional config for Azure with gpt-4-32k.
I am not sure if this is a bug or a problem with my configuration, but the same setup does work when I use it with OpenAI rather than Azure.
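Since the banner above shows Model set to `` and API base: None, my guess is that the uppercase keys in config.yaml are not being read at all. A shell-only variant that I would expect to sidestep the config file, based on the litellm provider docs (key and deployment name are placeholders), would look roughly like:

```shell
# Guess: supply the Azure credentials via the environment variables litellm documents,
# and pass the provider-prefixed model name on the command line instead of config.yaml
export AZURE_API_KEY="6xxxxxxxxxxxxxxxb9c"
export AZURE_API_BASE="https://xxxxxxxx.openai.azure.com/"
export AZURE_API_VERSION="2023-08-01-preview"
interpreter --model "azure/model-name-from-deployment"
```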
Reproduce
Install Open Interpreter
Set config.yaml as above
Expected behavior
The same functionality as with OpenAI.
Screenshots
No response
Open Interpreter version
0.1.15
Python version
3.11.5
Operating System name and version
macOS latest (14.1.1)
Additional context
No response