diff --git a/docs/language-model-setup/hosted-models/anyscale.mdx b/docs/language-model-setup/hosted-models/anyscale.mdx
new file mode 100644
index 0000000000..0338a6634f
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/anyscale.mdx
@@ -0,0 +1,60 @@
+---
+title: Anyscale
+---
+
+To use Open Interpreter with a model from Anyscale, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model anyscale/
+```
+
+```python Python
+from interpreter import interpreter
+
+# Set the model to use from Anyscale:
+interpreter.llm.model = "anyscale/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from Anyscale:
+
+- Llama 2 7B Chat
+- Llama 2 13B Chat
+- Llama 2 70B Chat
+- Mistral 7B Instruct
+- CodeLlama 34b Instruct
+
+
+
+```bash Terminal
+interpreter --model anyscale/meta-llama/Llama-2-7b-chat-hf
+interpreter --model anyscale/meta-llama/Llama-2-13b-chat-hf
+interpreter --model anyscale/meta-llama/Llama-2-70b-chat-hf
+interpreter --model anyscale/mistralai/Mistral-7B-Instruct-v0.1
+interpreter --model anyscale/codellama/CodeLlama-34b-Instruct-hf
+```
+
+```python Python
+interpreter.llm.model = "anyscale/meta-llama/Llama-2-7b-chat-hf"
+interpreter.llm.model = "anyscale/meta-llama/Llama-2-13b-chat-hf"
+interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
+interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
+interpreter.llm.model = "anyscale/codellama/CodeLlama-34b-Instruct-hf"
+
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| -------------------- | -------------------------------------- | --------------------------------------------------------------------------- |
+| `ANYSCALE_API_KEY` | The API key for your Anyscale account. | [Anyscale Account Settings](https://app.endpoints.anyscale.com/credentials) |
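+
+For example, a complete setup might look like the following sketch. The key value is a placeholder, and Mistral 7B Instruct is just one of the models listed above:
+
+```python Python
+import os
+from interpreter import interpreter
+
+# Placeholder -- substitute your real Anyscale API key:
+os.environ["ANYSCALE_API_KEY"] = "your-anyscale-api-key"
+
+interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
+interpreter.chat()
+```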
diff --git a/docs/language-model-setup/hosted-models/aws-sagemaker.mdx b/docs/language-model-setup/hosted-models/aws-sagemaker.mdx
new file mode 100644
index 0000000000..cd35812636
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/aws-sagemaker.mdx
@@ -0,0 +1,70 @@
+---
+title: AWS SageMaker
+---
+
+To use Open Interpreter with a model from AWS SageMaker, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model sagemaker/
+```
+
+```python Python
+# SageMaker requires boto3 to be installed on your machine:
+# pip install boto3
+
+from interpreter import interpreter
+
+interpreter.llm.model = "sagemaker/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from AWS SageMaker:
+
+- Meta Llama 2 7B
+- Meta Llama 2 7B (Chat/Fine-tuned)
+- Meta Llama 2 13B
+- Meta Llama 2 13B (Chat/Fine-tuned)
+- Meta Llama 2 70B
+- Meta Llama 2 70B (Chat/Fine-tuned)
+- Your custom Hugging Face model
+
+
+
+```bash Terminal
+interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b
+interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f
+interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
+interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f
+interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b
+interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-f
+interpreter --model sagemaker/<your-custom-huggingface-endpoint>
+```
+
+```python Python
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b"
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f"
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b"
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f"
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b"
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-f"
+interpreter.llm.model = "sagemaker/<your-custom-huggingface-endpoint>"
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| ----------------------- | ----------------------------------------------- | ----------------------------------------------------------------------------------- |
+| `AWS_ACCESS_KEY_ID`     | The API access key for your AWS account.        | [AWS Account Overview -> Security Credentials](https://console.aws.amazon.com/)     |
+| `AWS_SECRET_ACCESS_KEY` | The API secret access key for your AWS account. | [AWS Account Overview -> Security Credentials](https://console.aws.amazon.com/)     |
+| `AWS_REGION_NAME`       | The AWS region you want to use.                 | [AWS Account Overview -> Navigation bar -> Region](https://console.aws.amazon.com/) |
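+
+For example, a complete setup might look like this sketch, where all three credential values are placeholders to be replaced with your own, and the model is one of the JumpStart endpoints listed above:
+
+```python Python
+import os
+from interpreter import interpreter
+
+# Placeholders -- substitute your real AWS credentials and region:
+os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key-id"
+os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-access-key"
+os.environ["AWS_REGION_NAME"] = "us-west-2"
+
+interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b"
+interpreter.chat()
+```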
diff --git a/docs/language-model-setup/hosted-models/baseten.mdx b/docs/language-model-setup/hosted-models/baseten.mdx
new file mode 100644
index 0000000000..45ce940002
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/baseten.mdx
@@ -0,0 +1,57 @@
+---
+title: Baseten
+---
+
+To use Open Interpreter with Baseten, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model baseten/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "baseten/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from Baseten:
+
+- Falcon 7b (qvv0xeq)
+- Wizard LM (q841o8w)
+- MPT 7b Base (31dxrj3)
+
+
+
+```bash Terminal
+interpreter --model baseten/qvv0xeq
+interpreter --model baseten/q841o8w
+interpreter --model baseten/31dxrj3
+```
+
+```python Python
+interpreter.llm.model = "baseten/qvv0xeq"
+interpreter.llm.model = "baseten/q841o8w"
+interpreter.llm.model = "baseten/31dxrj3"
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| -------------------- | --------------- | -------------------------------------------------------------------------------------------------------- |
+| `BASETEN_API_KEY`    | Baseten API key | [Baseten Dashboard -> Settings -> Account -> API Keys](https://app.baseten.co/settings/account/api_keys) |
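+
+As a minimal sketch, setting the key from Python before selecting one of the model IDs above might look like this (the key value is a placeholder):
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["BASETEN_API_KEY"] = "your-baseten-api-key"  # placeholder
+
+interpreter.llm.model = "baseten/qvv0xeq"  # Falcon 7b
+interpreter.chat()
+```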
diff --git a/docs/language-model-setup/hosted-models/cloudflare.mdx b/docs/language-model-setup/hosted-models/cloudflare.mdx
new file mode 100644
index 0000000000..79201c2aaa
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/cloudflare.mdx
@@ -0,0 +1,59 @@
+---
+title: Cloudflare Workers AI
+---
+
+To use Open Interpreter with the Cloudflare Workers AI API, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model cloudflare/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "cloudflare/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from Cloudflare Workers AI:
+
+- Llama-2 7b chat fp16
+- Llama-2 7b chat int8
+- Mistral 7b instruct v0.1
+- CodeLlama 7b instruct awq
+
+
+
+```bash Terminal
+interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-fp16
+interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-int8
+interpreter --model cloudflare/@cf/mistral/mistral-7b-instruct-v0.1
+interpreter --model cloudflare/@hf/thebloke/codellama-7b-instruct-awq
+```
+
+```python Python
+interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-fp16"
+interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-int8"
+interpreter.llm.model = "cloudflare/@cf/mistral/mistral-7b-instruct-v0.1"
+interpreter.llm.model = "cloudflare/@hf/thebloke/codellama-7b-instruct-awq"
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| ----------------------- | -------------------------- | ---------------------------------------------------------------------------------------------- |
+| `CLOUDFLARE_API_KEY`    | Cloudflare API key         | [Cloudflare Profile Page -> API Tokens](https://dash.cloudflare.com/profile/api-tokens) |
+| `CLOUDFLARE_ACCOUNT_ID` | Your Cloudflare account ID | [Cloudflare Dashboard -> Overview page -> API section](https://dash.cloudflare.com/)    |
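+
+For example, a complete setup might look like the sketch below; both credential values are placeholders:
+
+```python Python
+import os
+from interpreter import interpreter
+
+# Placeholders -- copy the real values from your Cloudflare dashboard:
+os.environ["CLOUDFLARE_API_KEY"] = "your-cloudflare-api-key"
+os.environ["CLOUDFLARE_ACCOUNT_ID"] = "your-account-id"
+
+interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-fp16"
+interpreter.chat()
+```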
diff --git a/docs/language-model-setup/hosted-models/deepinfra.mdx b/docs/language-model-setup/hosted-models/deepinfra.mdx
new file mode 100644
index 0000000000..1b56f10025
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/deepinfra.mdx
@@ -0,0 +1,64 @@
+---
+title: DeepInfra
+---
+
+To use Open Interpreter with DeepInfra, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model deepinfra/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "deepinfra/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from DeepInfra:
+
+- Llama-2 70b chat hf
+- Llama-2 7b chat hf
+- Llama-2 13b chat hf
+- CodeLlama 34b instruct hf
+- Mistral 7b instruct v0.1
+- Airoboros L2 70b GPT4 1.4.1 (jondurbin)
+
+
+
+```bash Terminal
+interpreter --model deepinfra/meta-llama/Llama-2-70b-chat-hf
+interpreter --model deepinfra/meta-llama/Llama-2-7b-chat-hf
+interpreter --model deepinfra/meta-llama/Llama-2-13b-chat-hf
+interpreter --model deepinfra/codellama/CodeLlama-34b-Instruct-hf
+interpreter --model deepinfra/mistralai/Mistral-7B-Instruct-v0.1
+interpreter --model deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1
+```
+
+```python Python
+interpreter.llm.model = "deepinfra/meta-llama/Llama-2-70b-chat-hf"
+interpreter.llm.model = "deepinfra/meta-llama/Llama-2-7b-chat-hf"
+interpreter.llm.model = "deepinfra/meta-llama/Llama-2-13b-chat-hf"
+interpreter.llm.model = "deepinfra/codellama/CodeLlama-34b-Instruct-hf"
+interpreter.llm.model = "deepinfra/mistralai/Mistral-7B-Instruct-v0.1"
+interpreter.llm.model = "deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1"
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| -------------------- | ----------------- | ---------------------------------------------------------------------- |
+| `DEEPINFRA_API_KEY`  | DeepInfra API key | [DeepInfra Dashboard -> API Keys](https://deepinfra.com/dash/api_keys) |
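+
+A minimal sketch of a complete setup, with a placeholder key and one of the models listed above:
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["DEEPINFRA_API_KEY"] = "your-deepinfra-api-key"  # placeholder
+
+interpreter.llm.model = "deepinfra/meta-llama/Llama-2-70b-chat-hf"
+interpreter.chat()
+```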
diff --git a/docs/language-model-setup/hosted-models/huggingface.mdx b/docs/language-model-setup/hosted-models/huggingface.mdx
new file mode 100644
index 0000000000..a8b2d8f187
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/huggingface.mdx
@@ -0,0 +1,48 @@
+---
+title: Hugging Face
+---
+
+To use Open Interpreter with Hugging Face models, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model huggingface/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "huggingface/"
+interpreter.chat()
+```
+
+
+
+You may also need to specify your Hugging Face API base URL:
+
+
+```bash Terminal
+interpreter --api_base https://my-endpoint.huggingface.cloud
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.api_base = "https://my-endpoint.huggingface.cloud"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+Open Interpreter should work with almost any text-based Hugging Face model.
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| ---------------------- | --------------------------- | ---------------------------------------------------------------------------------- |
+| `HUGGINGFACE_API_KEY`  | Hugging Face account API key | [Hugging Face -> Settings -> Access Tokens](https://huggingface.co/settings/tokens) |
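+
+Putting it together, a sketch of a complete setup might look like this. The token is a placeholder, the model ID is just an example of a hosted text-generation model, and the endpoint URL is the example from above:
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["HUGGINGFACE_API_KEY"] = "your-huggingface-token"  # placeholder
+
+interpreter.llm.model = "huggingface/mistralai/Mistral-7B-Instruct-v0.1"  # example model
+interpreter.llm.api_base = "https://my-endpoint.huggingface.cloud"
+interpreter.chat()
+```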
diff --git a/docs/language-model-setup/hosted-models/mistral-api.mdx b/docs/language-model-setup/hosted-models/mistral-api.mdx
new file mode 100644
index 0000000000..42b8b4ac24
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/mistral-api.mdx
@@ -0,0 +1,53 @@
+---
+title: Mistral AI API
+---
+
+To use Open Interpreter with the Mistral API, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model mistral/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "mistral/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from the Mistral API:
+
+- mistral-tiny
+- mistral-small
+- mistral-medium
+
+
+
+```bash Terminal
+interpreter --model mistral/mistral-tiny
+interpreter --model mistral/mistral-small
+interpreter --model mistral/mistral-medium
+```
+
+```python Python
+interpreter.llm.model = "mistral/mistral-tiny"
+interpreter.llm.model = "mistral/mistral-small"
+interpreter.llm.model = "mistral/mistral-medium"
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| -------------------- | -------------------------------------------- | -------------------------------------------------- |
+| `MISTRAL_API_KEY` | The Mistral API key from Mistral API Console | [Mistral API Console](https://console.mistral.ai/) |
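+
+For example, a complete setup might look like this sketch (the key value is a placeholder):
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["MISTRAL_API_KEY"] = "your-mistral-api-key"  # placeholder
+
+interpreter.llm.model = "mistral/mistral-medium"
+interpreter.chat()
+```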
diff --git a/docs/language-model-setup/hosted-models/nlp-cloud.mdx b/docs/language-model-setup/hosted-models/nlp-cloud.mdx
new file mode 100644
index 0000000000..de1adaee83
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/nlp-cloud.mdx
@@ -0,0 +1,28 @@
+---
+title: NLP Cloud
+---
+
+To use Open Interpreter with NLP Cloud, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model dolphin
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "dolphin"
+interpreter.chat()
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| -------------------- | ----------------- | ----------------------------------------------------------------- |
+| `NLP_CLOUD_API_KEY`  | NLP Cloud API key | [NLP Cloud Dashboard -> API KEY](https://nlpcloud.com/home/token) |
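+
+A minimal sketch of a complete setup, with a placeholder key:
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["NLP_CLOUD_API_KEY"] = "your-nlp-cloud-api-key"  # placeholder
+
+interpreter.llm.model = "dolphin"
+interpreter.chat()
+```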
diff --git a/docs/language-model-setup/hosted-models/palm.mdx b/docs/language-model-setup/hosted-models/palm.mdx
new file mode 100644
index 0000000000..dc6078e085
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/palm.mdx
@@ -0,0 +1,28 @@
+---
+title: PaLM API - Google
+---
+
+To use Open Interpreter with PaLM, you must `pip install -q google-generativeai`, then set the `model` flag in Open Interpreter:
+
+
+
+```bash Terminal
+interpreter --model palm/chat-bison
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "palm/chat-bison"
+interpreter.chat()
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| -------------------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
+| `PALM_API_KEY` | The PaLM API key from Google Generative AI Developers dashboard. | [Google Generative AI Developers Dashboard](https://developers.generativeai.google/) |
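+
+For example, a complete setup might look like this sketch (the key value is a placeholder):
+
+```python Python
+# Requires: pip install -q google-generativeai
+import os
+from interpreter import interpreter
+
+os.environ["PALM_API_KEY"] = "your-palm-api-key"  # placeholder
+
+interpreter.llm.model = "palm/chat-bison"
+interpreter.chat()
+```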
diff --git a/docs/language-model-setup/hosted-models/perplexity.mdx b/docs/language-model-setup/hosted-models/perplexity.mdx
new file mode 100644
index 0000000000..6af649d5c7
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/perplexity.mdx
@@ -0,0 +1,80 @@
+---
+title: Perplexity
+---
+
+To use Open Interpreter with the Perplexity API, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model perplexity/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "perplexity/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+We support the following completion models from the Perplexity API:
+
+- pplx-7b-chat
+- pplx-70b-chat
+- pplx-7b-online
+- pplx-70b-online
+- codellama-34b-instruct
+- llama-2-13b-chat
+- llama-2-70b-chat
+- mistral-7b-instruct
+- openhermes-2-mistral-7b
+- openhermes-2.5-mistral-7b
+- pplx-7b-chat-alpha
+- pplx-70b-chat-alpha
+
+
+
+```bash Terminal
+interpreter --model perplexity/pplx-7b-chat
+interpreter --model perplexity/pplx-70b-chat
+interpreter --model perplexity/pplx-7b-online
+interpreter --model perplexity/pplx-70b-online
+interpreter --model perplexity/codellama-34b-instruct
+interpreter --model perplexity/llama-2-13b-chat
+interpreter --model perplexity/llama-2-70b-chat
+interpreter --model perplexity/mistral-7b-instruct
+interpreter --model perplexity/openhermes-2-mistral-7b
+interpreter --model perplexity/openhermes-2.5-mistral-7b
+interpreter --model perplexity/pplx-7b-chat-alpha
+interpreter --model perplexity/pplx-70b-chat-alpha
+```
+
+```python Python
+interpreter.llm.model = "perplexity/pplx-7b-chat"
+interpreter.llm.model = "perplexity/pplx-70b-chat"
+interpreter.llm.model = "perplexity/pplx-7b-online"
+interpreter.llm.model = "perplexity/pplx-70b-online"
+interpreter.llm.model = "perplexity/codellama-34b-instruct"
+interpreter.llm.model = "perplexity/llama-2-13b-chat"
+interpreter.llm.model = "perplexity/llama-2-70b-chat"
+interpreter.llm.model = "perplexity/mistral-7b-instruct"
+interpreter.llm.model = "perplexity/openhermes-2-mistral-7b"
+interpreter.llm.model = "perplexity/openhermes-2.5-mistral-7b"
+interpreter.llm.model = "perplexity/pplx-7b-chat-alpha"
+interpreter.llm.model = "perplexity/pplx-70b-chat-alpha"
+```
+
+
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| ----------------------- | ------------------------------------ | ----------------------------------------------------------------- |
+| `PERPLEXITYAI_API_KEY`  | The Perplexity API key from pplx-api | [Perplexity API Settings](https://www.perplexity.ai/settings/api) |
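+
+For example, a complete setup might look like the following sketch. The key value is a placeholder, and pplx-70b-chat is just one of the models listed above:
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["PERPLEXITYAI_API_KEY"] = "your-perplexity-api-key"  # placeholder
+
+interpreter.llm.model = "perplexity/pplx-70b-chat"
+interpreter.chat()
+```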
diff --git a/docs/language-model-setup/hosted-models/togetherai.mdx b/docs/language-model-setup/hosted-models/togetherai.mdx
new file mode 100644
index 0000000000..68b4d66065
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/togetherai.mdx
@@ -0,0 +1,32 @@
+---
+title: Together AI
+---
+
+To use Open Interpreter with Together AI, set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model together_ai/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "together_ai/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+All models on Together AI are supported.
+
+# Required Environment Variables
+
+Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.
+
+| Environment Variable | Description | Where to Find |
+| --------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------------- |
+| `TOGETHERAI_API_KEY`  | The TogetherAI API key from the Settings page | [TogetherAI -> Profile -> Settings -> API Keys](https://api.together.xyz/settings/api-keys) |
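+
+A minimal sketch of a complete setup; the key value is a placeholder and the model ID is just an example of a model hosted on Together AI:
+
+```python Python
+import os
+from interpreter import interpreter
+
+os.environ["TOGETHERAI_API_KEY"] = "your-togetherai-api-key"  # placeholder
+
+interpreter.llm.model = "together_ai/togethercomputer/CodeLlama-34b-Instruct"  # example model
+interpreter.chat()
+```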
diff --git a/docs/language-model-setup/hosted-models/vllm.mdx b/docs/language-model-setup/hosted-models/vllm.mdx
new file mode 100644
index 0000000000..e2dc2e311b
--- /dev/null
+++ b/docs/language-model-setup/hosted-models/vllm.mdx
@@ -0,0 +1,44 @@
+---
+title: vLLM
+---
+
+To use Open Interpreter with vLLM, you will need to:
+
+1. `pip install vllm`
+2. Set the `api_base` flag:
+
+
+
+```bash Terminal
+interpreter --api_base <api_base_url>
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.api_base = ""
+interpreter.chat()
+```
+
+
+
+3. Set the `model` flag:
+
+
+
+```bash Terminal
+interpreter --model vllm/
+```
+
+```python Python
+from interpreter import interpreter
+
+interpreter.llm.model = "vllm/"
+interpreter.chat()
+```
+
+
+
+# Supported Models
+
+All models served by vLLM should be supported.
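+
+As a sketch, pointing Open Interpreter at a locally running vLLM OpenAI-compatible server might look like this. The URL and model name are examples -- substitute the endpoint and model you are actually serving:
+
+```python Python
+from interpreter import interpreter
+
+# Example values, assuming a local server started with:
+#   python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m
+interpreter.llm.api_base = "http://localhost:8000/v1"
+interpreter.llm.model = "vllm/facebook/opt-125m"
+interpreter.chat()
+```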