diff --git a/docs/code-execution/computer-api.mdx b/docs/code-execution/computer-api.mdx new file mode 100644 index 0000000000..902b164f1f --- /dev/null +++ b/docs/code-execution/computer-api.mdx @@ -0,0 +1,5 @@ +--- +title: Computer API +--- + +Coming soon... diff --git a/docs/code-execution/custom-languages.mdx b/docs/code-execution/custom-languages.mdx new file mode 100644 index 0000000000..035a287665 --- /dev/null +++ b/docs/code-execution/custom-languages.mdx @@ -0,0 +1,5 @@ +--- +title: Custom Languages +--- + +Coming soon... diff --git a/docs/code-execution/settings.mdx b/docs/code-execution/settings.mdx new file mode 100644 index 0000000000..d1ee791935 --- /dev/null +++ b/docs/code-execution/settings.mdx @@ -0,0 +1,5 @@ +--- +title: Settings +--- + +Coming soon... diff --git a/docs/code-execution/usage.mdx b/docs/code-execution/usage.mdx new file mode 100644 index 0000000000..62948f16bc --- /dev/null +++ b/docs/code-execution/usage.mdx @@ -0,0 +1,5 @@ +--- +title: Usage +--- + +Coming soon... 
diff --git a/docs/guides/advanced-terminal-usage.mdx b/docs/guides/advanced-terminal-usage.mdx new file mode 100644 index 0000000000..745723c6cd --- /dev/null +++ b/docs/guides/advanced-terminal-usage.mdx @@ -0,0 +1,11 @@ +--- +title: Advanced Terminal Usage +--- + +Magic commands can be used to control the interpreter's behavior in interactive mode: + +- `%verbose [true/false]`: Toggle verbose mode +- `%reset`: Reset the current session +- `%undo`: Remove the last message and its response +- `%save_message [path]`: Save messages to a JSON file +- `%load_message [path]`: Load messages from a JSON file diff --git a/docs/guides/basic-usage.mdx b/docs/guides/basic-usage.mdx new file mode 100644 index 0000000000..56fe7525be --- /dev/null +++ b/docs/guides/basic-usage.mdx @@ -0,0 +1,150 @@ +--- +title: Basic Usage +--- + + + + + + Try Open Interpreter without installing anything on your computer + + + + An example implementation of Open Interpreter's streaming capabilities + + + + +--- + +### Interactive Chat + +To start an interactive chat in your terminal, either run `interpreter` from the command line: + +```shell +interpreter +``` + +Or `interpreter.chat()` from a .py file: + +```python +interpreter.chat() +``` + +--- + +### Programmatic Chat + +For more precise control, you can pass messages directly to `.chat(message)` in Python: + +```python +interpreter.chat("Add subtitles to all videos in /videos.") + +# ... Displays output in your terminal, completes task ... + +interpreter.chat("These look great but can you make the subtitles bigger?") + +# ... +``` + +--- + +### Start a New Chat + +In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run `interpreter` to start a new chat: + +```shell +interpreter +``` + +In Python, Open Interpreter remembers conversation history. 
If you want to start fresh, you can reset it: + +```python +interpreter.messages = [] +``` + +--- + +### Save and Restore Chats + +In your terminal, Open Interpreter will save previous conversations to `/Open Interpreter/conversations/`. + +You can resume any of them by running `interpreter --conversations`. Use your arrow keys to select one, then press `ENTER` to resume it. + +```shell +interpreter --conversations +``` + +In Python, `interpreter.chat()` returns a List of messages, which can be used to resume a conversation with `interpreter.messages = messages`: + +```python +# Save messages to 'messages' +messages = interpreter.chat("My name is Killian.") + +# Reset interpreter ("Killian" will be forgotten) +interpreter.messages = [] + +# Resume chat from 'messages' ("Killian" will be remembered) +interpreter.messages = messages +``` + +--- + +### Configure Default Settings + +We save default settings to a configuration file, which can be edited by running the following command: + +```shell +interpreter --config +``` + +You can use this to set your default language model, system message (custom instructions), max budget, etc. + +**Note:** The Python library will also inherit settings from this config file, but you can only change it by running `interpreter --config` or by navigating to `/Open Interpreter/config.yaml` and editing it manually. + +--- + +### Customize System Message + +In your terminal, modify the system message by [editing your configuration file as described here](#configure-default-settings). + +In Python, you can inspect and configure Open Interpreter's system message to extend its functionality, modify permissions, or give it more context. + +```python +interpreter.system_message += """ +Run shell commands with -y so the user doesn't have to confirm them. +""" +print(interpreter.system_message) +``` + +--- + +### Change your Language Model + +Open Interpreter uses [LiteLLM](https://docs.litellm.ai/docs/providers/) to connect to language models. 
+ +You can change the model by setting the model parameter: + +```shell +interpreter --model gpt-3.5-turbo +interpreter --model claude-2 +interpreter --model command-nightly +``` + +In Python, set the model on the object: + +```python +interpreter.llm.model = "gpt-3.5-turbo" +``` + +[Find the appropriate "model" string for your language model here.](https://docs.litellm.ai/docs/providers/) diff --git a/docs/guides/demos.mdx b/docs/guides/demos.mdx new file mode 100644 index 0000000000..b75d8e2b8d --- /dev/null +++ b/docs/guides/demos.mdx @@ -0,0 +1,68 @@ +--- +title: Demos +--- + +### Vision Mode + +#### Recreating a Tailwind Component + +Creating a dropdown menu in Tailwind from a single screenshot: + +https://twitter.com/hellokillian/status/1723106008061587651 + +#### Recreating the ChatGPT Interface Using GPT-4V + +https://twitter.com/chilang/status/1724577200135897255 + +### OS Mode + +#### Playing Music + +Open Interpreter playing some Lofi using OS mode: + + + +#### Open Interpreter Chatting with Open Interpreter + +OS mode creating and chatting with a local instance of Open Interpreter: + +https://twitter.com/FieroTy/status/1746639975234560101 + +#### Controlling an Arduino + +Reading temperature and humidity from an Arduino: + +https://twitter.com/vindiww/status/1744252926321942552 + +#### Music Creation + +OS mode using Logic Pro X to record a piano song and play it back: + +https://twitter.com/FieroTy/status/1744203268451111035 + +#### Generating Images in Everart.ai + +Open Interpreter describing pictures it wants to make, then creating them using OS mode: + +https://twitter.com/skirano/status/1747670816437735836 + +#### Open Interpreter Conversing With ChatGPT + +OS mode has a conversation with ChatGPT and even asks it "What do you think about human/AI interaction?" 
+ +https://twitter.com/skirano/status/1747772471770583190 + +#### Sending an Email with Gmail + +OS mode launches Safari, composes an email, and sends it: + +https://twitter.com/FieroTy/status/1743437525207928920 diff --git a/docs/guides/multiple-instances.mdx b/docs/guides/multiple-instances.mdx new file mode 100644 index 0000000000..c4afa55801 --- /dev/null +++ b/docs/guides/multiple-instances.mdx @@ -0,0 +1,5 @@ +--- +title: Multiple Instances +--- + +Coming soon... diff --git a/docs/guides/running-locally.mdx b/docs/guides/running-locally.mdx new file mode 100644 index 0000000000..e35afb87f2 --- /dev/null +++ b/docs/guides/running-locally.mdx @@ -0,0 +1,44 @@ +--- +title: Running Locally +--- + +Check out this awesome video by Mike Bird on how to run Open Interpreter locally! He goes over three different methods for setting up a local language model to run with Open Interpreter. + + + +## How to use Open Interpreter locally + +### Ollama + +1. Download Ollama - https://ollama.ai/download +2. `ollama run dolphin-mixtral:8x7b-v2.6` +3. `interpreter --model ollama/dolphin-mixtral:8x7b-v2.6` + +### Jan.ai + +1. Download Jan - [Jan.ai](http://jan.ai/) +2. Download a model from the Hub +3. Enable the API server + 1. Settings + 2. Advanced + 3. Enable API server +4. Select the model to use +5. `interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct` + +### llamafile + +1. Download or make a llamafile - https://github.com/Mozilla-Ocho/llamafile +2. `chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile` +3. `./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile` +4. 
`interpreter --api_base http://localhost:8080/v1` + +On Apple Silicon, make sure Xcode is installed. diff --git a/docs/language-model-setup/custom-models.mdx b/docs/language-models/custom-models.mdx similarity index 100% rename from docs/language-model-setup/custom-models.mdx rename to docs/language-models/custom-models.mdx diff --git a/docs/language-model-setup/hosted-models/ai21.mdx b/docs/language-models/hosted-models/ai21.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/ai21.mdx rename to docs/language-models/hosted-models/ai21.mdx diff --git a/docs/language-model-setup/hosted-models/anthropic.mdx b/docs/language-models/hosted-models/anthropic.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/anthropic.mdx rename to docs/language-models/hosted-models/anthropic.mdx diff --git a/docs/language-model-setup/hosted-models/anyscale.mdx b/docs/language-models/hosted-models/anyscale.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/anyscale.mdx rename to docs/language-models/hosted-models/anyscale.mdx diff --git a/docs/language-model-setup/hosted-models/aws-sagemaker.mdx b/docs/language-models/hosted-models/aws-sagemaker.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/aws-sagemaker.mdx rename to docs/language-models/hosted-models/aws-sagemaker.mdx diff --git a/docs/language-model-setup/hosted-models/azure.mdx b/docs/language-models/hosted-models/azure.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/azure.mdx rename to docs/language-models/hosted-models/azure.mdx diff --git a/docs/language-model-setup/hosted-models/baseten.mdx b/docs/language-models/hosted-models/baseten.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/baseten.mdx rename to docs/language-models/hosted-models/baseten.mdx diff --git a/docs/language-model-setup/hosted-models/cloudflare.mdx 
b/docs/language-models/hosted-models/cloudflare.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/cloudflare.mdx rename to docs/language-models/hosted-models/cloudflare.mdx diff --git a/docs/language-model-setup/hosted-models/cohere.mdx b/docs/language-models/hosted-models/cohere.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/cohere.mdx rename to docs/language-models/hosted-models/cohere.mdx diff --git a/docs/language-model-setup/hosted-models/deepinfra.mdx b/docs/language-models/hosted-models/deepinfra.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/deepinfra.mdx rename to docs/language-models/hosted-models/deepinfra.mdx diff --git a/docs/language-model-setup/hosted-models/gpt-4-setup.mdx b/docs/language-models/hosted-models/gpt-4-setup.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/gpt-4-setup.mdx rename to docs/language-models/hosted-models/gpt-4-setup.mdx diff --git a/docs/language-model-setup/hosted-models/huggingface.mdx b/docs/language-models/hosted-models/huggingface.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/huggingface.mdx rename to docs/language-models/hosted-models/huggingface.mdx diff --git a/docs/language-model-setup/hosted-models/mistral-api.mdx b/docs/language-models/hosted-models/mistral-api.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/mistral-api.mdx rename to docs/language-models/hosted-models/mistral-api.mdx diff --git a/docs/language-model-setup/hosted-models/nlp-cloud.mdx b/docs/language-models/hosted-models/nlp-cloud.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/nlp-cloud.mdx rename to docs/language-models/hosted-models/nlp-cloud.mdx diff --git a/docs/language-model-setup/hosted-models/openai.mdx b/docs/language-models/hosted-models/openai.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/openai.mdx 
rename to docs/language-models/hosted-models/openai.mdx diff --git a/docs/language-model-setup/hosted-models/openrouter.mdx b/docs/language-models/hosted-models/openrouter.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/openrouter.mdx rename to docs/language-models/hosted-models/openrouter.mdx diff --git a/docs/language-model-setup/hosted-models/palm.mdx b/docs/language-models/hosted-models/palm.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/palm.mdx rename to docs/language-models/hosted-models/palm.mdx diff --git a/docs/language-model-setup/hosted-models/perplexity.mdx b/docs/language-models/hosted-models/perplexity.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/perplexity.mdx rename to docs/language-models/hosted-models/perplexity.mdx diff --git a/docs/language-model-setup/hosted-models/petals.mdx b/docs/language-models/hosted-models/petals.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/petals.mdx rename to docs/language-models/hosted-models/petals.mdx diff --git a/docs/language-model-setup/hosted-models/replicate.mdx b/docs/language-models/hosted-models/replicate.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/replicate.mdx rename to docs/language-models/hosted-models/replicate.mdx diff --git a/docs/language-model-setup/hosted-models/togetherai.mdx b/docs/language-models/hosted-models/togetherai.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/togetherai.mdx rename to docs/language-models/hosted-models/togetherai.mdx diff --git a/docs/language-model-setup/hosted-models/vertex-ai.mdx b/docs/language-models/hosted-models/vertex-ai.mdx similarity index 100% rename from docs/language-model-setup/hosted-models/vertex-ai.mdx rename to docs/language-models/hosted-models/vertex-ai.mdx diff --git a/docs/language-model-setup/hosted-models/vllm.mdx b/docs/language-models/hosted-models/vllm.mdx 
similarity index 100% rename from docs/language-model-setup/hosted-models/vllm.mdx rename to docs/language-models/hosted-models/vllm.mdx diff --git a/docs/language-model-setup/introduction.mdx b/docs/language-models/introduction.mdx similarity index 100% rename from docs/language-model-setup/introduction.mdx rename to docs/language-models/introduction.mdx diff --git a/docs/language-model-setup/local-models/best-practices.mdx b/docs/language-models/local-models/best-practices.mdx similarity index 100% rename from docs/language-model-setup/local-models/best-practices.mdx rename to docs/language-models/local-models/best-practices.mdx diff --git a/docs/language-model-setup/local-models/custom-endpoint.mdx b/docs/language-models/local-models/custom-endpoint.mdx similarity index 95% rename from docs/language-model-setup/local-models/custom-endpoint.mdx rename to docs/language-models/local-models/custom-endpoint.mdx index acfebcaef6..c70d37058e 100644 --- a/docs/language-model-setup/local-models/custom-endpoint.mdx +++ b/docs/language-models/local-models/custom-endpoint.mdx @@ -5,7 +5,6 @@ title: Custom Endpoint Simply set `api_base` to any OpenAI compatible server: - ```bash Terminal interpreter --api_base ``` @@ -17,4 +16,4 @@ interpreter.llm.api_base = "" interpreter.chat() ``` - \ No newline at end of file + diff --git a/docs/language-model-setup/local-models/janai.mdx b/docs/language-models/local-models/janai.mdx similarity index 86% rename from docs/language-model-setup/local-models/janai.mdx rename to docs/language-models/local-models/janai.mdx index 89c495126c..fed645a7f0 100644 --- a/docs/language-model-setup/local-models/janai.mdx +++ b/docs/language-models/local-models/janai.mdx @@ -6,7 +6,7 @@ Jan.ai is an open-source platform for running local language models on your comp To run Open Interpreter with Jan.ai, follow these steps: -1. Install the Jan.ai Desktop Application on your computer. 
At the time of writing, you will need to download a nightly build, as the standard application does not come with a local server. You can find instructions for installing a nightly build [here](https://jan.ai/install/nightly/). +1. [Install](https://jan.ai/) the Jan.ai Desktop Application on your computer. 2. Once installed, you will need to install a language model. Click the 'Hub' icon on the left sidebar (the four squares icon). Click the 'Download' button next to the model you would like to install, and wait for it to finish installing before continuing. diff --git a/docs/language-model-setup/local-models/llamafile.mdx b/docs/language-models/local-models/llamafile.mdx similarity index 100% rename from docs/language-model-setup/local-models/llamafile.mdx rename to docs/language-models/local-models/llamafile.mdx diff --git a/docs/language-model-setup/local-models/lm-studio.mdx b/docs/language-models/local-models/lm-studio.mdx similarity index 100% rename from docs/language-model-setup/local-models/lm-studio.mdx rename to docs/language-models/local-models/lm-studio.mdx diff --git a/docs/language-model-setup/local-models/ollama.mdx b/docs/language-models/local-models/ollama.mdx similarity index 100% rename from docs/language-model-setup/local-models/ollama.mdx rename to docs/language-models/local-models/ollama.mdx diff --git a/docs/language-models/settings.mdx b/docs/language-models/settings.mdx new file mode 100644 index 0000000000..b4a4ffc72a --- /dev/null +++ b/docs/language-models/settings.mdx @@ -0,0 +1,4 @@ +--- +title: Settings +--- +Coming soon... \ No newline at end of file diff --git a/docs/language-models/usage.mdx b/docs/language-models/usage.mdx new file mode 100644 index 0000000000..62948f16bc --- /dev/null +++ b/docs/language-models/usage.mdx @@ -0,0 +1,5 @@ +--- +title: Usage +--- + +Coming soon... 
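The local-model setups above (Jan, llamafile, any custom `api_base`) all serve the same OpenAI-compatible protocol, which is why a single `--api_base` flag covers them all. As a rough sketch of the request shape such an endpoint expects — the URL and model name below are placeholders, and Open Interpreter itself talks to these servers through LiteLLM rather than hand-building requests:

```python
import json


def build_chat_request(api_base, model, messages):
    """Build the URL and JSON body an OpenAI-compatible server expects.

    `api_base` and `model` are placeholders here; any OpenAI-compatible
    endpoint (Jan, llamafile, LM Studio, ...) accepts this shape at its
    /chat/completions route.
    """
    # The chat endpoint lives under the API base, e.g. .../v1/chat/completions
    url = api_base.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body


url, body = build_chat_request(
    "http://localhost:1337/v1",  # Jan's default server address
    "mixtral-8x7b-instruct",
    [{"role": "user", "content": "Hello!"}],
)
print(url)  # http://localhost:1337/v1/chat/completions
```

Because every provider above speaks this one protocol, swapping backends is just a matter of changing `api_base` and `model`.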
diff --git a/docs/mint.json index 1b61959066..bf0e12649c 100644 --- a/docs/mint.json +++ b/docs/mint.json @@ -34,97 +34,76 @@ "pages": ["getting-started/introduction", "getting-started/setup"] }, { - "group": "Components", + "group": "Guides", "pages": [ - "components/introduction", - "components/core", - "components/language-model", - "components/computer" + "guides/basic-usage", + "guides/demos", + "guides/running-locally", + "guides/advanced-terminal-usage", + "guides/multiple-instances" ] }, { - "group": "Usage", + "group": "Settings", "pages": [ - "usage/examples", - { - "group": "Terminal", - "pages": [ - "usage/terminal/arguments", - "usage/terminal/settings", - "usage/terminal/conversation-history", - "usage/terminal/magic-commands", - "usage/terminal/budget-manager", - "usage/terminal/vision" - ] - }, - { - "group": "Python", - "pages": [ - "usage/python/arguments", - "usage/python/streaming-response", - "usage/python/multiple-instances", - "usage/python/settings", - "usage/python/conversation-history", - "usage/python/magic-commands", - "usage/python/budget-manager" - ] - }, - { - "group": "Desktop", - "pages": ["usage/desktop/install"] - } + "settings/all-settings", + "settings/profiles", + "settings/example-profiles" ] }, { "group": "Language Models", "pages": [ - "language-model-setup/introduction", + "language-models/introduction", { - "group": "Hosted Setup", + "group": "Hosted Providers", "pages": [ - "language-model-setup/hosted-models/openai", - "language-model-setup/hosted-models/azure", - "language-model-setup/hosted-models/vertex-ai", - "language-model-setup/hosted-models/replicate", - "language-model-setup/hosted-models/togetherai", - "language-model-setup/hosted-models/mistral-api", - "language-model-setup/hosted-models/anthropic", - "language-model-setup/hosted-models/anyscale", - "language-model-setup/hosted-models/aws-sagemaker", - "language-model-setup/hosted-models/baseten", - 
"language-model-setup/hosted-models/cloudflare", - "language-model-setup/hosted-models/cohere", - "language-model-setup/hosted-models/ai21", - "language-model-setup/hosted-models/deepinfra", - "language-model-setup/hosted-models/huggingface", - "language-model-setup/hosted-models/nlp-cloud", - "language-model-setup/hosted-models/openrouter", - "language-model-setup/hosted-models/palm", - "language-model-setup/hosted-models/perplexity", - "language-model-setup/hosted-models/petals", - "language-model-setup/hosted-models/vllm" + "language-models/hosted-models/openai", + "language-models/hosted-models/azure", + "language-models/hosted-models/vertex-ai", + "language-models/hosted-models/replicate", + "language-models/hosted-models/togetherai", + "language-models/hosted-models/mistral-api", + "language-models/hosted-models/anthropic", + "language-models/hosted-models/anyscale", + "language-models/hosted-models/aws-sagemaker", + "language-models/hosted-models/baseten", + "language-models/hosted-models/cloudflare", + "language-models/hosted-models/cohere", + "language-models/hosted-models/ai21", + "language-models/hosted-models/deepinfra", + "language-models/hosted-models/huggingface", + "language-models/hosted-models/nlp-cloud", + "language-models/hosted-models/openrouter", + "language-models/hosted-models/palm", + "language-models/hosted-models/perplexity", + "language-models/hosted-models/petals", + "language-models/hosted-models/vllm" ] }, { - "group": "Local Setup", + "group": "Local Providers", "pages": [ - "language-model-setup/local-models/lm-studio", - "language-model-setup/local-models/llamafile", - "language-model-setup/local-models/janai", - "language-model-setup/local-models/ollama", - "language-model-setup/local-models/custom-endpoint", - "language-model-setup/local-models/best-practices" + "language-models/local-models/lm-studio", + "language-models/local-models/llamafile", + "language-models/local-models/janai", + "language-models/local-models/ollama", + 
"language-models/local-models/custom-endpoint", + "language-models/local-models/best-practices" ] }, - "language-model-setup/custom-models" + "language-models/custom-models", + "language-models/settings", + "language-models/usage" ] }, { "group": "Integrations", - "pages": [ - "integrations/e2b", - "integrations/docker" - ] + "pages": ["integrations/e2b", "integrations/docker"] }, { "group": "Safety", @@ -137,9 +116,7 @@ }, { "group": "Protocols", - "pages": [ - "protocols/lmc-messages" - ] + "pages": ["protocols/lmc-messages"] }, { "group": "Telemetry", diff --git a/docs/safety/introduction.mdx b/docs/safety/introduction.mdx index d9342f4727..6a8d7b33f3 100644 --- a/docs/safety/introduction.mdx +++ b/docs/safety/introduction.mdx @@ -6,11 +6,11 @@ Safety is a top priority for us at Open Interpreter. Running LLM generated code # Safety Measures -- [Safe mode]("/safety/safe-mode.mdx") enables code scanning, as well as the ability to scan packages with (guarddog)[https://github.com/DataDog/guarddog] with a simple change to the system message. See the [safe mode docs](/safety/safe-mode.mdx) for more information. +- [Safe mode](/safety/safe-mode) enables code scanning, as well as the ability to scan packages with [guarddog](https://github.com/DataDog/guarddog) with a simple change to the system message. See the [safe mode docs](/safety/safe-mode) for more information. - Requiring confirmation with the user before the code is actually run. This is a simple measure that can prevent a lot of accidents. It exists as another layer of protection, but can be disabled with the `--auto-run` flag if you wish. -- Sandboxing code execution. Open Interpreter can be run in a sandboxed envirnoment using [Docker](/introduction/docker). This is a great way to run code without worrying about it affecting your system. 
Docker support is currently experimental, but we are working on making it a core feature of Open Interpreter. Another option for sandboxing is [E2B](https://e2b.dev/), which overrides the default python language with a sandboxed, hosted version of python through E2B. Follow [this guide](/introduction/e2b) to set it up. +- Sandboxing code execution. Open Interpreter can be run in a sandboxed environment using [Docker](/integrations/docker). This is a great way to run code without worrying about it affecting your system. Docker support is currently experimental, but we are working on making it a core feature of Open Interpreter. Another option for sandboxing is [E2B](https://e2b.dev/), which overrides the default Python language with a sandboxed, hosted version of Python through E2B. Follow [this guide](/integrations/e2b) to set it up. ## Notice diff --git a/docs/settings/all-settings.mdx b/docs/settings/all-settings.mdx new file mode 100644 index 0000000000..9d86356148 --- /dev/null +++ b/docs/settings/all-settings.mdx @@ -0,0 +1,5 @@ +--- +title: All Settings +--- + +Coming soon... diff --git a/docs/settings/example-profiles.mdx b/docs/settings/example-profiles.mdx new file mode 100644 index 0000000000..f35cc82c84 --- /dev/null +++ b/docs/settings/example-profiles.mdx @@ -0,0 +1,10 @@ +--- +title: Example Profiles +--- + +### OS Mode + +```yaml +os: True +custom_instructions: "Always use Safari as the browser, and use Raycast instead of Spotlight search by pressing option + space." +``` diff --git a/docs/settings/profiles.mdx b/docs/settings/profiles.mdx new file mode 100644 index 0000000000..ae2ac058c7 --- /dev/null +++ b/docs/settings/profiles.mdx @@ -0,0 +1,30 @@ +--- +title: Profiles +--- + +Profiles are preconfigured sets of Open Interpreter settings that make it easy to get going quickly with a specific use case. Any [setting](/settings/all-settings) can be configured in a profile. 
Adding custom instructions to each profile is a helpful way to tailor Open Interpreter's behavior to the specific use case that the profile is designed for. + +To load a profile, run: + +```bash +interpreter --profile <name>.yaml +``` + +All profiles are stored in their own folder, which can be accessed by running: + +```bash +interpreter --profile +``` + +To create your own profile, you can add a `.yaml` file to this folder and add whatever [settings](/settings/all-settings) you'd like: + +```yaml +custom_instructions: "Always use python, and be as concise as possible" +llm.model: gpt-4 +llm.temperature: 0.5 +# Any other settings you'd like to add +``` + +Profiles can be shared with others by sending them the profile's YAML file!
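The dotted keys in a profile, like `llm.model`, correspond to nested attributes on the interpreter object. A minimal sketch of how such keys could be applied — the `Interpreter` and `LLM` classes below are simplified stand-ins for illustration, not the real package:

```python
class LLM:
    """Stand-in for the interpreter's LLM settings object."""
    def __init__(self):
        self.model = "gpt-3.5-turbo"
        self.temperature = 1.0


class Interpreter:
    """Stand-in for the top-level interpreter object."""
    def __init__(self):
        self.llm = LLM()
        self.custom_instructions = ""


def apply_profile(obj, settings):
    # Split dotted keys ("llm.model") and walk down nested attributes,
    # setting the final attribute to the profile's value.
    for key, value in settings.items():
        target = obj
        parts = key.split(".")
        for part in parts[:-1]:
            target = getattr(target, part)
        setattr(target, parts[-1], value)


profile = {
    "custom_instructions": "Always use python, and be as concise as possible",
    "llm.model": "gpt-4",
    "llm.temperature": 0.5,
}

interpreter = Interpreter()
apply_profile(interpreter, profile)
print(interpreter.llm.model)  # gpt-4
```

This is only a model of the mapping; the real package handles loading the YAML and validating the keys for you.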