
Ollama support #1028

@BodhiHu


Is your feature request related to a problem? Please describe.

Hello, I tried Ollama on my MacBook and got pretty good performance compared to running LocalAI with llama-stable directly (which consumes a lot of CPU and does not use the GPU at all):

[Screenshot 2023-08-24 at 18:07:54]

Ollama uses the GPU and therefore saves CPU, but unfortunately it does not offer an OpenAI-like API.

Describe the solution you'd like
Add support for Ollama (a rough sketch of what the integration could look like is below).
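
As an illustration only (not LocalAI or Ollama code), here is a minimal sketch of a shim that accepts an OpenAI-style `/v1/chat/completions` request and forwards it to Ollama's `/api/generate` endpoint. The Flask/requests dependencies, the listening port 8080, the default model name, and the naive flattening of chat messages into a single prompt are all assumptions made for the example:

```python
# Hypothetical shim for illustration: serves an OpenAI-style chat
# completions endpoint and forwards requests to a local Ollama server.
# Assumes Ollama listens on localhost:11434 and Flask/requests are installed.
import time
import uuid

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

@app.post("/v1/chat/completions")
def chat_completions():
    body = request.get_json(force=True)
    model = body.get("model", "llama2")  # assumed default model name

    # Naively flatten OpenAI chat messages into a single prompt string.
    prompt = "\n".join(
        f"{m['role']}: {m['content']}" for m in body.get("messages", [])
    )

    # Forward to Ollama; stream=False returns one JSON object with "response".
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    text = resp.json().get("response", "")

    # Wrap the reply in an OpenAI-style chat.completion object.
    return jsonify({
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    })

if __name__ == "__main__":
    app.run(port=8080)
```

With a shim like this running, an existing OpenAI client could simply point its base URL at `http://localhost:8080/v1` while inference runs on Ollama's GPU-accelerated backend.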

Describe alternatives you've considered

I have not found a suitable alternative.

Additional context

Thanks a lot :D
