ollama-lab/ollama-rest-rs


ollama-rest.rs

Asynchronous Rust bindings for the Ollama REST API, built on reqwest, tokio, serde, and chrono.

Install
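The install instructions did not survive extraction. Assuming the crate is published on crates.io under the name `ollama-rest` (matching the repository), adding it to a project would look roughly like this; the version shown is a placeholder, not a pinned release:

```toml
# Cargo.toml — hypothetical dependency block; check crates.io for the current version
[dependencies]
ollama-rest = "*"
tokio = { version = "1", features = ["full"] }  # async runtime used by the examples
```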

Features

Name            Status
Completion      Supported ✅
Embedding       Supported ✅
Model creation  Supported ✅
Model deletion  Supported ✅
Model pulling   Supported ✅
Model copying   Supported ✅
Local models    Supported ✅
Running models  Supported ✅
Model pushing   Experimental 🧪
Tools           Experimental 🧪

At a glance

See source of this example.

use std::io::Write;

use futures::StreamExt; // needed for `stream.next()`; the stream implements futures' Stream trait
use ollama_rest::{models::generate::GenerationRequest, Ollama};
use serde_json::json;

#[tokio::main]
async fn main() {
    // Connects to Ollama at 127.0.0.1:11434 by default
    let ollama = Ollama::default();

    let request = serde_json::from_value::<GenerationRequest>(json!({
        "model": "llama3.2:1b",
        "prompt": "Why is the sky blue?",
    })).unwrap();

    let mut stream = ollama.generate_streamed(&request).await.unwrap();

    while let Some(Ok(res)) = stream.next().await {
        if !res.done {
            print!("{}", res.response);
            // Flush stdout after each token so output appears in real time
            std::io::stdout().flush().unwrap();
        }
    }

    println!();
}

Or, make your own chatbot interface! See this example (CLI) and this example (REST API).
