Crates.io | ollama-rs-mangle-fork
lib.rs | ollama-rs-mangle-fork
version | 0.1.1
source | src
created_at | 2023-11-21 19:14:07.436781
updated_at | 2023-11-21 19:14:07.436781
description | A Rust library for interacting with the Ollama API
repository | https://github.com/manglemix/ollama-rs
size | 74,726
This library was written by following the Ollama API documentation. Add it to your Cargo.toml dependencies:
[dependencies]
# This fork is published on crates.io as `ollama-rs-mangle-fork`; renaming the
# dependency keeps the `ollama_rs` crate name used in the examples below.
ollama-rs = { package = "ollama-rs-mangle-fork", version = "0.1.1" }
use ollama_rs::Ollama;

// By default, connects to localhost on port 11434
let ollama = Ollama::default();

// For a custom host and port:
let ollama = Ollama::new("http://localhost".to_string(), 11434);
Feel free to check out the Chatbot example, which shows how to use the library to build a simple chatbot in under 50 lines of code.
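To give a rough idea, here is a minimal REPL-style sketch in the same spirit. This is not the repository's example verbatim; the import paths follow the upstream ollama-rs crate and may differ in this fork, and it assumes tokio with at least the rt, macros, and io-std features enabled.

use ollama_rs::{generation::completion::request::GenerationRequest, Ollama};
use tokio::io::{stdin, stdout, AsyncBufReadExt, AsyncWriteExt, BufReader};

#[tokio::main]
async fn main() {
    let ollama = Ollama::default();
    let mut lines = BufReader::new(stdin()).lines();
    let mut out = stdout();

    loop {
        // Prompt the user and wait for one line of input; EOF ends the session
        out.write_all(b"> ").await.unwrap();
        out.flush().await.unwrap();
        let Some(prompt) = lines.next_line().await.unwrap() else { break };

        // Send the prompt to the model and print the full response
        let res = ollama
            .generate(GenerationRequest::new("llama2:latest".to_string(), prompt))
            .await
            .unwrap();
        println!("{}", res.response);
    }
}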
These examples use poor error handling for simplicity, but you should handle errors properly in your code.
let model = "llama2:latest".to_string();
let prompt = "Why is the sky blue?".to_string();
let res = ollama.generate(GenerationRequest::new(model, prompt)).await;
if let Ok(res) = res {
println!("{}", res.response);
}
OUTPUTS: The sky appears blue because of a phenomenon called Rayleigh scattering...
Requires the stream feature.
let model = "llama2:latest".to_string();
let prompt = "Why is the sky blue?".to_string();
let mut stream = ollama.generate_stream(GenerationRequest::new(model, prompt)).await.unwrap();
let mut stdout = tokio::io::stdout();
while let Some(res) = stream.next().await {
let res = res.unwrap();
stdout.write(res.response.as_bytes()).await.unwrap();
stdout.flush().await.unwrap();
}
Same output as above but streamed.
let res = ollama.list_local_models().await.unwrap();
Returns a vector of Model structs.
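For example, to list each local model by name (assuming the Model struct exposes a name field, as in the upstream crate):

for model in res {
    // Print the name of each locally available model
    println!("{}", model.name);
}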
let res = ollama.show_model_info("llama2:latest".to_string()).await.unwrap();
Returns a ModelInfo struct.
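For instance, to inspect the model's Modelfile and prompt template (field names taken from the upstream ollama-rs ModelInfo struct; treat them as assumptions for this fork):

// Both fields are plain strings in the upstream struct definition
println!("{}", res.modelfile);
println!("{}", res.template);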
let res = ollama.create_model("model".into(), "/tmp/Modelfile.example".into()).await.unwrap();
Returns a CreateModelStatus struct representing the final status of the model creation.
Requires the stream feature.
use tokio_stream::StreamExt;

let mut res = ollama.create_model_stream("model".into(), "/tmp/Modelfile.example".into()).await.unwrap();

while let Some(res) = res.next().await {
    let res = res.unwrap();
    // Handle each status update here, e.g. log it or report progress
}
Returns a CreateModelStatusStream that streams every status update of the model creation.
// Copy a model to a new name
ollama.copy_model("mario".into(), "mario_copy".into()).await.unwrap();

// Delete the copy
ollama.delete_model("mario_copy".into()).await.unwrap();
let prompt = "Why is the sky blue?".to_string();
let res = ollama.generate_embeddings("llama2:latest".to_string(), prompt, None).await.unwrap();
Returns a GenerateEmbeddingsResponse struct containing the embeddings (a vector of floats).
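As a sketch of what you might do with the result, here is a cosine-similarity comparison between two prompts. It assumes the response exposes an embeddings field of type Vec<f64>, per the description above; cosine_similarity is a hypothetical helper, not part of the library.

// Hypothetical helper: cosine similarity between two embedding vectors
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}

let a = ollama.generate_embeddings("llama2:latest".to_string(), "Why is the sky blue?".to_string(), None).await.unwrap();
let b = ollama.generate_embeddings("llama2:latest".to_string(), "Why is the ocean blue?".to_string(), None).await.unwrap();

// Values near 1.0 indicate semantically similar prompts
println!("similarity: {}", cosine_similarity(&a.embeddings, &b.embeddings));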