plainllm

version: 1.2.0
created_at: 2025-05-29 17:20:40.466223+00
updated_at: 2025-06-03 10:43:22.206811+00
description: A plain & simple LLM client
id: 1694050
size: 147,291
Victor Bjelkholm (victorb)

README

PlainLLM

This crate offers a small client for interacting with a language model API. It focuses on three operations:

  1. ask – single question returning plain text.
  2. call_llm – general chat interface returning an LLMResponse and supporting streaming and tool calls.
  3. call_llm_structured – like call_llm but automatically parses the answer into your own type.

All methods return Result<T, plainllm::Error>, where the error covers HTTP failures, JSON parsing problems, and errors raised while executing tools.

Basic usage

use plainllm::client::{PlainLLM, LLMOptions};

#[tokio::main]
async fn main() {
    // Connect to a local server; the second argument is the API token.
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    // ask() sends a single question and returns the answer as plain text.
    let answer = llm.ask("model", "Hello?", &LLMOptions::new()).await.unwrap();
    println!("{}", answer);
}
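
Instead of unwrap, the Result can be matched explicitly. A minimal sketch, assuming only that plainllm::Error implements Display:

use plainllm::client::{PlainLLM, LLMOptions};

#[tokio::main]
async fn main() {
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    // Handle the Result instead of unwrapping it.
    match llm.ask("model", "Hello?", &LLMOptions::new()).await {
        Ok(answer) => println!("{}", answer),
        Err(err) => eprintln!("request failed: {}", err),
    }
}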

General calls

call_llm takes a model name, a list of messages, and a set of LLMOptions. It returns an LLMResponse with the full data from the server.

use plainllm::client::{PlainLLM, Message, LLMOptions};

// Runs inside an async fn that returns a Result, so `?` works.
let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
let messages = vec![Message::new("user", "hi")];
let opts = LLMOptions::new()
    .streaming(true)      // stream tokens as they arrive
    .temperature(0.7)
    .max_tokens(100);
let resp = llm.call_llm("model", messages, &opts).await?;

LLMOptions lets you enable streaming and tweak parameters like temperature, top_p or max_tokens. You can also attach event handlers such as on_token, on_start_thinking, on_stop_thinking, on_thinking, on_tool_call and on_tool_result.
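
For example, tokens can be printed as they stream in. A minimal sketch, with the assumption that on_token takes a closure receiving each token as a string; the real signature may differ:

let opts = LLMOptions::new()
    .streaming(true)
    .on_token(|token| print!("{}", token)); // assumed: closure over a &str token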

Structured output

call_llm_structured requires a type that implements Deserialize and JsonSchema. The response is parsed directly into that type.

use serde::{Deserialize, Serialize};
use schemars::JsonSchema;
use plainllm::client::{PlainLLM, Message, LLMOptions};

// The target type: the response is parsed directly into this struct.
#[derive(Serialize, Deserialize, JsonSchema, Debug)]
struct Answer { value: String }

// Runs inside an async fn that returns a Result.
let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
let messages = vec![Message::new("user", "give json")];
let res: Answer = llm.call_llm_structured("model", messages, &LLMOptions::new()).await?;
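
Nested structures work the same way, since the serde and schemars derives recurse through field types. A sketch with hypothetical field names:

use serde::{Deserialize, Serialize};
use schemars::JsonSchema;

#[derive(Serialize, Deserialize, JsonSchema, Debug)]
struct Forecast { city: String, days: Vec<DayForecast> }

#[derive(Serialize, Deserialize, JsonSchema, Debug)]
struct DayForecast { date: String, temperature_c: f32 }

// Used exactly like Answer above:
// let res: Forecast = llm.call_llm_structured("model", messages, &opts).await?;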

Tools

Functions can be exposed to the model through a ToolRegistry. Tools are built with FunctionTool::from_type or FunctionToolBuilder and registered on the registry. call_llm will automatically execute tool requests when a registry is provided in LLMOptions.

use serde::{Deserialize, Serialize};
use schemars::JsonSchema;
use plainllm::client::{PlainLLM, Message, ToolRegistry, FunctionTool, LLMOptions};

// Arguments the model supplies when it calls the tool.
#[derive(Serialize, Deserialize, JsonSchema)]
struct EchoArgs { text: String }

// The tool itself: an async fn taking the argument struct.
async fn echo(args: EchoArgs) -> String { args.text }

let mut registry = ToolRegistry::new();
// from_type derives the JSON schema from EchoArgs; "echo" is the tool
// name and "Echo" its description.
registry.register(FunctionTool::from_type::<EchoArgs>("echo", "Echo"), echo);

// Runs inside an async fn that returns a Result.
let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
let messages = vec![Message::new("user", "say hello")];
let opts = LLMOptions::new().with_tools(&registry);
let resp = llm.call_llm("model", messages, &opts).await?;
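
The tool loop can be observed with the handlers listed under LLMOptions. A sketch under assumptions: the handler signatures are undocumented here, so closures over string arguments are a guess:

let opts = LLMOptions::new()
    .with_tools(&registry)
    .on_tool_call(|name, args| eprintln!("calling {}: {}", name, args))        // assumed signature
    .on_tool_result(|name, result| eprintln!("{} returned: {}", name, result)); // assumed signature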

License

MIT 2025 - Victor Bjelkholm
