| Crates.io | plainllm |
| lib.rs | plainllm |
| version | 1.2.0 |
| created_at | 2025-05-29 17:20:40.466223+00 |
| updated_at | 2025-06-03 10:43:22.206811+00 |
| description | A plain & simple LLM client |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1694050 |
| size | 147,291 |
This crate offers a small client for interacting with a language model API. It focuses on three operations:
- `ask` – single question returning plain text.
- `call_llm` – general chat interface returning an `LLMResponse`, with support for streaming and tool calls.
- `call_llm_structured` – like `call_llm`, but automatically parses the answer into your own type.

All methods return `Result<T, plainllm::Error>`, where the error describes HTTP or JSON failures as well as problems executing tools.
```rust
use plainllm::client::{PlainLLM, LLMOptions};

#[tokio::main]
async fn main() {
    // Point the client at a server and ask a single question.
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    let answer = llm.ask("model", "Hello?", &LLMOptions::new()).await.unwrap();
    println!("{}", answer);
}
```
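Since `ask` returns its answer inside `Result<_, plainllm::Error>`, you can also handle failures explicitly instead of unwrapping. A minimal sketch, assuming `plainllm::Error` implements `Debug` (typical for error types, but not verified here):

```rust
use plainllm::client::{PlainLLM, LLMOptions};

async fn ask_checked() {
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    // Sketch only: assumes plainllm::Error implements Debug for printing.
    match llm.ask("model", "Hello?", &LLMOptions::new()).await {
        Ok(answer) => println!("{answer}"),
        Err(err) => eprintln!("request failed: {err:?}"),
    }
}
```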
`call_llm` takes a model name, a list of messages, and `LLMOptions` for optional settings.
It returns an `LLMResponse` with the full data from the server.
```rust
use plainllm::client::{PlainLLM, Message, LLMOptions};

async fn chat() -> Result<(), plainllm::Error> {
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    let messages = vec![Message::new("user", "hi")];
    // Enable streaming and tune the sampling parameters.
    let opts = LLMOptions::new()
        .streaming(true)
        .temperature(0.7)
        .max_tokens(100);
    let _resp = llm.call_llm("model", messages, &opts).await?;
    Ok(())
}
```
`LLMOptions` lets you enable streaming and tweak parameters like
`temperature`, `top_p`, or `max_tokens`. You can also attach event handlers such as
`on_token`, `on_start_thinking`, `on_stop_thinking`, `on_thinking`,
`on_tool_call`, and `on_tool_result`.
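As a rough sketch of wiring up a streaming handler (the exact closure signature that `on_token` expects is an assumption here; check the crate docs):

```rust
use plainllm::client::LLMOptions;

// Sketch only: assumes `on_token` accepts a closure receiving each streamed
// token as text; the real callback signature may differ.
fn streaming_options() -> LLMOptions {
    LLMOptions::new()
        .streaming(true)
        .on_token(|token| print!("{token}"))
}
```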
`call_llm_structured` requires a type that implements `Deserialize` and
`JsonSchema`. The response is parsed directly into that type.
```rust
use serde::{Deserialize, Serialize};
use schemars::JsonSchema;
use plainllm::client::{PlainLLM, Message, LLMOptions};

#[derive(Serialize, Deserialize, JsonSchema, Debug)]
struct Answer { value: String }

async fn structured() -> Result<(), plainllm::Error> {
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    let messages = vec![Message::new("user", "give json")];
    let res: Answer = llm.call_llm_structured("model", messages, &LLMOptions::new()).await?;
    println!("{:?}", res);
    Ok(())
}
```
Functions can be exposed to the model through a `ToolRegistry`.
Tools are built with `FunctionTool::from_type` or `FunctionToolBuilder` and
registered on the registry. `call_llm` will automatically execute tool
requests when a registry is provided in `LLMOptions`.
```rust
use serde::{Deserialize, Serialize};
use schemars::JsonSchema;
use plainllm::client::{PlainLLM, Message, ToolRegistry, FunctionTool, LLMOptions};

#[derive(Serialize, Deserialize, JsonSchema)]
struct EchoArgs { text: String }

// Tool implementation: echoes the given text back to the model.
async fn echo(args: EchoArgs) -> String { args.text }

async fn with_tools() -> Result<(), plainllm::Error> {
    let mut registry = ToolRegistry::new();
    registry.register(FunctionTool::from_type::<EchoArgs>("echo", "Echo"), echo);
    let llm = PlainLLM::new("http://127.0.0.1:1234", "token");
    let messages = vec![Message::new("user", "say hello")];
    let opts = LLMOptions::new().with_tools(&registry);
    let _resp = llm.call_llm("model", messages, &opts).await?;
    Ok(())
}
```
MIT 2025 - Victor Bjelkholm