| Crates.io | memvid-ask-model |
| lib.rs | memvid-ask-model |
| version | 2.0.135 |
| created_at | 2025-11-18 17:16:30.695752+00 |
| updated_at | 2026-01-25 19:18:28.509748+00 |
| description | LLM inference module for Memvid Q&A with local and cloud model support |
| homepage | |
| repository | https://github.com/memvid/memvid |
| max_upload_size | |
| id | 1938715 |
| size | 226,926 |
LLM inference module for Memvid Q&A with local and cloud model support.
memvid-ask-model provides the LLM inference layer for Memvid's Q&A functionality. It supports both local inference via llama.cpp and hosted inference through cloud APIs (OpenAI, Anthropic Claude, Google Gemini).
```toml
[dependencies]
memvid-ask-model = "2.0.135"
# The example below also reads search results via memvid-core:
memvid-core = "2.0.135" # version assumed to track memvid-ask-model
```
```rust
use memvid_ask_model::run_model_inference;
use memvid_core::Memvid;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Get search results from memvid-core
    let mem = Memvid::open("knowledge.mv2")?;
    let hits = mem.find("topic", 5)?;

    // Run inference with the local model
    let answer = run_model_inference(
        "What is this about?",
        &hits,
        None, // None selects the local model
    )?;
    println!("{answer}");

    // Or use a cloud API
    let answer = run_model_inference(
        "Summarize the findings",
        &hits,
        Some("openai"), // requires OPENAI_API_KEY
    )?;
    println!("{answer}");

    Ok(())
}
```
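Since the third argument selects the backend, switching between local and cloud inference at runtime is just a matter of passing `None` or `Some(provider)`. A minimal sketch of driving that from an environment variable; `MEMVID_PROVIDER` is an illustrative name, not something the crate reads:

```rust
use std::env;

// Pick the backend at runtime: a set MEMVID_PROVIDER (e.g. "openai",
// "claude", "gemini") selects a cloud API; unset means run locally.
// MEMVID_PROVIDER is an application-side convention, not crate behavior.
let provider = env::var("MEMVID_PROVIDER").ok();
let answer = run_model_inference("What is this about?", &hits, provider.as_deref())?;
```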
For cloud models, set the appropriate API key:

```sh
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
```
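A graceful-degradation pattern you might layer on top: only select a cloud backend when its key is actually configured, and fall back to the local model otherwise. The fallback policy here is an illustrative application-side choice, not crate behavior:

```rust
use std::env;

// Use OpenAI only if its key is present; None falls back to the local model.
let provider = if env::var("OPENAI_API_KEY").is_ok() {
    Some("openai")
} else {
    None
};
let answer = run_model_inference("Summarize the findings", &hits, provider)?;
```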
Licensed under the Apache License, Version 2.0.