| Field | Value |
|---|---|
| Crate | llmvm-outsource |
| Version | 1.3.0 |
| Repository | https://github.com/djandries/llmvm |
| Created | 2023-08-09 |
| Updated | 2024-06-23 |
An llmvm backend which sends text and chat generation requests to known hosted language model providers.
Supported providers:

- OpenAI
- Hugging Face
- Anthropic
- Ollama
Example of an llmvm model ID for this backend: outsource/openai-chat/gpt-3.5-turbo
Install this backend using cargo:

```
cargo install llmvm-outsource
```
The backend can be invoked directly, via llmvm-core, or via a frontend that utilizes llmvm-core.

To invoke directly, execute `llmvm-outsource -h` for details.

`llmvm-outsource http` can be invoked to start an HTTP server for remote clients.
Run the backend executable to generate a configuration file at:

- Linux: `~/.config/llmvm/outsource.toml`
- macOS: `~/Library/Application Support/com.djandries.llmvm/outsource.toml`
- Windows: `AppData\Roaming\djandries\llmvm\config\outsource.toml`
| Key | Required? | Description |
|---|---|---|
| `openai_api_key` | If using OpenAI | API key for OpenAI requests. |
| `huggingface_api_key` | If using Hugging Face | API key for Hugging Face requests. |
| `anthropic_api_key` | If using Anthropic | API key for Anthropic requests. |
| `ollama_endpoint` | If using Ollama | Endpoint for Ollama requests (defaults to `http://127.0.0.1:11434/api/generate`). |
| `openai_endpoint` | If using a custom OpenAI server | Custom endpoint for all OpenAI requests. Supports any OpenAI API compatible server (e.g. fastchat). |
| `tracing_directive` | No | Logging directive/level for tracing. |
| `stdio_server` | No | Stdio server settings. See llmvm-protocol for details. |
| `http_server` | No | HTTP server settings. See llmvm-protocol for details. |
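For illustration, a minimal `outsource.toml` might look like the following. The key names come from the table above; the placeholder values are assumptions, not real credentials or required settings.

```toml
# Only the keys for the providers you actually use are required.
openai_api_key = "sk-placeholder"        # placeholder value
anthropic_api_key = "placeholder"        # placeholder value

# Shown here with its documented default
ollama_endpoint = "http://127.0.0.1:11434/api/generate"

# Optional logging directive for tracing
tracing_directive = "info"
```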
OpenAI custom endpoints may be specified via the config file (recommended, see above), or via the model ID at runtime. HuggingFace custom endpoints may only be specified via the model ID.
Within the model ID, custom hosted endpoints may be specified by supplying the prefix endpoint=, followed by the endpoint URL.

For OpenAI, add the endpoint after the model name. For example, the model ID could be outsource/openai-chat/vicuna-7b-v1.5/endpoint=https://localhost:8000.

For HuggingFace, replace the model name with the endpoint. For example, the model ID could be outsource/huggingface-text/endpoint=https://yourendpointhere.
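The model ID layout described above (`<backend>/<provider>/<model name>[/endpoint=<url>]`, with the model name replaced by the endpoint for HuggingFace) can be sketched as a small parser. This is purely illustrative: `parse_model_id` is a hypothetical helper written for this README, not part of the llmvm API.

```rust
/// Hypothetical helper (not part of llmvm) that splits a model ID into
/// (backend, provider, optional model name, optional endpoint URL),
/// following the layout documented above.
fn parse_model_id(id: &str) -> Option<(String, String, Option<String>, Option<String>)> {
    // Split into at most three pieces so slashes inside the endpoint URL survive.
    let mut parts = id.splitn(3, '/');
    let backend = parts.next()?.to_string();
    let provider = parts.next()?.to_string();
    let rest = parts.next()?;

    // HuggingFace style: the model name is replaced by the endpoint.
    if let Some(url) = rest.strip_prefix("endpoint=") {
        return Some((backend, provider, None, Some(url.to_string())));
    }

    // OpenAI style: the endpoint is appended after the model name.
    match rest.split_once("/endpoint=") {
        Some((name, url)) => Some((backend, provider, Some(name.to_string()), Some(url.to_string()))),
        None => Some((backend, provider, Some(rest.to_string()), None)),
    }
}

fn main() {
    let id = "outsource/openai-chat/vicuna-7b-v1.5/endpoint=https://localhost:8000";
    println!("{:?}", parse_model_id(id));
}
```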