| Crates.io | vibeland-conf |
| lib.rs | vibeland-conf |
| version | 0.1.0 |
| created_at | 2025-07-31 15:22:57.743568+00 |
| updated_at | 2025-07-31 15:22:57.743568+00 |
| description | Shared configuration and LLM client harness for Vibeland project |
| homepage | |
| repository | https://github.com/your-org/vibeland-conf |
| max_upload_size | |
| id | 1775220 |
| size | 82,400 |
A thin wrapper that instantiates the LLM client used by the Vibeland project.
Mode ollama:
export VIBELAND_LLM_MODE="ollama"
export VIBELAND_OLLAMA_MODEL="llama3.1"
export VIBELAND_OLLAMA_URL="http://localhost:11434" # optional, defaults to this
export VIBELAND_OLLAMA_MAX_TOKENS="1024" # optional
export VIBELAND_OLLAMA_TEMPERATURE="0.7" # optional
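Reading the ollama-mode variables with the documented defaults could be sketched as below. The `OllamaConfig` struct and `ollama_config_from_env` function are illustrative assumptions, not the crate's actual API; only the variable names and default values come from this README.

```rust
use std::env;

// Hypothetical config struct; field names are illustrative, not the
// actual vibeland-conf API.
#[derive(Debug)]
struct OllamaConfig {
    model: String,
    url: String,
    max_tokens: u32,
    temperature: f32,
}

// Read the VIBELAND_OLLAMA_* variables, falling back to the defaults
// documented above for the optional ones.
fn ollama_config_from_env() -> OllamaConfig {
    OllamaConfig {
        model: env::var("VIBELAND_OLLAMA_MODEL").unwrap_or_else(|_| "llama3.1".into()),
        // Optional: defaults to the local Ollama endpoint.
        url: env::var("VIBELAND_OLLAMA_URL")
            .unwrap_or_else(|_| "http://localhost:11434".into()),
        max_tokens: env::var("VIBELAND_OLLAMA_MAX_TOKENS")
            .ok()
            .and_then(|v| v.parse().ok())
            .unwrap_or(1024),
        temperature: env::var("VIBELAND_OLLAMA_TEMPERATURE")
            .ok()
            .and_then(|v| v.parse().ok())
            .unwrap_or(0.7),
    }
}

fn main() {
    let cfg = ollama_config_from_env();
    println!("{:?}", cfg);
}
```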
Mode remote:
export VIBELAND_LLM_MODE="remote"
export VIBELAND_REMOTE_PROVIDER="openai"
export VIBELAND_REMOTE_MODEL="gpt-4"
export VIBELAND_REMOTE_API_KEY="sk-your-api-key-here"
export VIBELAND_REMOTE_MAX_TOKENS="1024" # optional
export VIBELAND_REMOTE_TEMPERATURE="0.7" # optional
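Selecting between the two modes presumably hinges on `VIBELAND_LLM_MODE`. A minimal sketch of that dispatch, assuming an illustrative `LlmMode` enum (the crate's real types are not shown in this README):

```rust
use std::env;

// Hypothetical mode enum; names are illustrative, not the real API.
#[derive(Debug, PartialEq)]
enum LlmMode {
    Ollama,
    Remote,
}

// Map the raw VIBELAND_LLM_MODE value onto a mode, rejecting anything
// other than the two documented values.
fn parse_mode(raw: Option<&str>) -> Result<LlmMode, String> {
    match raw {
        Some("ollama") => Ok(LlmMode::Ollama),
        Some("remote") => Ok(LlmMode::Remote),
        Some(other) => Err(format!("unknown VIBELAND_LLM_MODE: {other}")),
        None => Err("VIBELAND_LLM_MODE is not set".to_string()),
    }
}

fn main() {
    let mode = parse_mode(env::var("VIBELAND_LLM_MODE").ok().as_deref());
    println!("{:?}", mode);
}
```

Taking an `Option<&str>` rather than reading the environment inside the function keeps the parsing logic testable without mutating process state.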
# Run all tests
cargo test
# Run with specific features
cargo test --features="ollama"
cargo test --features="openai"
cargo test --features="ollama,openai"
# Run integration tests only
cargo test --test integration_tests
# Run with verbose output
cargo test -- --nocapture
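For the `--features` flags above to work, the crate's Cargo.toml would need to declare matching features, along these lines (an assumed layout; the actual manifest is not shown here):

```toml
# Illustrative feature declarations; dependency gating is a guess.
[features]
ollama = []
openai = []
```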