Crates.io | ask_llm |
lib.rs | ask_llm |
version | 0.1.4 |
created_at | 2025-03-14 19:14:01.741681+00 |
updated_at | 2025-06-29 23:20:52.702105+00 |
description | make a request to whatever llm is the best these days, without hardcoding model/provider |
homepage | https://github.com/valeratrades/ask_llm |
repository | https://github.com/valeratrades/ask_llm |
max_upload_size | |
id | 1592623 |
size | 105,399 |
Layer for LLM requests, generic over models and providers.

Provides two simple primitives: the `oneshot` and `conversation` functions, which follow the standard interaction logic that most providers share.
The model is then chosen automatically based on whether we care most about cost, speed, or quality. Currently this is expressed by choosing `Model::{Fast, Medium, Slow}`, from which a concrete model is picked as hardcoded in the current implementation.
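To illustrate the tier idea, here is a minimal self-contained sketch. All names below (the `resolve` method, the placeholder model strings, the `oneshot` signature) are assumptions for illustration, not the crate's actual API or its real model mappings:

```rust
/// Hypothetical sketch of the crate's model-tier concept.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Model {
    Fast,
    Medium,
    Slow,
}

impl Model {
    /// The crate hardcodes which concrete model each tier resolves to;
    /// these strings are placeholders, not the real mappings.
    fn resolve(self) -> &'static str {
        match self {
            Model::Fast => "provider/fast-model",
            Model::Medium => "provider/medium-model",
            Model::Slow => "provider/slow-model",
        }
    }
}

/// Stand-in for a `oneshot`-style call: one prompt in, one answer out.
fn oneshot(model: Model, prompt: &str) -> String {
    format!("[{}] reply to: {}", model.resolve(), prompt)
}

fn main() {
    // The caller states a priority (here: speed); the concrete model is
    // an implementation detail that can change between minor versions.
    let answer = oneshot(Model::Fast, "Summarize this diff");
    println!("{answer}");
}
```

The point of the indirection is that callers express intent (fast/medium/slow) rather than pinning a provider-specific model name.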
When used as a lib, import with

```toml
ask_llm = { version = "*", default-features = false }
```

as `clap` would be brought in otherwise; it is only necessary for the CLI part to function.
The CLI wraps the lib with `clap`. It uses `oneshot` by default; if you need `conversation`, it is read from and written to JSON files.
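The README does not specify the conversation file's schema. A plausible shape, purely an assumption here and not the crate's documented format, would be a list of role-tagged messages:

```json
[
  { "role": "user", "content": "What does this error mean?" },
  { "role": "assistant", "content": "It means the lock file is stale." },
  { "role": "user", "content": "How do I fix it?" }
]
```

Check the repository for the actual format before relying on this layout.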
Note that due to specifics of the implementation, minor version bumps can change effective behavior by changing which model processes the request. Only boundary API changes will be marked with major versions.