llmterm

Your friendly local LLM terminal companion

Repository: https://github.com/timschmidt/llmterm
Author: Timothy Schmidt (timschmidt)

llmterm offers suggestions based on your shell usage

[screenshot: llmterm]

cuda

The kalosm crate is used to interface with the LLM. CUDA support is enabled by default. To choose another inference backend, edit the Cargo.toml file and rebuild.
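
A minimal sketch of what that edit might look like, assuming backend selection happens through kalosm's Cargo features; the feature names below are illustrative assumptions, so check kalosm's documentation for the real ones:

    [dependencies]
    # Illustrative only: swap the accelerator feature to change backends,
    # e.g. "cuda" for NVIDIA GPUs or "metal" on macOS. Verify the actual
    # feature names against the kalosm documentation before using this.
    kalosm = { version = "0.3", features = ["language", "cuda"] }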

to build

cargo build --release

to run

cargo run --release

to install

cargo install llmterm

models

Select a model with the --model flag:

cargo run --release -- --model llama_3_1_8b_chat

Supported models:

  • llama_3_1_8b_chat
  • mistral_7b_instruct_2
  • phi_3_5_mini_4k_instruct
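
These names mirror constructors on kalosm's LlamaSource. As a sketch of how the flag could map onto a model source (load_model is a hypothetical helper, not llmterm's actual code, and kalosm's exact API surface may differ by version):

    // Hypothetical helper: map a --model name onto a kalosm model source.
    // Assumes kalosm's `language` API; verify names against your version.
    use kalosm::language::{Llama, LlamaSource};

    async fn load_model(name: &str) -> anyhow::Result<Llama> {
        let source = match name {
            "llama_3_1_8b_chat" => LlamaSource::llama_3_1_8b_chat(),
            "mistral_7b_instruct_2" => LlamaSource::mistral_7b_instruct_2(),
            "phi_3_5_mini_4k_instruct" => LlamaSource::phi_3_5_mini_4k_instruct(),
            other => anyhow::bail!("unknown model: {other}"),
        };
        Ok(Llama::builder().with_source(source).build().await?)
    }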

to exit

Use Ctrl-C, or type exit or quit.

todo

easy on ramp

  • command line switch for different shells
  • command line switch to suggest only a command
  • check whether the LLM response is empty; if so, pop the last activity off the buffer and try again (see the sketch after this list)
  • allow loading of all local models supported by kalosm by name
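
One possible shape for the empty-response retry item above, purely illustrative (suggest_with_retry and the ask callback are hypothetical, not existing llmterm code):

    // Illustrative sketch: if the model returns an empty suggestion, drop the
    // most recent shell activity from the context buffer and ask again.
    fn suggest_with_retry(
        buffer: &mut Vec<String>,
        ask: impl Fn(&[String]) -> String,
    ) -> Option<String> {
        for _ in 0..3 {
            let response = ask(buffer.as_slice());
            if !response.trim().is_empty() {
                return Some(response);
            }
            buffer.pop()?; // empty reply: shorten the context and retry
        }
        None // context exhausted or retry limit reached
    }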

more challenging

  • cargo fmt