Field | Value |
---|---|
Crates.io | llm-rs |
lib.rs | llm-rs |
version | 0.1.1 |
source | src |
created_at | 2023-04-25 01:28:40.869175 |
updated_at | 2023-05-31 08:37:25.721957 |
description | A library, with a command line interface, to exploit Large Language Models |
homepage | |
repository | https://github.com/worikgh/llm-rs/ |
max_upload_size | |
id | 848098 |
size | 3,339,052 |
This crate interacts only with OpenAI's language models. It provides a library that exposes the various API endpoints, and a command line binary (`cli`) to use it.

To use: `cargo run --bin cli -- --help`
Command line argument definitions:

```
Usage: cli [OPTIONS]

Options:
  -m, --model <MODEL>                  The model to use [default: text-davinci-003]
  -t, --max-tokens <MAX_TOKENS>        Maximum tokens to return [default: 2000]
  -T, --temperature <TEMPERATURE>      Temperature for the model [default: 0.9]
      --api-key <API_KEY>              The secret key [default: environment variable `OPENAI_API_KEY`]
  -d, --mode <MODE>                    The initial mode (API endpoint) [default: completions]
  -r, --record-file <RECORD_FILE>      The file that prompts and replies are recorded in [default: reply.txt]
  -p, --system-prompt <SYSTEM_PROMPT>  The system prompt sent to the chat model
  -h, --help                           Print help
  -V, --version                        Print version
```
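The `--api-key` fallback to the `OPENAI_API_KEY` environment variable can be sketched in Rust as follows; `resolve_api_key` is an illustrative helper, not a function from the crate:

```rust
use std::env;

/// Resolve the API key: an explicit flag value wins; otherwise fall
/// back to the OPENAI_API_KEY environment variable.
fn resolve_api_key(flag: Option<String>) -> Option<String> {
    flag.or_else(|| env::var("OPENAI_API_KEY").ok())
}

fn main() {
    // A key given on the command line takes precedence.
    assert_eq!(
        resolve_api_key(Some("sk-flag".to_string())),
        Some("sk-flag".to_string())
    );
    // With no flag, the environment variable (if any) is used.
    assert_eq!(resolve_api_key(None), env::var("OPENAI_API_KEY").ok());
    println!("api key resolution ok");
}
```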
When the programme is running, enter prompts at the ">" prompt. Generally, text entered is sent to the LLM. Text that starts with "! " is a command to the programme itself.

Meta commands that affect the behaviour of the programme are prefixed with a ! character, and are:
Command | Result |
---|---|
! p | Display settings |
! md | Display all models available |
! ms | |
! ml | List modes |
! v | Set verbosity |
! k | Set max tokens for completions |
! t | Set temperature for completions |
! sp | Set system prompt |
! ci | Clear image mask |
! a | |
! ci | Clear the image stored for editing |
! f | List the files stored on the server |
! fu | |
! fd | |
! fi | |
! fc | |
! fl | Associate a file at a path with a name for use in prompts like: {name} |
! dx | Display context (for chat) |
! cx | Clear context |
! sx | Save context to a file |
! rx | Restore context from a file |
! ? | This text |
C-q or C-c to quit.
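The "! " dispatch described above (meta command versus LLM prompt) can be sketched as follows; the `Input` enum and `classify` function are illustrative, not the crate's actual types:

```rust
/// What to do with a line typed at the ">" prompt.
#[derive(Debug, PartialEq)]
enum Input<'a> {
    /// A line starting with "! " is a meta command for the programme.
    Meta(&'a str),
    /// Anything else is sent to the LLM as a prompt.
    Prompt(&'a str),
}

fn classify(line: &str) -> Input<'_> {
    match line.strip_prefix("! ") {
        Some(cmd) => Input::Meta(cmd.trim()),
        None => Input::Prompt(line),
    }
}

fn main() {
    assert_eq!(classify("! p"), Input::Meta("p"));
    assert_eq!(classify("Tell me a story"), Input::Prompt("Tell me a story"));
    println!("classification ok");
}
```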
Save and restore the context with `! sx <path>` and `! rx <path>`. This does not save the system prompt, yet.

`! fl <name> <path>` associates the file at `path` with `name` for use in prompts, e.g. "Summarise {name}".

The LLMs can be used in different modes. Each mode corresponds to an API endpoint.
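As a sketch, the modes mentioned in this README could be modelled as an enum; the crate may define more modes (`! ml` lists them), and these names are assumptions drawn from the text, not the crate's actual type:

```rust
/// Modes mentioned in this README; `! ml` lists the full set.
#[derive(Debug, PartialEq)]
enum Mode {
    Completions, // the default mode
    Chat,
    Image,
    ImageEdit,
}

/// Map a mode name, as typed by the user, to a Mode variant.
fn parse_mode(name: &str) -> Option<Mode> {
    match name {
        "completions" => Some(Mode::Completions),
        "chat" => Some(Mode::Chat),
        "image" => Some(Mode::Image),
        "image_edit" => Some(Mode::ImageEdit),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_mode("completions"), Some(Mode::Completions));
    assert_eq!(parse_mode("image_edit"), Some(Mode::ImageEdit));
    assert_eq!(parse_mode("bogus"), None);
}
```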
The meaning of the prompts changes with the mode.

In chat mode the system prompt is sent with `role` set to "system"; it defines the characteristics of the machine.
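A chat request's messages array can be sketched as below; this hand-builds the JSON for illustration only (no string escaping), where a real client would use `serde_json`:

```rust
/// Build the messages array for a chat request: the system prompt
/// goes first with role "system", followed by the user's prompt.
/// Illustrative only: real code must JSON-escape the strings.
fn chat_messages(system_prompt: &str, user_prompt: &str) -> String {
    format!(
        r#"[{{"role":"system","content":"{}"}},{{"role":"user","content":"{}"}}]"#,
        system_prompt, user_prompt
    )
}

fn main() {
    let body = chat_messages("You are a terse assistant", "Hello");
    assert!(body.contains(r#""role":"system""#));
    assert!(body.contains(r#""role":"user""#));
    println!("{}", body);
}
```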
Image Generate or edit images based on a prompt.

Enter image mode with the meta command: `! m image [image to edit]`. If you provide an image to edit, "ImageEdit" mode is entered instead, and the supplied image is edited.

If an image is not supplied, the user enters a prompt and an image is generated by OpenAI based on that prompt. It is stored for image editing. Generating a new image overwrites the old one.
Mask To edit an image the process works best if a mask is supplied. This is a 1024x1024 PNG image with a transparent region; the editing will happen in the transparent region. There are two ways to supply a mask: when entering image edit mode, as in `! m image_edit path_to/mask.png`, or with a meta command.

Meta Command The mask can be set or changed at any time using the meta command: `! mask path/to_mask.png`

If no mask is supplied, a 1024x1024 transparent PNG file is created and used.
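The default mask (a fully transparent 1024x1024 image) can be sketched as a raw RGBA pixel buffer; encoding it to an actual PNG file would need a crate such as `png` or `image`, which is not shown here:

```rust
/// A width x height RGBA buffer with every byte zero: alpha 0
/// everywhere means fully transparent, so the whole image is
/// editable. PNG encoding is left to a crate such as `png`.
fn default_mask_pixels(width: usize, height: usize) -> Vec<u8> {
    vec![0u8; width * height * 4] // R, G, B, A all zero per pixel
}

fn main() {
    let mask = default_mask_pixels(1024, 1024);
    assert_eq!(mask.len(), 1024 * 1024 * 4);
    // Every alpha byte (each 4th byte) is zero: fully transparent.
    assert!(mask.iter().skip(3).step_by(4).all(|&a| a == 0));
    println!("default mask buffer ok");
}
```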