| | |
| --- | --- |
| Crates.io | gpto |
| lib.rs | gpto |
| version | 0.2.1 |
| source | src |
| created_at | 2022-12-18 17:57:07.146484 |
| updated_at | 2024-04-07 20:11:05.812219 |
| description | A tiny unofficial OpenAI client |
| homepage | https://github.com/alanvardy/gpto |
| repository | https://github.com/alanvardy/gpto |
| max_upload_size | |
| id | 740524 |
| size | 90,286 |
# An Unofficial OpenAI Terminal Client
```bash
> gpto -h

A tiny unofficial OpenAI client

Usage: gpto [OPTIONS] <COMMAND>

Commands:
  prompt        The prompt(s) to generate completions for. Also accepts text from stdin
  conversation
  help          Print this message or the help of the given subcommand(s)

Options:
  -d, --disable-spinner            Disable the spinner and message when querying
  -s, --suffix <SUFFIX>            Text to be appended to end of response [default: ]
  -m, --model <MODEL>              Model to use for completions, defaults to gpt-3.5-turbo and can be set in config
  -c, --config <CONFIG>            Absolute path of configuration. Defaults to $XDG_CONFIG_HOME/gpto.cfg
  -e, --endpoint <ENDPOINT>        URL to be queried, defaults to https://api.openai.com and can be set in config
  -t, --temperature <TEMPERATURE>  What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer [default: 1]
  -n, --number <NUMBER>            How many completions to generate for each prompt [default: 1]
  -a, --max-tokens <MAX_TOKENS>    Maximum number of tokens to use for each request [default: 1000]
  -i, --timeout <TIMEOUT>          Maximum length of time in seconds to wait for an API request to complete
  -o, --top-p <TOP_P>              An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both [default: 1]
  -h, --help                       Print help
  -V, --version                    Print version
```
Learn more about how to use text completion
```bash
# Linux and MacOS
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
Install gpto

```bash
cargo install gpto
```
```bash
# Use yay or another AUR helper
yay gpto-bin
```
Clone the project

```bash
git clone git@github.com:alanvardy/gpto.git
cd gpto
./test.sh # run the tests
cargo build --release
```

You can then find the binary in `target/release/`
Get a completion with default parameters

```bash
> gpto prompt --text "tell me a joke"

Q: What did the fish say when it hit the wall?

A: Dam!
```
Get completions using text from stdin (without displaying the spinner)

```bash
> echo "what is one plus one" | gpto prompt -d

Two
```
Get a completion with a different model (this example uses the leading code completion model). And yes, the generated code is not idiomatic!

Read more about models here. This CLI app uses the /v1/chat/completions endpoint.
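For reference, a request to /v1/chat/completions takes a JSON body like the sketch below. The field names come from the OpenAI API and map onto the CLI flags above (`--model`, `--temperature`, `--number`, `--max-tokens`); the exact payload gpto sends may differ.

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "tell me a joke" }
  ],
  "temperature": 1,
  "n": 1,
  "max_tokens": 1000
}
```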
```bash
> gpto -m gpt-4 prompt -t "language is elixir\nwrite a function that raises an error if the argument is not an integer and multiplies it by 2 if it is an integer"

def multiply_by_two(x)
  raise ArgumentError, "Argument is not an integer" unless x.is_a? Integer

  x * 2
end
```
Timeout is 30s by default; this can be altered by changing the timeout option in gpto.cfg
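A hypothetical gpto.cfg illustrating the options the help text says "can be set in config" (the key names and JSON format here are assumptions for illustration; check the repository for the actual schema):

```json
{
  "token": "sk-your-api-key",
  "model": "gpt-4",
  "endpoint": "https://api.openai.com",
  "timeout": 60
}
```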