# AIO Configuration File README

Welcome to the configuration file README for our project! This README provides an overview of the configuration file's structure and purpose. The configuration file is written in YAML and is designed to fine-tune the behavior of the engines for specific prompts.

## Table of Contents

- [AIO Configuration File README](#aio-configuration-file-readme)
  - [Table of Contents](#table-of-contents)
  - [Introduction](#introduction)
  - [Usage](#usage)
    - [OpenAI](#openai)
  - [Example](#example)
  - [Sample Prompts](#sample-prompts)

## Introduction

The configuration file allows you to define prompts and their associated settings for interactions with the AI API (currently, only the OpenAI GPT-3.5 Turbo model is available). Each engine has a set of prompts, and each prompt has a set of parameters that can be adjusted to control the output generated by the model. This document will guide you through setting up your prompts and using the configuration file effectively.

By default, `aio` tries to read the configuration file from `~/.config/aio/config.yaml`. You can also specify the path to the configuration file with the `--config-path` argument. For example: `aio --config-path ./config.yaml`.

## Usage

### OpenAI

To use the configuration file effectively, follow these steps:

1. **Defining Prompts**: In the configuration file, you can define different prompts under the `openai.prompts` section.
   1. **Name**: The name of the prompt. The name identifies the prompt you select with the `-e|--engine` argument.
   2. **Messages**: The whole prompt consists of several messages of three types:
      - "system" messages provide context and instructions to the model,
      - "user" messages define the user's input or query, and
      - "assistant" messages are used to mock an AI response.

      Use the variable `$input` to represent the input from the command line.
   3. **Parameters**: Parameters such as temperature, top-p, penalties, and max tokens can be adjusted to control the output generated by the model. You can set `max_tokens`, `temperature`, `top_p`, `presence_penalty`, `frequency_penalty`, `best_of`, `n`, and `stop`. Refer to the documentation of the [*chat create* OpenAI API](https://platform.openai.com/docs/api-reference/chat/create) for more information about these parameters.
   4. **Models**: You can optionally select the model to use for the prompt. You have to choose a model compatible with OpenAI chat completion; [you can find the list of those models here](https://platform.openai.com/docs/models/model-endpoint-compatibility). By default, the model used is `gpt-3.5-turbo`.

## Example

Here's a snippet of the configuration file structure:

```yaml
openai:
  prompts:
    - name: command
      model: gpt-3.5-turbo # optional
      messages:
        # System message for context
        - role: system
          content: In markdown, write the command...
        # User message
        - role: user
          content: $input
      parameters:
        # Parameters to control model behavior. Each parameter is optional
        temperature: 0
        top-p: 1.0
        frequency-penalty: 0.2
        presence-penalty: 0
        max-tokens: 200
```

## Sample Prompts

Here are examples of prompt definitions within [the sample configuration file](../config.yml) you can find in the repository:

1. **Command Prompt**:
   - System Message: Provides instructions for formatting a command.
   - User Message: Represents user input for the command.
   - Parameters: Parameters controlling the response characteristics.
2. **Ask Prompt**:
   - System Message: Provides a brief introduction to ChatGPT.
   - User Message: Represents user input for the query.
   - Parameters: Parameters influencing the model's response.

---

*Note: This README is a general guide for understanding and using the configuration file. Feel free to customize it according to your needs.*
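The example above does not demonstrate "assistant" messages, which can mock an AI response to steer the style of later answers. A sketch of such a prompt is shown below; the prompt name, message contents, and parameter value are invented for illustration and do not reproduce the repository's sample config:

```yaml
openai:
  prompts:
    - name: ask
      messages:
        # System message for context
        - role: system
          content: You are ChatGPT, a helpful assistant.
        # Mocked exchange: a user turn followed by a canned assistant
        # reply, establishing the tone for subsequent responses
        - role: user
          content: What is YAML?
        - role: assistant
          content: YAML is a human-readable data serialization format.
        # The actual command-line input replaces $input
        - role: user
          content: $input
      parameters:
        temperature: 0.7
```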