# CreateAnswerRequest

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**model** | **String** | ID of the model to use for completion. You can select one of `ada`, `babbage`, `curie`, or `davinci`. | 
**question** | **String** | Question to get answered. | 
**examples** | [**Vec<Vec<String>>**](array.md) | List of (question, answer) pairs that will help steer the model towards the tone and answer format you'd like. We recommend adding 2 to 3 examples. | 
**examples_context** | **String** | A text snippet containing the contextual information used to generate the answers for the `examples` you provide. | 
**documents** | Option<**Vec<String>**> | List of documents from which the answer for the input `question` should be derived. If this is an empty list, the question will be answered based on the question-answer examples. You should specify either `documents` or a `file`, but not both. | [optional]
**file** | Option<**String**> | The ID of an uploaded file that contains documents to search over. See [upload file](/docs/api-reference/files/upload) for how to upload a file of the desired format and purpose. You should specify either `documents` or a `file`, but not both. | [optional]
**search_model** | Option<**String**> | ID of the model to use for [Search](/docs/api-reference/searches/create). You can select one of `ada`, `babbage`, `curie`, or `davinci`. | [optional][default to ada]
**max_rerank** | Option<**i32**> | The maximum number of documents to be ranked by [Search](/docs/api-reference/searches/create) when using `file`. Setting it to a higher value leads to improved accuracy but with increased latency and cost. | [optional][default to 200]
**temperature** | Option<**f32**> | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | [optional][default to 0]
**logprobs** | Option<**i32**> | Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs. | [optional]
**max_tokens** | Option<**i32**> | The maximum number of tokens allowed for the generated answer. | [optional][default to 16]
**stop** | Option<[**crate::models::CreateAnswerRequestStop**](CreateAnswerRequest_stop.md)> |  | [optional]
**n** | Option<**i32**> | How many answers to generate for each question. | [optional][default to 1]
**logit_bias** | Option<[**serde_json::Value**](.md)> | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{\"50256\": -100}` to prevent the <|endoftext|> token from being generated. | [optional]
**return_metadata** | Option<**bool**> | A special boolean flag for showing metadata. If set to `true`, each document entry in the returned JSON will contain a \"metadata\" field. This flag only takes effect when `file` is set. | [optional][default to false]
**return_prompt** | Option<**bool**> | If set to `true`, the returned JSON will include a \"prompt\" field containing the final prompt that was used to request a completion. This is mainly useful for debugging purposes. | [optional][default to false]
**expand** | Option<[**Vec<serde_json::Value>**](serde_json::Value.md)> | If an object name is in the list, we provide the full information of the object; otherwise, we only provide the object ID. Currently we support `completion` and `file` objects for expansion. | [optional][default to []]
**user** | Option<**String**> | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). | [optional]
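For orientation, here is a minimal, self-contained sketch of how these properties serialize into a request body. The struct below merely mirrors the table for illustration; the crate's actual generated type is `crate::models::CreateAnswerRequest` and its constructor may differ. It assumes `serde` (with the `derive` feature) and `serde_json` as dependencies, simplifies `stop` away, and omits several optional fields for brevity.

```rust
use serde::Serialize;
use serde_json::Value;

// Illustrative mirror of the CreateAnswerRequest model documented above; the
// real generated type lives in crate::models. Unset optional fields are
// skipped during serialization, matching the [optional] entries in the table.
#[derive(Serialize)]
struct CreateAnswerRequest {
    model: String,
    question: String,
    examples: Vec<Vec<String>>,
    examples_context: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    documents: Option<Vec<String>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    file: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    max_rerank: Option<i32>,
    #[serde(skip_serializing_if = "Option::is_none")]
    temperature: Option<f32>,
    #[serde(skip_serializing_if = "Option::is_none")]
    logit_bias: Option<Value>,
    // Remaining optional fields (search_model, logprobs, max_tokens, stop, n,
    // return_metadata, return_prompt, expand, user) are omitted for brevity.
}

fn main() {
    let request = CreateAnswerRequest {
        model: "curie".to_owned(),
        question: "Which puppy is happy?".to_owned(),
        // One (question, answer) pair; the table recommends adding 2 to 3.
        examples: vec![vec![
            "What is human life expectancy in the United States?".to_owned(),
            "78 years.".to_owned(),
        ]],
        examples_context: "In 2017, U.S. life expectancy was 78.6 years.".to_owned(),
        // `documents` and `file` are mutually exclusive; inline documents here.
        documents: Some(vec![
            "Puppy A is happy.".to_owned(),
            "Puppy B is sad.".to_owned(),
        ]),
        file: None,
        max_rerank: Some(200),
        temperature: Some(0.0),
        // Bias token ID 50256 (<|endoftext|>) to -100 so it is never generated.
        logit_bias: Some(serde_json::json!({ "50256": -100 })),
    };
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```

The printed JSON matches the wire format this model describes; in actual use you would build the crate's generated struct and pass it to the answers endpoint instead of serializing by hand.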
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)