| Field | Value |
|---|---|
| Crates.io | chat-prompts |
| lib.rs | chat-prompts |
| version | 0.18.3 |
| source | src |
| created_at | 2023-10-23 07:02:27.870815 |
| updated_at | 2024-12-11 06:16:32.043395 |
| description | Chat prompt template |
| repository | https://github.com/LlamaEdge/LlamaEdge |
| size | 229,415 |
chat-prompts
`chat-prompts` is part of the LlamaEdge API Server project. It provides a collection of prompt templates that are used to generate prompts for LLMs (see models at huggingface.co/second-state).
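Conceptually, each template is a pure function from an OpenAI-style message list to a single prompt string. As a minimal illustration in plain Rust (the `Message` type and `render_vicuna_1_0` helper are hypothetical sketches, not this crate's API; the layout follows the `vicuna-1.0-chat` entry listed further down):

```rust
/// A simplified chat message with an OpenAI-style role.
struct Message {
    role: &'static str,
    content: String,
}

/// Render a history into the `vicuna-1.0-chat` layout:
/// `{system} USER: {prompt} ASSISTANT:`
fn render_vicuna_1_0(messages: &[Message]) -> String {
    let mut prompt = String::new();
    for m in messages {
        match m.role {
            "system" => prompt.push_str(&format!("{} ", m.content)),
            "user" => prompt.push_str(&format!("USER: {} ", m.content)),
            "assistant" => prompt.push_str(&format!("ASSISTANT: {} ", m.content)),
            _ => {}
        }
    }
    // Leave the assistant turn open for the model to complete.
    prompt.push_str("ASSISTANT:");
    prompt
}

fn main() {
    let messages = [
        Message { role: "system", content: "A chat between a user and an assistant.".into() },
        Message { role: "user", content: "Hello!".into() },
    ];
    // Prints: A chat between a user and an assistant. USER: Hello! ASSISTANT:
    println!("{}", render_vicuna_1_0(&messages));
}
```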
The available prompt templates are listed below:
baichuan-2
Prompt string
以下内容为人类用户与一位智能助手的对话。
用户:你好!
助手:
Example: second-state/Baichuan2-13B-Chat-GGUF
codellama-instruct
Prompt string
<s>[INST] <<SYS>>
Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: <</SYS>>
{prompt} [/INST]
codellama-super-instruct
Prompt string
<s>Source: system\n\n {system_prompt} <step> Source: user\n\n {user_message_1} <step> Source: assistant\n\n {ai_message_1} <step> Source: user\n\n {user_message_2} <step> Source: assistant\nDestination: user\n\n
chatml
Prompt string
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
Example: second-state/Yi-34B-Chat-GGUF
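A multi-turn history folds into this layout mechanically. A minimal sketch of that fold (the `render_chatml` helper is hypothetical, not this crate's API):

```rust
/// Fold (role, content) turns into the ChatML layout above, ending with
/// an open assistant turn for the model to complete.
fn render_chatml(turns: &[(&str, &str)]) -> String {
    let mut prompt = String::new();
    for (role, content) in turns {
        prompt.push_str(&format!("<|im_start|>{role}\n{content}<|im_end|>\n"));
    }
    prompt.push_str("<|im_start|>assistant");
    prompt
}

fn main() {
    let turns = [
        ("system", "You are a helpful assistant."),
        ("user", "What is the capital of France?"),
    ];
    println!("{}", render_chatml(&turns));
}
```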
chatml-tool
Prompt string
<|im_start|>system\n{system_message} Here are the available tools: <tools> [{tool_1}, {tool_2}] </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:\n<tool_call>\n{"arguments": <args-dict>, "name": <function-name>}\n</tool_call><|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
Example
<|im_start|>system\nYou are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> [{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"format":{"type":"string","description":"The temperature unit to use. Infer this from the users location.","enum":["celsius","fahrenheit"]}},"required":["location","format"]}}},{"type":"function","function":{"name":"predict_weather","description":"Predict the weather in 24 hours","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"format":{"type":"string","description":"The temperature unit to use. Infer this from the users location.","enum":["celsius","fahrenheit"]}},"required":["location","format"]}}}] </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:\n<tool_call>\n{"arguments": <args-dict>, "name": <function-name>}\n</tool_call><|im_end|>
<|im_start|>user
Hey! What is the weather like in Beijing?<|im_end|>
<|im_start|>assistant
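Replies in this format arrive as `<tool_call>…</tool_call>` blocks. A sketch of extracting the JSON payloads (hypothetical helper, std-only):

```rust
/// Collect the raw JSON payload of every <tool_call>...</tool_call>
/// block in a model reply.
fn extract_tool_calls(reply: &str) -> Vec<&str> {
    let mut calls = Vec::new();
    let mut rest = reply;
    while let Some(start) = rest.find("<tool_call>") {
        rest = &rest[start + "<tool_call>".len()..];
        if let Some(end) = rest.find("</tool_call>") {
            calls.push(rest[..end].trim());
            rest = &rest[end + "</tool_call>".len()..];
        } else {
            break;
        }
    }
    calls
}

fn main() {
    let reply = "<tool_call>\n{\"arguments\": {\"location\": \"Beijing\"}, \"name\": \"get_current_weather\"}\n</tool_call>";
    assert_eq!(extract_tool_calls(reply).len(), 1);
}
```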
deepseek-chat
Prompt string
User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
deepseek-chat-2
Prompt string
<|begin▁of▁sentence|>{system_message}
User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
deepseek-chat-25
Prompt string
<|begin▁of▁sentence|>{system_message}<|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end▁of▁sentence|><|User|>{user_message_2}<|Assistant|>
deepseek-coder
Prompt string
{system}
### Instruction:
{question_1}
### Response:
{answer_1}
<|EOT|>
### Instruction:
{question_2}
### Response:
embedding
Prompt string
This prompt template is used only for embedding models. It serves as a placeholder and therefore has no concrete prompt string.
functionary-31
Prompt string
<|start_header_id|>system<|end_header_id|>
Environment: ipython
Cutting Knowledge Date: December 2023
You have access to the following functions:
Use the function 'get_current_weather' to 'Get the current weather'
{"name":"get_current_weather","description":"Get the current weather","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"}},"required":["location"]}}
Think very carefully before calling functions.
If a you choose to call a function ONLY reply in the following format:
<{start_tag}={function_name}>{parameters}{end_tag}
where
start_tag => `<function`
parameters => a JSON dict with the function argument name as key and function argument value as value.
end_tag => `</function>`
Here is an example,
<function=example_function_name>{"example_name": "example_value"}</function>
Reminder:
- If looking for real time information use relevant functions before falling back to brave_search
- Function calls MUST follow the specified format, start with <function= and end with </function>
- Required parameters MUST be specified
- Only call one function at a time
- Put the entire function call reply on one line
<|eot_id|><|start_header_id|>user<|end_header_id|>
What is the weather like in Beijing today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
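The `<function=name>{json}</function>` reply format above is straightforward to recover with plain string handling. A sketch of such a parser (hypothetical helper, std-only, no JSON validation):

```rust
/// Extract (function_name, raw_json_arguments) from a reply such as
/// `<function=get_current_weather>{"location": "Beijing, CN"}</function>`.
/// Returns None if the reply does not match the expected shape.
fn parse_function_call(reply: &str) -> Option<(&str, &str)> {
    let rest = reply.trim().strip_prefix("<function=")?;
    // Function names cannot contain '>', so the first '>' ends the name.
    let (name, rest) = rest.split_once('>')?;
    let args = rest.strip_suffix("</function>")?;
    Some((name, args))
}

fn main() {
    let reply = r#"<function=get_current_weather>{"location": "Beijing, CN"}</function>"#;
    assert_eq!(
        parse_function_call(reply),
        Some(("get_current_weather", r#"{"location": "Beijing, CN"}"#))
    );
}
```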
functionary-32
Prompt string
<|start_header_id|>system<|end_header_id|>
You are capable of executing available function(s) if required.
Only execute function(s) when absolutely necessary.
Ask for the required input to:recipient==all
Use JSON for function arguments.
Respond in this format:
>>>${recipient}
${content}
Available functions:
// Supported function definitions that should be called when necessary.
namespace functions {
// Get the current weather
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
}) => any;
} // namespace functions<|eot_id|><|start_header_id|>user<|end_header_id|>
What is the weather like in Beijing today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
gemma-instruct
Prompt string
<bos><start_of_turn>user
{user_message}<end_of_turn>
<start_of_turn>model
{model_message}<end_of_turn>
Example: second-state/gemma-2-27b-it-GGUF
glm-4-chat
Prompt string
[gMASK]<|system|>
{system_message}<|user|>
{user_message_1}<|assistant|>
{assistant_message_1}
Example: second-state/glm-4-9b-chat-GGUF
human-assistant
Prompt string
Human: {input_1}\n\nAssistant:{output_1}Human: {input_2}\n\nAssistant:
intel-neural
Prompt string
### System:
{system}
### User:
{usr}
### Assistant:
llama-2-chat
Prompt string
<s>[INST] <<SYS>>
{system_message}
<</SYS>>
{user_message_1} [/INST] {assistant_message} </s><s>[INST] {user_message_2} [/INST]
llama-3-chat
Prompt string
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
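The same kind of fold applies here, with header tokens delimiting each turn. A sketch (hypothetical helper; newline placement follows the listing above):

```rust
/// Fold (role, content) turns into the Llama 3 layout above.
fn render_llama3(turns: &[(&str, &str)]) -> String {
    let mut prompt = String::from("<|begin_of_text|>");
    for (role, content) in turns {
        prompt.push_str(&format!(
            "<|start_header_id|>{role}<|end_header_id|>\n{content}<|eot_id|>"
        ));
    }
    // Open the assistant turn for the model to complete.
    prompt.push_str("<|start_header_id|>assistant<|end_header_id|>\n");
    prompt
}

fn main() {
    println!("{}", render_llama3(&[("system", "Be concise."), ("user", "Hi!")]));
}
```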
mediatek-breeze
Prompt string
<s>{system_message} [INST] {user_message_1} [/INST] {assistant_message_1} [INST] {user_message_2} [/INST]
mistral-instruct
Prompt string
<s>[INST] {user_message_1} [/INST]{assistant_message_1}</s>[INST] {user_message_2} [/INST]{assistant_message_2}</s>
mistrallite
Prompt string
<|prompter|>{user_message}</s><|assistant|>{assistant_message}</s>
Example: second-state/MistralLite-7B-GGUF
mistral-tool
Prompt string
[INST] {user_message_1} [/INST][TOOL_CALLS] [{tool_call_1}]</s>[TOOL_RESULTS]{tool_result_1}[/TOOL_RESULTS]{assistant_message_1}</s>[AVAILABLE_TOOLS] [{tool_1},{tool_2}][/AVAILABLE_TOOLS][INST] {user_message_2} [/INST]
Example
[INST] Hey! What is the weather like in Beijing and Tokyo? [/INST][TOOL_CALLS] [{"name":"get_current_weather","arguments":{"location": "Beijing, CN", "format": "celsius"}}]</s>[TOOL_RESULTS]Fine, with a chance of showers.[/TOOL_RESULTS]Today in Beijing, the weather is expected to be partly cloudy with a high chance of showers. Be prepared for possible rain and carry an umbrella if you're venturing outside. Have a great day!</s>[AVAILABLE_TOOLS] [{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}},{"type":"function","function":{"name":"predict_weather","description":"Predict the weather in 24 hours","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}][/AVAILABLE_TOOLS][INST] What is the weather like in Beijing now? [/INST]
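For the common first turn (no prior history), the prompt reduces to advertising the tools and asking. A sketch with a pre-serialized tools array (hypothetical helper, not this crate's API):

```rust
/// Build the opening turn of a mistral-tool prompt. `tools_json` is a
/// pre-serialized JSON array of tool definitions, as in the example above.
fn first_turn(tools_json: &str, user_message: &str) -> String {
    format!("[AVAILABLE_TOOLS] {tools_json}[/AVAILABLE_TOOLS][INST] {user_message} [/INST]")
}

fn main() {
    let tools = r#"[{"type":"function","function":{"name":"get_current_weather"}}]"#;
    println!("{}", first_turn(tools, "What is the weather like in Beijing now?"));
}
```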
nemotron-chat
Prompt string
<extra_id_0>System
{system_message}
<extra_id_1>User
{user_message_1}<extra_id_1>Assistant
{assistant_message_1}
<extra_id_1>User
{user_message_2}<extra_id_1>Assistant
{assistant_message_2}
<extra_id_1>User
{user_message_3}
<extra_id_1>Assistant\n
nemotron-tool
Prompt string
<extra_id_0>System
{system_message}
<tool> {tool_1} </tool>
<tool> {tool_2} </tool>
<extra_id_1>User
{user_message_1}<extra_id_1>Assistant
<toolcall> {tool_call_message} </toolcall>
<extra_id_1>Tool
{tool_result_message}
<extra_id_1>Assistant\n
octopus
Prompt string
{system_prompt}\n\nQuery: {input_text} \n\nResponse:
Example: second-state/Octopus-v2-GGUF
openchat
Prompt string
GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
Example: second-state/OpenChat-3.5-0106-GGUF
phi-2-instruct
Prompt string
Instruct: <prompt>\nOutput:
Example: second-state/phi-2-GGUF
phi-3-chat
Prompt string
<|system|>
{system_message}<|end|>
<|user|>
{user_message_1}<|end|>
<|assistant|>
{assistant_message_1}<|end|>
<|user|>
{user_message_2}<|end|>
<|assistant|>
solar-instruct
Prompt string
<s> ### User:
{user_message}
### Assistant:
{assistant_message}</s>
stablelm-zephyr
Prompt string
<|user|>
{prompt}<|endoftext|>
<|assistant|>
vicuna-1.0-chat
Prompt string
{system} USER: {prompt} ASSISTANT:
vicuna-1.1-chat
Prompt string
USER: {prompt}
ASSISTANT:
vicuna-llava
Prompt string
<system_prompt>\nUSER:<image_embeddings>\n<textual_prompt>\nASSISTANT:
wizard-coder
Prompt string
{system}
### Instruction:
{instruction}
### Response:
zephyr
Prompt string
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
Example: second-state/Zephyr-7B-Beta-GGUF