Crates.io | call-agent |
lib.rs | call-agent |
description | A multimodal chat API library with tool support, OpenAI API compatible |
repository | https://github.com/371tti/call-agent |
created_at | 2025-02-07 |
updated_at | 2025-02-20 |
Call Agent is a Rust library for handling call-related functions and integrating with AI. It supports processing user prompts, executing function calls, defining and running custom tools, and interacting via APIs.
Add the following to your `Cargo.toml` to use it as a dependency:

```toml
[dependencies]
call-agent = "1.0.0"
```
```rust
use std::sync::Arc;
// Bring OpenAIClient, ModelConfig, and your tool type into scope here;
// the exact module paths depend on the crate version.

// Create a new OpenAI client
let mut client = OpenAIClient::new(
    "https://api.openai.com/v1/",
    Some("YOUR_API_KEY"),
);

// Register the custom tool (defined below)
client.def_tool(Arc::new(TextLengthTool::new()));

// Create a model configuration
let config = ModelConfig {
    model: "gpt-4o-mini".to_string(),
    strict: None,
    max_completion_tokens: Some(1000),
    temperature: Some(0.8),
    top_p: Some(1.0),
    parallel_tool_calls: None,
    presence_penalty: Some(0.0),
    model_name: None,
    reasoning_effort: None,
};

// Set the model configuration
client.set_model_config(&config);
```
The `client.rs` module provides the following methods:

- `new(end_point: &str, api_key: Option<&str>)` → Creates a new `OpenAIClient`. Normalizes the endpoint and sets the API key.
- `def_tool<T: Tool + Send + Sync + 'static>(tool: Arc<T>)` → Registers a tool. Overwrites any existing tool with the same name.
- `list_tools()` → Returns a list of registered tools as tuples (tool name, description, enabled status).
- `switch_tool(tool_name: &str, t_enable: bool)` → Enables or disables the specified tool.
- `export_tool_def()` → Returns a list of function definitions (`FunctionDef`) for enabled tools.
- `send(model: &ModelConfig, prompt: &Vec<Message>)` → Makes an API request with the specified model and returns the response.
- `send_can_use_tool(model: &ModelConfig, prompt: &Vec<Message>)` → Makes an API request using the "auto" tool call specification.
- `send_with_tool(model: &ModelConfig, prompt: &Vec<Message>, tool_name: &str)` → Makes an API request forcing the use of a specific tool.
- `call_api(...)` → Internal method that sends a request to the endpoint and returns an `APIResult`. Serializes header information and the response body.
- `create_prompt()` → Generates an `OpenAIClientState` for prompt management.
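As a quick, hedged sketch of the tool-management calls above (assuming the `client` from the setup example and the `(name, description, enabled)` tuple shape documented for `list_tools()`):

```rust
// Inspect registered tools and their enabled status
for (name, description, enabled) in client.list_tools() {
    println!("{} [enabled: {}]: {}", name, enabled, description);
}

// Disable and re-enable a tool by name
client.switch_tool("text_length_tool", false);
client.switch_tool("text_length_tool", true);
```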
Below is an example of usage in `main.rs`. It receives user input, adds it to the prompt, and generates AI responses and tool actions in a chain.
```rust
// ...existing code in main.rs...

// Create a prompt stream
let mut prompt_stream = client.create_prompt();

// Add user input and an image message to the prompt
let prompt = vec![Message::User {
    name: Some("user".to_string()),
    content: vec![
        MessageContext::Text("Hello".to_string()),
        MessageContext::Image(MessageImage {
            url: "https://example.com/image.jpg".to_string(),
            detail: None,
        }),
    ],
}];

// Add to the prompt stream and generate a response (with tool usage)
prompt_stream.add(prompt).await;
let result = prompt_stream.generate_use_tool(&config).await;
println!("{:?}", result);

// Chat loop: get user input → add to prompt → generate response with tool usage
loop {
    // Get user input
    let mut input = String::new();
    std::io::stdin().read_line(&mut input).expect("Failed to read line");

    // Create the prompt
    let prompt = vec![Message::User {
        name: Some("user".to_string()),
        content: vec![
            MessageContext::Text(input.trim().to_string()),
        ],
    }];

    // Add to the prompt
    prompt_stream.add(prompt).await;

    // Generate a response using `generate_can_use_tool`
    let result = prompt_stream.generate_can_use_tool(None).await;
    println!("{:?}", result);

    // Optionally check the latest state of the prompt
    let response = prompt_stream.prompt.clone();
    println!("{:?}", response);
}
```
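The method list above also includes `send_with_tool` for forcing a specific tool at the client level. A hedged sketch, assuming the same async style as the calls above:

```rust
// Force the model to call a specific tool by name
let result = client
    .send_with_tool(&config, &prompt, "text_length_tool")
    .await;
println!("{:?}", result);
```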
You can define any tool by implementing the `Tool` trait from the `function` module. Below is an example of defining a tool that calculates the length of a text.
```rust
// ...existing code in main.rs...

// The README shows only the trait impl; a unit struct with a `new()`
// constructor is assumed here so the example compiles.
pub struct TextLengthTool;

impl TextLengthTool {
    pub fn new() -> Self {
        TextLengthTool
    }
}

impl Tool for TextLengthTool {
    fn def_name(&self) -> &str {
        "text_length_tool"
    }

    fn def_description(&self) -> &str {
        "Returns the length of the input text."
    }

    fn def_parameters(&self) -> serde_json::Value {
        serde_json::json!({
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "Input text to calculate its length"
                }
            },
            "required": ["text"]
        })
    }

    fn run(&self, args: serde_json::Value) -> Result<String, String> {
        let text = args["text"]
            .as_str()
            .ok_or_else(|| "Missing 'text' parameter".to_string())?;
        // Note: `len()` counts bytes; use `text.chars().count()` if you
        // want the number of characters instead.
        let length = text.len();
        Ok(serde_json::json!({ "length": length }).to_string())
    }
}
```
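Because `run` takes plain `serde_json::Value` arguments, the tool can be exercised directly, without going through the API. A minimal sketch using only the trait impl above:

```rust
fn main() {
    let tool = TextLengthTool::new();

    // Prints {"length":5}
    let args = serde_json::json!({ "text": "Hello" });
    println!("{}", tool.run(args).unwrap());

    // A missing "text" parameter surfaces as the error string from `run`
    assert!(tool.run(serde_json::json!({})).is_err());
}
```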
- `APIRequest` structure: includes the model name, messages, function definitions, function call information, temperature, maximum token count, and top_p.
- `APIResponse` structure: lets you check choices, model information, error messages, and the number of tokens used.
- `ClientError`: provides various errors such as file not found, input errors, network errors, and tool not registered. Each error has a `Display` implementation, useful for debugging and user notifications.
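A hedged sketch of surfacing such an error (assuming `send` is async and returns a `Result` whose error type is `ClientError`):

```rust
match client.send(&config, &prompt).await {
    Ok(response) => println!("{:?}", response),
    // ClientError implements Display, so it can be printed directly
    Err(err) => eprintln!("request failed: {}", err),
}
```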
Build with `cargo build` and run with `cargo run`. Use `main.rs` to interactively check AI responses and tool execution.

We welcome issue reports, improvement suggestions, and pull requests.
For detailed specifications and changes, please refer to the comments in each module.
This project is licensed under the MIT License. See the LICENSE file for details.