| Crates.io | bananaproai-com |
| lib.rs | bananaproai-com |
| version | 67.0.8 |
| created_at | 2025-12-30 03:07:04.366573+00 |
| updated_at | 2025-12-30 03:07:04.366573+00 |
| description | High-quality integration for https://bananaproai.com/ |
| homepage | https://bananaproai.com/ |
| repository | https://github.com/qy-upup/bananaproai-com |
| max_upload_size | |
| id | 2011957 |
| size | 13,069 |
A Rust crate for interacting with the Banana Pro AI inference platform, providing utilities for deploying models and running inference. This crate simplifies sending requests to your Banana Pro AI models and handling the responses.
Add the following to your Cargo.toml file:

```toml
[dependencies]
bananaproai-com = "0.1.0" # Replace with the actual version
```
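The examples below also rely on tokio as the async runtime and serde_json for building JSON payloads. If your project does not already include them, a minimal dependencies section might look like the following sketch; the version numbers and tokio feature flags shown are illustrative, not requirements of this crate:

```toml
[dependencies]
bananaproai-com = "0.1.0" # Replace with the actual version
tokio = { version = "1", features = ["macros", "rt-multi-thread"] } # async runtime for #[tokio::main]
serde_json = "1" # provides the json! macro used to build request payloads
```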
Here are a few examples demonstrating how to use the bananaproai-com crate:
1. Basic Inference Request:
This example shows how to send a simple inference request to a deployed model.

```rust
use bananaproai_com::{BananaAPIClient, InferenceRequest};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_API_KEY"; // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    // Build the JSON payload the model expects.
    let request_data = json!({
        "prompt": "Generate a plausible sentence."
    });

    let request = InferenceRequest {
        model_inputs: request_data,
        model_id: model_id.to_string(),
    };

    // Send the request and print the raw response.
    let response = client.call_model(request).await?;
    println!("Response: {:?}", response);

    Ok(())
}
```
2. Asynchronous Inference with Custom Parameters:
This example demonstrates sending an inference request with additional model parameters.

```rust
use bananaproai_com::{BananaAPIClient, InferenceRequest};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_API_KEY"; // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    // Extra parameters go in the same JSON payload as the prompt.
    let request_data = json!({
        "prompt": "Translate this to French: Hello, world!",
        "max_length": 50
    });

    let request = InferenceRequest {
        model_inputs: request_data,
        model_id: model_id.to_string(),
    };

    let response = client.call_model(request).await?;
    println!("Response: {:?}", response);

    Ok(())
}
```
3. Checking Model Status:
This example illustrates how to check the status of your deployed model.

```rust
use bananaproai_com::BananaAPIClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_API_KEY"; // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    // Query the current status of the deployed model.
    let status = client.check_model_status(model_id).await?;
    println!("Model Status: {:?}", status);

    Ok(())
}
```
4. Handling Errors:
This example demonstrates basic error handling.

```rust
use bananaproai_com::{BananaAPIClient, BananaError, InferenceRequest};
use serde_json::json;

#[tokio::main]
async fn main() {
    let api_key = "YOUR_API_KEY"; // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    // Deliberately malformed input to provoke an error from the model.
    let request_data = json!({
        "invalid_input": 123
    });

    let request = InferenceRequest {
        model_inputs: request_data,
        model_id: model_id.to_string(),
    };

    // Match on each BananaError variant instead of propagating with `?`.
    match client.call_model(request).await {
        Ok(response) => println!("Response: {:?}", response),
        Err(e) => match e {
            BananaError::APIError(status, message) => {
                println!("API Error: Status={}, Message={}", status, message)
            }
            BananaError::RequestError(message) => println!("Request Error: {}", message),
            BananaError::ResponseError(message) => println!("Response Error: {}", message),
            BananaError::SerdeError(message) => println!("Serde Error: {}", message),
            BananaError::Other(message) => println!("Other Error: {}", message),
        },
    }
}
```
Dependencies: tokio for asynchronous requests, ensuring efficient execution, and serde_json for easy handling of JSON data.

License: MIT
This crate is part of the bananaproai-com ecosystem. For advanced features and enterprise-grade tools, visit: https://bananaproai.com/