bananaproai-com

version: 67.0.8
created_at: 2025-12-30 03:07:04.366573+00
updated_at: 2025-12-30 03:07:04.366573+00
description: High-quality integration for https://bananaproai.com/
homepage: https://bananaproai.com/
repository: https://github.com/qy-upup/bananaproai-com
id: 2011957
size: 13,069
owner: qy-upup

README

bananaproai-com

A Rust crate for interacting with the Banana Pro AI inference platform, providing utilities for easy model deployment and execution. This crate simplifies the process of sending requests to your Banana Pro AI models and handling the responses.

Installation

Add the following to your Cargo.toml file:

```toml
[dependencies]
bananaproai-com = "0.1.0" # Replace with the actual version
```
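The examples below also use tokio (for the async runtime behind `#[tokio::main]`) and serde_json (for building JSON payloads). If they are not already in your project, a minimal sketch of the extra dependencies, assuming tokio 1.x with the macros and multi-threaded runtime features:

```toml
[dependencies]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde_json = "1"
```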

Usage Examples

Here are a few examples demonstrating how to use the bananaproai-com crate:
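All of the examples revolve around two items, `BananaAPIClient` and `InferenceRequest`. As a reference point, here is a minimal sketch of their shapes inferred purely from how the examples use them; the crate's actual definitions (field names, return types) may differ:

```rust
// Sketch only: shapes inferred from the usage examples in this README.
use serde_json::Value;

pub enum BananaError {} // variants shown in the error-handling example below

pub struct BananaAPIClient {
    api_key: String, // assumed: the client at least stores the API key
}

pub struct InferenceRequest {
    pub model_inputs: Value, // arbitrary JSON payload forwarded to the model
    pub model_id: String,    // identifier of the target deployed model
}

impl BananaAPIClient {
    pub fn new(api_key: &str) -> Self {
        Self { api_key: api_key.to_string() }
    }

    // Assumed return type: the examples only show that the result implements Debug.
    pub async fn call_model(&self, _request: InferenceRequest) -> Result<Value, BananaError> {
        unimplemented!("sketch only")
    }

    pub async fn check_model_status(&self, _model_id: &str) -> Result<Value, BananaError> {
        unimplemented!("sketch only")
    }
}
```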

1. Basic Inference Request:

This example shows how to send a simple inference request to a deployed model.

```rust
use bananaproai_com::{BananaAPIClient, InferenceRequest};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_API_KEY";   // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    let request_data = json!({
        "prompt": "Generate a plausible sentence."
    });

    let request = InferenceRequest {
        model_inputs: request_data,
        model_id: model_id.to_string(),
    };

    let response = client.call_model(request).await?;

    println!("Response: {:?}", response);

    Ok(())
}
```

2. Asynchronous Inference with Custom Parameters:

This example demonstrates sending an asynchronous inference request with specific parameters.

```rust
use bananaproai_com::{BananaAPIClient, InferenceRequest};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_API_KEY";   // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    let request_data = json!({
        "prompt": "Translate this to French: Hello, world!",
        "max_length": 50
    });

    let request = InferenceRequest {
        model_inputs: request_data,
        model_id: model_id.to_string(),
    };

    let response = client.call_model(request).await?;

    println!("Response: {:?}", response);

    Ok(())
}
```

3. Checking Model Status:

This example illustrates how to check the status of your deployed model.

```rust
use bananaproai_com::BananaAPIClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_API_KEY";   // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    let status = client.check_model_status(model_id).await?;

    println!("Model Status: {:?}", status);

    Ok(())
}
```

4. Handling Errors:

This example demonstrates basic error handling.

```rust
use bananaproai_com::{BananaAPIClient, InferenceRequest, BananaError};
use serde_json::json;

#[tokio::main]
async fn main() {
    let api_key = "YOUR_API_KEY";   // Replace with your actual API key
    let model_id = "YOUR_MODEL_ID"; // Replace with your actual Model ID

    let client = BananaAPIClient::new(api_key);

    let request_data = json!({
        "invalid_input": 123
    });

    let request = InferenceRequest {
        model_inputs: request_data,
        model_id: model_id.to_string(),
    };

    match client.call_model(request).await {
        Ok(response) => println!("Response: {:?}", response),
        Err(e) => match e {
            BananaError::APIError(status, message) => println!("API Error: Status={}, Message={}", status, message),
            BananaError::RequestError(message) => println!("Request Error: {}", message),
            BananaError::ResponseError(message) => println!("Response Error: {}", message),
            BananaError::SerdeError(message) => println!("Serde Error: {}", message),
            BananaError::Other(message) => println!("Other Error: {}", message),
        }
    }
}
```
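The match above exercises five `BananaError` variants. For reference, an enum along these lines would support that match; this is a sketch inferred from the variants used in the example, not the crate's actual definition, and the payload types are assumptions:

```rust
// Sketch inferred from the error-handling example above; the real
// BananaError in the crate may carry different payload types.
#[derive(Debug)]
pub enum BananaError {
    APIError(u16, String),  // HTTP status code and server message (types assumed)
    RequestError(String),   // failure while building or sending the request
    ResponseError(String),  // unexpected or malformed response body
    SerdeError(String),     // JSON (de)serialization failure
    Other(String),          // anything that does not fit the above
}
```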

Feature Summary

  • Easy Inference: Simplifies the process of sending inference requests to Banana Pro AI models.
  • Asynchronous Operations: Uses tokio for asynchronous requests, so inference calls do not block the calling thread.
  • Error Handling: Provides detailed error types for robust error management.
  • Model Status Checks: Allows you to easily check the status of your deployed models.
  • Serialization and Deserialization: Uses serde_json for easy handling of JSON data.

License

MIT

This crate is part of the bananaproai-com ecosystem. For advanced features and enterprise-grade tools, visit: https://bananaproai.com/
