| Crates.io | paddle-inference-rs |
| lib.rs | paddle-inference-rs |
| version | 0.1.0 |
| created_at | 2025-08-29 05:27:00.458064+00 |
| updated_at | 2025-08-29 05:27:00.458064+00 |
| description | Rust bindings for PaddlePaddle inference library |
| homepage | |
| repository | https://github.com/your-username/paddle-inference-rs |
| max_upload_size | |
| id | 1815392 |
| size | 142,005 |
Rust bindings for the PaddlePaddle inference library, providing safe and ergonomic access to PaddlePaddle's C API for deep learning inference.
Add this to your `Cargo.toml`:

```toml
[dependencies]
paddle-inference-rs = "0.1.0"
```
You need to have the PaddlePaddle inference library installed. The crate expects the following directory structure:

```text
paddle/
├── include/
│   ├── pd_common.h
│   ├── pd_config.h
│   ├── pd_inference_api.h
│   ├── pd_predictor.h
│   ├── pd_tensor.h
│   ├── pd_types.h
│   └── pd_utils.h
└── lib/
    ├── paddle_inference_c.dll   (Windows)
    ├── paddle_inference_c.so    (Linux)
    └── paddle_inference_c.dylib (macOS)
```
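If the library lives outside a standard linker search path, you may also need to tell Cargo where to find it. The snippet below is only a minimal build-script sketch of that mechanism, assuming a hypothetical `PADDLE_LIB_DIR` environment variable pointing at the `paddle/lib` directory shown above; the crate's own build script may use a different variable or discovery method.

```rust
// build.rs (sketch): point Cargo at the PaddlePaddle C library.
// Assumption: PADDLE_LIB_DIR is a hypothetical variable naming the paddle/lib
// directory; check the crate's actual build script for the variable it reads.
fn main() {
    println!("cargo:rerun-if-env-changed=PADDLE_LIB_DIR");
    if let Ok(dir) = std::env::var("PADDLE_LIB_DIR") {
        // Add the directory to the native library search path.
        println!("cargo:rustc-link-search=native={dir}");
    }
    // Link dynamically against paddle_inference_c (.so/.dylib/.dll above).
    println!("cargo:rustc-link-lib=dylib=paddle_inference_c");
}
```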
A basic end-to-end inference example:

```rust
use paddle_inference_rs::{Config, Predictor, PrecisionType, PlaceType};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create configuration
    let config = Config::new()?
        .set_model("model_dir", "model_file", "params_file")?
        .enable_gpu(0)?
        .set_precision(PrecisionType::Float32)?
        .enable_memory_optim()?;

    // Create predictor
    let predictor = Predictor::create(config)?;

    // Get input and output names
    let input_names = predictor.get_input_names();
    let output_names = predictor.get_output_names();

    // Prepare input data
    let mut input_tensor = predictor.get_input_handle(&input_names[0])?;
    input_tensor.reshape(&[1, 3, 224, 224])?;

    // Copy data to the tensor (example with zero-filled data)
    let input_data = vec![0.0f32; 1 * 3 * 224 * 224];
    input_tensor.copy_from_cpu(&input_data)?;

    // Run inference
    predictor.run()?;

    // Get output
    let output_tensor = predictor.get_output_handle(&output_names[0])?;
    let output_shape = output_tensor.get_shape()?;
    // Allocate one element per entry of the output shape
    let mut output_data = vec![0.0f32; output_shape.iter().map(|&d| d as usize).product()];
    output_tensor.copy_to_cpu(&mut output_data)?;

    println!("Inference completed successfully!");
    println!("Output shape: {:?}", output_shape);
    println!("Output data (first 5 values): {:?}", &output_data[0..5]);

    Ok(())
}
```
For async applications, the blocking inference call can be moved onto Tokio's blocking thread pool with `tokio::task::spawn_blocking`:

```rust
use paddle_inference_rs::{Config, Predictor};
use tokio::task;

async fn async_inference() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::new()?
        .set_model("model_dir", "model_file", "params_file")?
        .enable_gpu(0)?;

    // Run inference on a dedicated blocking thread. spawn_blocking requires the
    // closure's return value to be Send, hence the Send + Sync error type.
    task::spawn_blocking(move || -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        let predictor = Predictor::create(config)?;
        predictor.run()?;
        Ok(())
    })
    .await??;

    Ok(())
}
```
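The async example assumes it is driven from a Tokio runtime. A minimal caller, assuming tokio's `macros` and `rt-multi-thread` features are enabled in your own `Cargo.toml`, could look like this:

```rust
// Drives async_inference() from a multi-threaded Tokio runtime.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    async_inference().await
}
```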
To build from source:

```bash
git clone https://github.com/your-username/paddle-inference-rs.git
cd paddle-inference-rs
cargo build --release
```

To build with the optional `gen` feature enabled:

```bash
cargo build --features gen
```
The bindings add minimal overhead on top of PaddlePaddle's C API, so inference performance is close to native.
Contributions are welcome! Please feel free to submit a Pull Request.
1. Create your feature branch (`git checkout -b feature/amazing-feature`)
2. Commit your changes (`git commit -m 'Add some amazing feature'`)
3. Push the branch (`git push origin feature/amazing-feature`)
4. Open a Pull Request

This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions, please open an issue on the GitHub repository.
| paddle-inference-rs | PaddlePaddle | Rust |
|---|---|---|
| 0.1.x | 2.4+ | 1.65+ |
Made with ❤️ for the Rust and AI communities