| Crates.io | openai_rust_sdk |
| lib.rs | openai_rust_sdk |
| version | 1.3.0 |
| created_at | 2025-08-23 18:58:52.507994+00 |
| updated_at | 2025-10-10 18:05:11.165938+00 |
| description | Comprehensive OpenAI API SDK for Rust with YARA rule validation |
| homepage | |
| repository | https://github.com/threatflux/openai_rust_sdk |
| max_upload_size | |
| id | 1807789 |
| size | 2,893,822 |
A comprehensive Rust SDK for the OpenAI API with integrated YARA-X rule validation testing. This library provides complete access to all OpenAI APIs including Chat, Assistants, Batch processing, and more, with special capabilities for testing AI models' ability to generate valid YARA rules.
Developed by Wyatt Roersma and Claude Code.
✅ Complete OpenAI API Support
✅ YARA-X Integration
✅ Testing Framework
Add to your Cargo.toml:
[dependencies]
openai_rust_sdk = "1.3.0"
export OPENAI_API_KEY=your_api_key_here
# Optional: target a non-default OpenAI-compatible endpoint (e.g., proxy or hosted variant)
export OPENAI_BASE_URL=https://my-openai-proxy.example.com/v1
With both variables set, OpenAIClient::from_env() will authenticate with your key and route
requests through the alternate base URL automatically.
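For example, a minimal bootstrap might look like this (a sketch, assuming a tokio async runtime and that from_env returns a Result):

use openai_rust_sdk::OpenAIClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Picks up OPENAI_API_KEY, and OPENAI_BASE_URL if it is set
    let client = OpenAIClient::from_env()?;
    // `client` is now ready for the API calls shown below
    let _ = client;
    Ok(())
}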
cargo run -- generate-batch basic output.jsonl
cargo run -- validate-rule rule.yar
cargo run -- run-tests
use openai_rust_sdk::testing::{
    batch_generator::BatchJobGenerator,
    yara_validator::YaraValidator,
};

fn main() {
    // Generate a batch job file for the "basic" test suite
    let generator = BatchJobGenerator::new(Some("gpt-5-nano".to_string()));
    let batch_file = std::path::Path::new("test_batch.jsonl");
    generator.generate_test_suite(batch_file, "basic").unwrap();

    // Validate a YARA rule
    let rule = r#"
        rule DetectMalware {
            strings:
                $a = "malware"
            condition:
                $a
        }
    "#;

    let validator = YaraValidator::new();
    let result = validator.validate_rule(rule).unwrap();

    if result.is_valid {
        println!("✓ Rule is valid!");
    }
}
The SDK now ships with a native client for the /v1/responses endpoints. You can access
the full feature set, including conversations, background execution, and structured
outputs, through the new builder and client helpers:
use futures::StreamExt;
use openai_rust_sdk::{
    CreateResponseRequest, OpenAIClient,
    ResponsesApiServiceTier as ServiceTier,
};

# tokio_test::block_on(async {
let client = OpenAIClient::new(std::env::var("OPENAI_API_KEY")?)?;

let request = CreateResponseRequest::new_text("gpt-4o-mini", "Summarize Rust ownership")
    .with_service_tier(ServiceTier::Auto)
    .with_store(true);

let response = client.create_response_v2(&request).await?;
println!("Summary: {}", response.output_text());

// Stream events with strong typing
let mut stream = client.stream_response_v2(&request).await?;
while let Some(event) = stream.next().await {
    match event? {
        openai_rust_sdk::ResponseStreamEvent::OutputTextDelta { delta, .. } => {
            print!("{}", delta);
        }
        openai_rust_sdk::ResponseStreamEvent::ResponseCompleted { .. } => println!("\nDone!"),
        _ => {}
    }
}
# Ok::<(), Box<dyn std::error::Error>>(())
# })?;
Compatibility helpers such as generate_text, create_chat_completion, and
create_custom_response automatically route through the Responses API to maintain
the existing interface while unlocking new functionality.
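For instance, a hedged sketch of one such helper (the exact signature of generate_text is an assumption; the call shape mirrors the examples above):

# tokio_test::block_on(async {
use openai_rust_sdk::OpenAIClient;

let client = OpenAIClient::new(std::env::var("OPENAI_API_KEY")?)?;
// Assumed signature: model name plus prompt, returning the generated text;
// internally this routes through /v1/responses rather than /v1/chat/completions
let text = client.generate_text("gpt-4o-mini", "Explain Rust lifetimes in one sentence").await?;
println!("{}", text);
# Ok::<(), Box<dyn std::error::Error>>(())
# })?;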
# tokio_test::block_on(async {
use openai_rust_sdk::{from_env, CreateResponseRequest};

// Set OPENAI_API_KEY and optionally OPENAI_BASE_URL before running.
let client = from_env()?;
let response = client
    .create_response_v2(&CreateResponseRequest::new_text(
        "gpt-4o-mini",
        "Send this request through my proxy",
    ))
    .await?;
println!("{}", response.output_text());
# Ok::<(), Box<dyn std::error::Error>>(())
# })?;
When OPENAI_BASE_URL is supplied, the client automatically routes requests through that
endpoint instead of the default https://api.openai.com.
The SDK includes three test suites of increasing complexity: basic, malware, and comprehensive.
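A minimal sketch of driving all three suites from code (suite names taken from the CLI commands below; error handling elided):

use openai_rust_sdk::testing::batch_generator::BatchJobGenerator;

let generator = BatchJobGenerator::new(Some("gpt-5-nano".to_string()));
for suite in ["basic", "malware", "comprehensive"] {
    // Writes basic_batch.jsonl, malware_batch.jsonl, and comprehensive_batch.jsonl
    let path = format!("{suite}_batch.jsonl");
    generator
        .generate_test_suite(std::path::Path::new(&path), suite)
        .expect("failed to generate batch file");
}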
openai_rust_sdk/
├── src/
│ ├── lib.rs # Library entry point
│ ├── main.rs # CLI application
│ └── testing/
│ ├── mod.rs # Testing module exports
│ ├── yara_validator.rs # YARA-X validation
│ ├── test_cases.rs # Built-in test cases
│ └── batch_generator.rs # Batch job generation
├── examples/
│ └── full_integration.rs # Complete usage example
├── test_data/
│ ├── yara_x_questions.jsonl # Sample questions
│ └── simple_batch.jsonl # Basic test batch
└── tests/
└── integration_test.rs # Integration tests
# Validate a single YARA rule
cargo run -- validate-rule path/to/rule.yar
# Run the built-in test suite
cargo run -- run-tests
# Generate batch job for basic testing
cargo run -- generate-batch basic output.jsonl
# Generate batch job for malware detection
cargo run -- generate-batch malware output.jsonl
# Generate comprehensive test batch
cargo run -- generate-batch comprehensive output.jsonl
The SDK is configured to use gpt-5-nano for testing, which provides fast and cost-effective rule generation. Example batch request:
{
  "custom_id": "yara_001",
  "method": "POST",
  "url": "/v1/chat/completions",
  "body": {
    "model": "gpt-5-nano",
    "messages": [
      {
        "role": "system",
        "content": "You are an expert YARA rule developer."
      },
      {
        "role": "user",
        "content": "Create a YARA rule to detect UPX-packed PE files."
      }
    ],
    "max_tokens": 1000,
    "temperature": 0.3
  }
}
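Since the batch file is JSON Lines, each line parses as a standalone JSON object. A minimal sketch using the serde_json crate (an assumed extra dependency, not part of this SDK):

use std::io::BufRead;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = std::fs::File::open("output.jsonl")?;
    for line in std::io::BufReader::new(file).lines() {
        // Each line is one complete batch request
        let request: serde_json::Value = serde_json::from_str(&line?)?;
        // The custom_id lets you match batch results back to requests later
        println!("{} -> {}", request["custom_id"], request["url"]);
    }
    Ok(())
}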
The validator provides comprehensive metrics for each rule it checks.
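A minimal sketch of inspecting a result (only is_valid appears in the quick-start above; Debug formatting of the full result is an assumption about the type):

use openai_rust_sdk::testing::yara_validator::YaraValidator;

let validator = YaraValidator::new();
let result = validator
    .validate_rule(r#"rule Example { condition: true }"#)
    .unwrap();

println!("valid: {}", result.is_valid);
// Dump the full metrics struct, assuming it derives Debug
println!("{result:?}");

To build, test, and lint the project locally: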
cargo build --release
cargo test
cargo fmt
cargo clippy -- -D warnings
MIT
Contributions are welcome! Please ensure all tests pass and code is properly formatted before submitting PRs.