Crates.io | oxyde |
lib.rs | oxyde |
version | 0.1.4 |
created_at | 2025-08-11 04:04:09.755192+00 |
updated_at | 2025-09-18 12:43:54.872478+00 |
description | AI Agent SDK for Game NPCs |
homepage | |
repository | https://github.com/Oxyde-Labs/Oxyde |
max_upload_size | |
id | 1789537 |
size | 1,003,708 |
Oxyde is a Rust-based SDK for creating autonomous, goal-driven NPCs with emotional intelligence. Build NPCs that pursue their own objectives, adapt to player interactions, and generate emergent storylines in real time.
Experience goal-driven NPCs with emotional intelligence in your browser:
cd examples/rpg_demo
cargo run
# Visit http://localhost:5000
Simple conversational interface for testing:
cargo run --example standalone_demo
Lightweight version without web interface:
rustc -o rpg_demo_standalone rpg_demo_standalone.rs
./rpg_demo_standalone
git clone <repository-url>
cd oxyde-ai-sdk
# Set up API keys (choose one or more):
export OPENAI_API_KEY="your-openai-key" # General purpose AI
export ANTHROPIC_API_KEY="your-anthropic-key" # Advanced reasoning
export GROQ_API_KEY="your-groq-key" # Fast inference
export XAI_API_KEY="your-xai-key" # Creative dialogue
export PERPLEXITY_API_KEY="your-perplexity-key" # Real-time knowledge
cargo build
cd examples/rpg_demo
cargo run
# Open http://localhost:5000 in your browser
Add Oxyde to your Rust project:
[dependencies]
oxyde = "0.1.4"
# or, for a local checkout:
oxyde = { path = "path/to/oxyde-ai-sdk" }
oxyde/
├── src/
│ ├── agent.rs # Goal-driven agent with emotional intelligence
│ ├── inference.rs # Multi-LLM provider abstraction
│ ├── memory.rs # Vector embeddings + emotional context
│ └── oxyde_game/ # Game integration modules
│ ├── behavior.rs # Autonomous NPC behaviors
│ ├── intent.rs # Player intent detection
│ └── bindings/ # Engine integration layers
└── examples/
└── rpg_demo/ # Full-featured web demo
├── emotion_engine.rs # 6D emotional tracking
├── goal_system.rs # Autonomous goal management
├── llm_service.rs # Smart provider selection
└── web_server.rs # Interactive web interface
The Oxyde SDK is designed for easy extensibility. Here's how to add a new LLM provider:
// In examples/rpg_demo/src/llm_service.rs
#[derive(Debug, Clone, PartialEq)]
pub enum LLMProvider {
OpenAI,
Anthropic,
Groq,
XAI,
Perplexity,
YourNewProvider, // Add your provider here
Local,
}
// Add API endpoint and model configuration
impl LLMProvider {
pub fn api_endpoint(&self) -> &'static str {
match self {
LLMProvider::OpenAI => "https://api.openai.com/v1/chat/completions",
LLMProvider::Anthropic => "https://api.anthropic.com/v1/messages",
LLMProvider::Groq => "https://api.groq.com/openai/v1/chat/completions",
LLMProvider::XAI => "https://api.x.ai/v1/chat/completions",
LLMProvider::Perplexity => "https://api.perplexity.ai/chat/completions",
LLMProvider::YourNewProvider => "https://api.yourprovider.com/v1/chat",
LLMProvider::Local => "http://localhost:8080/chat/completions",
}
}
pub fn default_model(&self) -> &'static str {
match self {
LLMProvider::OpenAI => "gpt-4o",
LLMProvider::Anthropic => "claude-3-5-sonnet-20241022",
LLMProvider::Groq => "llama-3.1-8b-instant",
LLMProvider::XAI => "grok-2-1212",
LLMProvider::Perplexity => "llama-3.1-sonar-small-128k-online",
LLMProvider::YourNewProvider => "your-model-name",
LLMProvider::Local => "local-model",
}
}
}
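The pattern above, a plain enum plus match-based accessors, is worth exercising standalone: because every `match` over the enum must be exhaustive, adding a new variant makes the compiler point at every configuration site that still needs a case. The two-variant `Provider` below is an illustrative stand-in, not the SDK's actual type:

```rust
// Illustrative stand-in for the enum-plus-match provider pattern:
// all per-provider configuration lives in exhaustive matches, so a
// new variant is a compile-checked change rather than a runtime one.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Provider {
    OpenAI,
    Local,
}

impl Provider {
    fn api_endpoint(&self) -> &'static str {
        match self {
            Provider::OpenAI => "https://api.openai.com/v1/chat/completions",
            Provider::Local => "http://localhost:8080/chat/completions",
        }
    }

    fn default_model(&self) -> &'static str {
        match self {
            Provider::OpenAI => "gpt-4o",
            Provider::Local => "local-model",
        }
    }
}

fn main() {
    let p = Provider::OpenAI;
    // Configuration is looked up through the accessors, never duplicated.
    println!("{} via {}", p.default_model(), p.api_endpoint());
}
```

Adding a third variant here without updating both matches is a compile error, which is exactly the safety net the extension steps above rely on.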
// Add provider-specific request handling in the LLMService impl.
// Assumes `self.client` is a reqwest::Client, with `std::env` and
// serde_json's `json!` macro in scope.
async fn your_provider_request(
&self,
messages: Vec<serde_json::Value>,
system_prompt: &str,
) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
let api_key = env::var("YOUR_PROVIDER_API_KEY")
.map_err(|_| "YOUR_PROVIDER_API_KEY not found")?;
let request_body = json!({
"model": self.provider.default_model(),
"messages": messages,
"max_tokens": 1000,
"temperature": 0.7
});
let response = self.client
.post(self.provider.api_endpoint())
.header("Authorization", format!("Bearer {}", api_key))
.header("Content-Type", "application/json")
.json(&request_body)
.send()
.await?;
// Parse response based on your provider's format
let response_json: serde_json::Value = response.json().await?;
let content = response_json["choices"][0]["message"]["content"]
.as_str()
.unwrap_or("No response")
.to_string();
Ok(content)
}
// In generate_goal_driven_response method, add your provider case
match self.provider {
LLMProvider::OpenAI => self.openai_goal_driven_request(messages, system_prompt, goal_context).await,
LLMProvider::Anthropic => self.anthropic_goal_driven_request(messages, system_prompt, goal_context).await,
LLMProvider::Groq => self.groq_goal_driven_request(messages, system_prompt, goal_context).await,
LLMProvider::XAI => self.xai_goal_driven_request(messages, system_prompt, goal_context).await,
LLMProvider::Perplexity => self.perplexity_goal_driven_request(messages, system_prompt, goal_context).await,
LLMProvider::YourNewProvider => self.your_provider_request(messages, system_prompt).await,
LLMProvider::Local => self.local_request(messages, system_prompt).await,
}
// In select_optimal_provider function
pub fn select_optimal_provider(context: &str) -> LLMProvider {
// Add your provider's specialty
if context.contains("your_specialty_keyword") {
if env::var("YOUR_PROVIDER_API_KEY").is_ok() {
return LLMProvider::YourNewProvider;
}
}
// ... existing routing logic ...
// Add to fallback chain
if env::var("YOUR_PROVIDER_API_KEY").is_ok() {
LLMProvider::YourNewProvider
} else {
LLMProvider::Local
}
}
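The routing idea above, specialty keywords first, then a key-availability fallback chain, can be sketched in a self-contained form. The enum variants, the `"fast"` keyword, and the `FAKE_PROVIDER_API_KEY` variable below are illustrative assumptions, not names the SDK actually uses:

```rust
use std::env;

#[derive(Debug, PartialEq)]
enum Provider {
    Fast,
    General,
    Local,
}

// Mirrors select_optimal_provider: route on a context keyword first,
// then fall back based on whether any API key is configured.
fn select_provider(context: &str, key_present: bool) -> Provider {
    // Specialty routing: only pick the specialist if its key exists.
    if context.contains("fast") && key_present {
        return Provider::Fast;
    }
    // Fallback chain: any configured key beats the local model.
    if key_present {
        Provider::General
    } else {
        Provider::Local
    }
}

fn main() {
    // Read the (hypothetical) key once, then route deterministically.
    let key_present = env::var("FAKE_PROVIDER_API_KEY").is_ok();
    let chosen = select_provider("need a fast reply", key_present);
    println!("routing to {chosen:?}");
}
```

Keeping the environment lookup at the edge and passing a plain `bool` into the routing function makes the selection logic trivially testable without mutating process state.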
export YOUR_PROVIDER_API_KEY="your-api-key-here"
Most modern LLM providers use OpenAI-compatible formats. Simply update the endpoint and authentication:
LLMProvider::YourProvider => "https://api.yourprovider.com/v1/chat/completions",
For providers with unique formats (like Anthropic), implement custom request/response handling:
// Custom request format for unique providers
let request_body = json!({
"model": "your-model",
"messages": transform_messages_for_provider(messages),
"custom_param": "value"
});
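One common transform, sketched with plain `(role, content)` tuples instead of `serde_json` values to stay self-contained: Anthropic-style APIs take the system prompt as a separate top-level field, so `system` entries must be split out of an OpenAI-style message list. A helper like the hypothetical `transform_messages_for_provider` above could look roughly like:

```rust
// Rough sketch of a message transform for providers (like Anthropic)
// that want the system prompt as a separate field rather than a
// "system" role inside the message list.
fn split_system_messages(
    messages: Vec<(String, String)>,
) -> (String, Vec<(String, String)>) {
    let mut system = String::new();
    let mut rest = Vec::new();
    for (role, content) in messages {
        if role == "system" {
            // Concatenate multiple system entries into one prompt.
            if !system.is_empty() {
                system.push('\n');
            }
            system.push_str(&content);
        } else {
            rest.push((role, content));
        }
    }
    (system, rest)
}

fn main() {
    let msgs = vec![
        ("system".to_string(), "You are a blacksmith NPC.".to_string()),
        ("user".to_string(), "Hello!".to_string()),
    ];
    let (system, rest) = split_system_messages(msgs);
    println!("system: {system}; {} chat message(s) remain", rest.len());
}
```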
export YOUR_PROVIDER_API_KEY="test-key"
Component | Status | Description |
---|---|---|
Goal-Driven AI System | ✅ Complete | Autonomous NPCs with personal objectives and motivation tracking |
Emotional Intelligence Engine | ✅ Complete | 6-dimensional emotional tracking with personality evolution |
Multi-LLM Architecture | ✅ Complete | OpenAI + Anthropic + Groq + xAI + Perplexity integration with intelligent provider selection |
Emergent Storytelling | ✅ Complete | Dynamic story event generation from NPC goal progress |
Web RPG Demo | ✅ Complete | Interactive browser-based demo with real-time AI |
Memory System | ✅ Complete | Vector embeddings with emotional context integration |
Agent System | ✅ Complete | Core agent implementation with advanced state management |
Behavior System | ✅ Complete | Autonomous behaviors (dialogue, goal pursuit, adaptation) |
Engine Bindings | ⚠️ Partial | Unity and WASM bindings available, Unreal in development |
Oxyde aims to move game AI beyond scripted responses toward truly autonomous NPCs that pursue their own goals, remember past interactions, and evolve emotionally. This creates gameplay experiences that are genuinely unpredictable and personally meaningful to each player.