| | |
|---|---|
| Crates.io | coolrouter-cpi |
| lib.rs | coolrouter-cpi |
| version | 0.1.5 |
| created_at | 2025-11-16 00:27:35.164037+00 |
| updated_at | 2025-11-16 11:05:17.35533+00 |
| description | CPI client for CoolRouter - Solana LLM inference router program |
| homepage | https://github.com/yourusername/coolrouter-cpi |
| repository | https://github.com/yourusername/coolrouter-cpi |
| max_upload_size | |
| id | 1935031 |
| size | 48,069 |
A Rust client library for Cross-Program Invocation (CPI) calls to CoolRouter on Solana.
CoolRouter is a Solana program that routes LLM inference requests. This crate provides a clean, type-safe interface for making CPI calls to CoolRouter from your Anchor programs.
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
coolrouter-cpi = "0.1.5"
```
## Quick Start

```rust
use coolrouter_cpi::{create_llm_request, Message};

// In your instruction handler
create_llm_request(
    ctx.accounts.request_pda.to_account_info(),
    ctx.accounts.authority.to_account_info(),
    ctx.accounts.caller_program.to_account_info(),
    ctx.accounts.system_program.to_account_info(),
    ctx.accounts.coolrouter_program.key(),
    vec![ctx.accounts.callback_account.to_account_info()],
    "request_123".to_string(),
    "openai".to_string(),
    "gpt-4".to_string(),
    vec![Message {
        role: "user".to_string(),
        content: "Hello, AI!".to_string(),
    }],
)?;
```
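Messages follow an OpenAI-style role/content shape. Here is a minimal sketch of assembling a multi-turn conversation; the local `Message` struct below only mirrors the crate's two fields so the sketch stands alone — in a real program, use `coolrouter_cpi::Message` directly:

```rust
// Hypothetical stand-in for coolrouter_cpi::Message (role + content),
// defined locally so this sketch compiles without the crate.
#[derive(Debug, Clone, PartialEq)]
pub struct Message {
    pub role: String,
    pub content: String,
}

/// Build a two-turn conversation: a system prompt followed by the user turn.
pub fn build_messages(prompt: &str) -> Vec<Message> {
    vec![
        Message {
            role: "system".to_string(),
            content: "You are a helpful assistant.".to_string(),
        },
        Message {
            role: "user".to_string(),
            content: prompt.to_string(),
        },
    ]
}
```

The resulting `Vec<Message>` is what the `messages` parameter of `create_llm_request` expects.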
For more control, use the builder:
```rust
use coolrouter_cpi::CoolRouterCPI;

CoolRouterCPI::new(
    request_pda,
    authority,
    caller_program,
    system_program,
    coolrouter_program_id,
)
.add_callback_account(callback_account)
.create_request(
    request_id,
    provider,
    model_id,
    messages,
)?;
```
## Full Example

```rust
use anchor_lang::prelude::*;
use coolrouter_cpi::{create_llm_request, Message};

#[program]
pub mod my_program {
    use super::*;

    pub fn ask_llm(
        ctx: Context<AskLLM>,
        request_id: String,
        prompt: String,
    ) -> Result<()> {
        // Prepare the message
        let messages = vec![Message {
            role: "user".to_string(),
            content: prompt,
        }];

        // Call CoolRouter
        create_llm_request(
            ctx.accounts.request_pda.to_account_info(),
            ctx.accounts.authority.to_account_info(),
            ctx.accounts.my_program.to_account_info(),
            ctx.accounts.system_program.to_account_info(),
            ctx.accounts.coolrouter_program.key(),
            vec![ctx.accounts.response_storage.to_account_info()],
            request_id,
            "openai".to_string(),
            "gpt-4".to_string(),
            messages,
        )?;

        Ok(())
    }
}

#[derive(Accounts)]
pub struct AskLLM<'info> {
    #[account(mut)]
    pub authority: Signer<'info>,

    /// CHECK: PDA for the request in CoolRouter
    #[account(mut)]
    pub request_pda: AccountInfo<'info>,

    /// CHECK: This program's ID
    pub my_program: AccountInfo<'info>,

    /// CHECK: The CoolRouter program
    pub coolrouter_program: AccountInfo<'info>,

    /// Account where response will be stored
    #[account(mut)]
    pub response_storage: Account<'info, ResponseStorage>,

    pub system_program: Program<'info, System>,
}
```
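The accounts struct above references a `ResponseStorage` type it never defines; its layout is up to your program, not the crate. A minimal sketch of the fields such an account might hold (every name here is an assumption; in a real Anchor program this would be an `#[account]` struct sized for rent exemption):

```rust
// Hypothetical response-storage layout; in a real Anchor program this
// struct would carry #[account] and live in your program's state module.
#[derive(Debug, Default)]
pub struct ResponseStorage {
    pub request_id: String, // id that was passed to create_llm_request
    pub response: String,   // completion written back by the router's callback
    pub fulfilled: bool,    // true once a response has been delivered
}

/// Record a delivered response into the storage account's data.
pub fn record_response(store: &mut ResponseStorage, id: &str, text: &str) {
    store.request_id = id.to_string();
    store.response = text.to_string();
    store.fulfilled = true;
}
```

Your callback handler can then check `fulfilled` before reading `response`.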
This example calls `create_llm_request` with the user-supplied prompt.

## License

Licensed under either of:

at your option.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.