| Crates.io | solana-llm-oracle |
| lib.rs | solana-llm-oracle |
| version | 0.1.0 |
| created_at | 2025-12-18 15:04:46.077629+00 |
| updated_at | 2025-12-18 15:04:46.077629+00 |
| description | LLM oracle for inference in solana programs |
| homepage | |
| repository | https://github.com/GauravBurande/solana-llm-oracle |
| max_upload_size | |
| id | 1992583 |
| size | 79,194 |
On-chain LLM inference for Solana programs via CPI callbacks
This repository provides a Solana-native AI oracle that allows smart contracts to request LLM inference, receive results asynchronously, and process responses on-chain via verified callbacks.
It is designed for:
The core crate is solana-llm-oracle, which exposes a CPI interface that Anchor programs can safely integrate.
This pattern ensures:
Add the oracle crate to your Anchor program with CPI enabled:
```bash
cargo add solana-llm-oracle --features cpi
```
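Alternatively, declare the dependency in Cargo.toml yourself (version pinned to the release shown in the metadata above):

```toml
[dependencies]
solana-llm-oracle = { version = "0.1.0", features = ["cpi"] }
```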
Define your agent prompt and create a chat context via CPI.
```rust
use anchor_lang::prelude::*;
use solana_llm_oracle::cpi::{
    accounts::CreateChat,
    create_chat,
};

const AGENT_DESC: &str = "You are a helpful assistant.";

pub fn initialize(ctx: Context<Initialize>, seed: u8) -> Result<()> {
    // Store the chat context on your agent account
    ctx.accounts.agent.chat_context = ctx.accounts.chat_context.key();
    ctx.accounts.agent.bump = ctx.bumps.agent;

    let cpi_program = ctx.accounts.oracle_program.to_account_info();
    let cpi_accounts = CreateChat {
        user: ctx.accounts.signer.to_account_info(),
        chat_context: ctx.accounts.chat_context.to_account_info(),
        system_program: ctx.accounts.system_program.to_account_info(),
    };
    let cpi_ctx = CpiContext::new(cpi_program, cpi_accounts);

    create_chat(cpi_ctx, AGENT_DESC.to_string(), seed)?;
    Ok(())
}
```
This creates a persistent AI agent context on-chain that can be reused for multiple interactions.
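The handler above assumes an Initialize accounts context roughly like the sketch below. The Agent account, its seeds, and its space are illustrative assumptions of this README, not types exported by the crate:

```rust
#[derive(Accounts)]
pub struct Initialize<'info> {
    #[account(mut)]
    pub signer: Signer<'info>,
    // Your program's own state; the layout and seeds here are placeholders.
    #[account(init, payer = signer, space = 8 + 32 + 1, seeds = [b"agent"], bump)]
    pub agent: Account<'info, Agent>,
    /// CHECK: created and owned by the oracle program via CPI
    #[account(mut)]
    pub chat_context: UncheckedAccount<'info>,
    /// CHECK: the solana-llm-oracle program
    pub oracle_program: UncheckedAccount<'info>,
    pub system_program: Program<'info, System>,
}

#[account]
pub struct Agent {
    pub chat_context: Pubkey,
    pub bump: u8,
}
```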
To request inference, call create_llm_inference via CPI.
```rust
use solana_llm_oracle::cpi::{
    accounts::CreateLlmInference,
    create_llm_inference,
};
use solana_llm_oracle::state::AccountMeta;

pub fn chat_with_llm(ctx: Context<ChatWithLlm>, text: String) -> Result<()> {
    let cpi_program = ctx.accounts.oracle_program.to_account_info();
    let cpi_accounts = CreateLlmInference {
        user: ctx.accounts.user.to_account_info(),
        inference: ctx.accounts.inference.to_account_info(),
        chat_context: ctx.accounts.chat_context.to_account_info(),
        system_program: ctx.accounts.system_program.to_account_info(),
    };
    let cpi_ctx = CpiContext::new(cpi_program, cpi_accounts);

    // Callback discriminator (must be exactly 8 bytes)
    let callback_discriminator: [u8; 8] = instruction::CallbackFromLlm::DISCRIMINATOR
        .try_into()
        .expect("Invalid discriminator");

    create_llm_inference(
        cpi_ctx,
        text,
        crate::ID,              // callback program id
        callback_discriminator, // callback instruction
        Some(vec![
            AccountMeta {
                pubkey: ctx.accounts.user.key(),
                is_signer: false,
                is_writable: false,
            },
            AccountMeta {
                pubkey: ctx.accounts.cred_score.key(),
                is_signer: false,
                is_writable: true,
            },
        ]),
    )?;
    Ok(())
}
```
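A matching ChatWithLlm context might be declared as follows. The account names mirror the handler above, while the constraints and the CredScore type are assumptions for illustration:

```rust
#[derive(Accounts)]
pub struct ChatWithLlm<'info> {
    #[account(mut)]
    pub user: Signer<'info>,
    /// CHECK: initialized and owned by the oracle program via CPI
    #[account(mut)]
    pub inference: UncheckedAccount<'info>,
    /// CHECK: the chat context created during initialize
    #[account(mut)]
    pub chat_context: UncheckedAccount<'info>,
    // Example program state written by the callback (illustrative).
    #[account(mut)]
    pub cred_score: Account<'info, CredScore>,
    /// CHECK: the solana-llm-oracle program
    pub oracle_program: UncheckedAccount<'info>,
    pub system_program: Program<'info, System>,
}

#[account]
pub struct CredScore {
    pub score: u8,
}
```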
The last argument is Option<Vec<AccountMeta>>:

- Some(...) → pass extra accounts to the callback
- None → the callback only receives the required accounts

This allows dynamic account routing to your callback.
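For instance, if the callback needs no extra accounts, the same call (reusing the variables from the snippet above) reduces to:

```rust
// No extra accounts: the callback receives only the oracle's required accounts.
create_llm_inference(cpi_ctx, text, crate::ID, callback_discriminator, None)?;
```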
The oracle will invoke your callback instruction with the LLM response.
The Config account MUST be the first account in the callback context.
```rust
// ProgramError is reachable through anchor_lang's solana_program re-export.
use anchor_lang::solana_program::program_error::ProgramError;

pub fn callback_from_llm(
    ctx: Context<CallbackFromLlm>,
    response: String,
) -> Result<()> {
    // Verify oracle identity
    if !ctx.accounts.config.to_account_info().is_signer {
        return Err(ProgramError::InvalidAccountData.into());
    }

    msg!("AI response received: {}", response);

    // Example: parse numeric output
    let parsed_score: u8 = response.trim().parse::<u8>().map_err(|_| {
        msg!("Failed to parse AI response");
        ProgramError::InvalidInstructionData
    })?;

    let cred_score = &mut ctx.accounts.cred_score;
    cred_score.score = parsed_score.min(100);
    Ok(())
}
```
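A callback context that satisfies the "Config first" rule could look like the sketch below; config is verified manually in the handler, and CredScore is the illustrative account type from the earlier sketch:

```rust
#[derive(Accounts)]
pub struct CallbackFromLlm<'info> {
    /// CHECK: must be the first account; the oracle's Config PDA signs the
    /// callback, and the handler checks is_signer manually.
    pub config: UncheckedAccount<'info>,
    /// CHECK: extra account forwarded from create_llm_inference
    pub user: UncheckedAccount<'info>,
    // Writable extra account forwarded from create_llm_inference.
    #[account(mut)]
    pub cred_score: Account<'info, CredScore>,
}
```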
This repository includes a complete working example: defi-score-agent-example, a Solana program that asks the oracle for a DeFi credit score and writes the parsed result into the user's cred_score account via the callback shown above.
Agent Prompt:

```text
You are a DeFi Credit Agent. Analyze a user's Twitter profile
and activity to infer their on-chain reputation, trustworthiness,
and DeFi literacy. Output a single DeFi Credit Score (0–100)
as an integer. Only return the number.
```
This example demonstrates:
➡️ See: programs/defi-score-agent-example
Agents can be designed to return:
For advanced use cases, see instruction-emitting agents (e.g. token minters, governors, routers).
```bash
anchor build
anchor test
```
Think of this program as a long-running oracle daemon:
```text
┌─────────────┐
│ Solana Prog │
│ (on-chain)  │
│ Inference   │◄──── user tx
│ Context PDA │
└──────┬──────┘
       │
       │ program_subscribe (WebSocket)
       ▼
┌───────────────────────────┐
│ Oracle Process (this code)│
│                           │
│ 1. Detect new Inference   │
│ 2. Deserialize            │
│ 3. Call LLM, get response │
│ 4. Build callback ix      │
│ 5. Send tx back to Solana │
└──────────┬────────────────┘
           │
           ▼
┌─────────────────┐
│ Callback IX     │
│ marks processed │
│ writes response │
└─────────────────┘
```
This is NOT a request/response server. It is a state-watcher + executor.
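A minimal sketch of that loop is shown below. Every helper is a placeholder stub, not part of this crate's API; a real daemon would back them with a WebSocket program_subscribe, an Anchor/Borsh deserializer, an LLM client, and an RPC transaction sender:

```rust
/// Illustrative shape of the off-chain oracle loop; all helpers are placeholders.
struct PendingInference {
    prompt: String,
    processed: bool,
}

async fn next_inference_account_update() -> Option<Vec<u8>> {
    todo!("program_subscribe over WebSocket for the oracle program's accounts")
}

fn try_decode_inference(data: &[u8]) -> Option<PendingInference> {
    todo!("Anchor/Borsh-deserialize the Inference account")
}

async fn call_llm(prompt: &str) -> String {
    todo!("HTTP request to the model provider")
}

async fn send_callback_tx(response: &str) {
    todo!("build the callback instruction and submit the transaction")
}

async fn run_oracle() {
    // 1. Detect new Inference accounts, 2. deserialize, 3. call the LLM,
    // 4. build the callback ix, 5. send the tx back to Solana.
    while let Some(raw) = next_inference_account_update().await {
        let Some(inference) = try_decode_inference(&raw) else { continue };
        if inference.processed {
            continue; // already handled by an earlier callback
        }
        let response = call_llm(&inference.prompt).await;
        send_callback_tx(&response).await;
    }
}
```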