| Crates.io | valyu |
| lib.rs | valyu |
| version | 0.3.1 |
| created_at | 2025-10-17 17:24:18.854977+00 |
| updated_at | 2025-12-19 17:14:46.963768+00 |
| description | Official Rust SDK for the Valyu AI API |
| homepage | https://valyu.ai |
| repository | https://github.com/valyu-network/valyu-rust |
| max_upload_size | |
| id | 1887956 |
| size | 172,363 |
Official Rust SDK for the Valyu AI API.
Search for AIs - Valyu's Deepsearch API gives AI the context it needs. Integrate trusted, high-quality public and proprietary sources, with full-text multimodal retrieval.
Get $10 free credits for the Valyu API when you sign up at platform.valyu.ai! No credit card required.
⚠️ Alpha Release: This SDK is currently in alpha. The API is stable, but some features and interfaces may change based on user feedback. We welcome your input!
We do all the heavy lifting for you: one unified API for all data.
Add this to your Cargo.toml:
[dependencies]
valyu = "0.3"
tokio = { version = "1", features = ["full"] }
Get your API key from platform.valyu.ai.
use valyu::ValyuClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client with your API key
    let client = ValyuClient::new("your-api-key");

    // Perform a simple search
    let response = client.search("quantum computing").await?;

    // Process results
    if let Some(results) = &response.results {
        for result in results {
            println!("{}: {}",
                result.title.as_deref().unwrap_or("Untitled"),
                result.url.as_deref().unwrap_or("No URL")
            );
        }
    }

    Ok(())
}
use valyu::{ValyuClient, DeepSearchRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = ValyuClient::new("your-api-key");

    // Build a custom search request with specific parameters
    let request = DeepSearchRequest::new("artificial intelligence")
        .with_max_results(10)
        .with_search_type("web")
        .with_fast_mode(true)
        .with_response_length("medium")
        .with_relevance_threshold(0.7)
        .with_date_range("2024-01-01", "2024-12-31");

    let response = client.deep_search(&request).await?;

    println!("Transaction ID: {}", response.tx_id.as_deref().unwrap_or("N/A"));
    println!("Cost: ${:.4}", response.total_deduction_dollars.unwrap_or(0.0));

    Ok(())
}
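The builder also accepts source and locale filters (listed in the API reference below). A minimal sketch, where the query and source identifiers are placeholders rather than real sources:

use valyu::DeepSearchRequest;

let request = DeepSearchRequest::new("semiconductor supply chains")
    .with_search_type("web")
    .with_country_code("US")
    .with_included_sources(vec!["example.com".to_string()])
    .with_excluded_sources(vec!["example.org".to_string()]);

let response = client.deep_search(&request).await?;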
Set your API key in one of these ways:
export VALYU_API_KEY="your-api-key-here"
Then in your code:
use std::env;
use valyu::ValyuClient;
let api_key = env::var("VALYU_API_KEY").expect("VALYU_API_KEY must be set");
let client = ValyuClient::new(api_key);
Or pass the key directly when constructing the client:

use valyu::ValyuClient;

let client = ValyuClient::new("your-api-key-here");
For local development, use the dotenvy crate:
[dependencies]
dotenvy = "0.15"
use std::env;
use valyu::ValyuClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let api_key = env::var("VALYU_API_KEY")?;
    let client = ValyuClient::new(api_key);
    Ok(())
}
The SDK uses a custom ValyuError type for detailed error handling:
use valyu::{ValyuClient, ValyuError};

#[tokio::main]
async fn main() {
    let client = ValyuClient::new("your-api-key");

    match client.search("test").await {
        Ok(response) => {
            if response.success {
                println!("Success! Found {} results",
                    response.results.as_ref().map(|r| r.len()).unwrap_or(0));
            } else {
                eprintln!("API returned error: {:?}", response.error);
            }
        }
        Err(ValyuError::InvalidApiKey) => eprintln!("Invalid API key provided"),
        Err(ValyuError::RateLimitExceeded) => eprintln!("Rate limit exceeded - please retry later"),
        Err(ValyuError::ServiceUnavailable) => eprintln!("Service temporarily unavailable"),
        Err(ValyuError::InvalidRequest(msg)) => eprintln!("Invalid request: {}", msg),
        Err(e) => eprintln!("Error: {}", e),
    }
}
use valyu::ValyuClient;
use std::time::Duration;

let http_client = reqwest::Client::builder()
    .timeout(Duration::from_secs(30))
    .build()
    .unwrap();

let client = ValyuClient::with_client("your-api-key", http_client);
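To target a different endpoint (for example, a proxy), the with_base_url constructor listed in the API reference below can be used. The URL here is only a placeholder:

use valyu::ValyuClient;

// Placeholder URL - substitute your own endpoint
let client = ValyuClient::with_base_url("your-api-key", "https://your-proxy.example.com");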
ValyuClient - The main client for interacting with the Valyu API. Methods:
- new(api_key: impl Into<String>) -> Self - Create a new client with an API key
- with_base_url(api_key, base_url) -> Self - Create client with custom base URL
- with_client(api_key, reqwest::Client) -> Self - Create client with custom HTTP client
- search(query: impl Into<String>) -> Result<DeepSearchResponse> - Simple search with default settings
- deep_search(request: &DeepSearchRequest) -> Result<DeepSearchResponse> - Advanced search with custom parameters
- contents(request: &ContentsRequest) -> Result<ContentsResponse> - Extract content from URLs
- answer(request: &AnswerRequest) -> Result<AnswerResponse> - Get AI-powered answers
- ask(query: impl Into<String>) -> Result<AnswerResponse> - Simple answer with defaults
- deepresearch_create(request: &DeepResearchCreateRequest) -> Result<DeepResearchCreateResponse> - Create async research task
- deepresearch_status(task_id) -> Result<DeepResearchStatusResponse> - Get task status
- deepresearch_wait(task_id, poll_interval_secs, max_wait_secs) -> Result<DeepResearchStatusResponse> - Wait for task completion
- deepresearch_list(api_key_id, limit) -> Result<DeepResearchListResponse> - List tasks
- deepresearch_cancel(task_id) -> Result<DeepResearchOperationResponse> - Cancel running task
- deepresearch_delete(task_id) -> Result<DeepResearchOperationResponse> - Delete task
- research(query: impl Into<String>) -> Result<DeepResearchCreateResponse> - Simple research with defaults

DeepSearchRequest - Builder for constructing search requests with optional parameters. Methods:
- new(query: impl Into<String>) -> Self - Create a new request with just a query
- with_max_results(max: u8) -> Self - Set max results (1-20)
- with_search_type(type: impl Into<String>) -> Self - Set search type: "all", "web", or "proprietary"
- with_fast_mode(enabled: bool) -> Self - Enable fast mode for reduced latency
- with_response_length(length: impl Into<String>) -> Self - Set response length: "short", "medium", "large", or "max"
- with_relevance_threshold(threshold: f64) -> Self - Set relevance threshold (0.0-1.0)
- with_included_sources(sources: Vec<String>) -> Self - Specify sources to include
- with_excluded_sources(sources: Vec<String>) -> Self - Specify sources to exclude
- with_date_range(start: impl Into<String>, end: impl Into<String>) -> Self - Set date range (YYYY-MM-DD format)
- with_max_price(price: f64) -> Self - Set maximum CPM price
- with_category(category: impl Into<String>) -> Self - Set category filter
- with_country_code(code: impl Into<String>) -> Self - Set country code (2-letter ISO)
- with_is_tool_call(is_tool_call: bool) -> Self - Set whether this is a tool call

ContentsRequest - Builder for URL content extraction requests. Methods:
- new(urls: Vec<String>) -> Self - Create request with URLs (1-10)
- with_response_length(length: impl Into<String>) -> Self - Set response length preset
- with_custom_response_length(chars: i32) -> Self - Set custom character limit (1K-1M)
- with_extract_effort(effort: impl Into<String>) -> Self - Set extraction effort: "normal", "high", or "auto"
- with_summary(enabled: bool) -> Self - Enable/disable default summarization
- with_summary_instructions(instructions: impl Into<String>) -> Self - Set custom summary instructions
- with_summary_schema(schema: serde_json::Value) -> Self - Set JSON schema for structured extraction
- with_max_price_dollars(max_price: f64) -> Self - Set maximum price in dollars

AnswerRequest - Builder for AI-powered answer requests. Methods:
- new(query: impl Into<String>) -> Self - Create request with query
- with_system_instructions(instructions: impl Into<String>) -> Self - Set custom AI instructions (max 2000 chars)
- with_structured_output(schema: serde_json::Value) -> Self - Set JSON schema for structured response
- with_search_type(type: impl Into<String>) -> Self - Set search type: "all", "web", or "proprietary"
- with_fast_mode(enabled: bool) -> Self - Enable fast mode
- with_data_max_price(price: f64) -> Self - Set maximum data CPM price
- with_included_sources(sources: Vec<String>) -> Self - Set included sources
- with_excluded_sources(sources: Vec<String>) -> Self - Set excluded sources
- with_date_range(start, end) -> Self - Set date range filter
- with_country_code(code: impl Into<String>) -> Self - Set country code

DeepSearchResponse fields:
- success: bool - Whether the request succeeded
- error: Option<String> - Error message if failed
- tx_id: Option<String> - Transaction ID
- query: Option<String> - The search query
- results: Option<Vec<SearchResult>> - Array of search results
- results_by_source: Option<ResultsBySource> - Breakdown of results by source
- total_deduction_dollars: Option<f64> - Cost in dollars
- total_characters: Option<i32> - Total characters in results

SearchResult - Individual search result with fields including:
- title: Option<String> - Result title
- url: Option<String> - Result URL
- content: Option<String> - Result content/snippet
- source: Option<String> - Source type
- publication_date: Option<String> - Publication date
- authors: Option<Vec<String>> - List of authors
- citation: Option<String> - Citation information

ContentsResponse fields:
- success: bool - Whether request succeeded
- results: Option<Vec<ContentResult>> - Extracted content results
- urls_requested: Option<i32> - Number of URLs requested
- urls_processed: Option<i32> - Number successfully processed
- urls_failed: Option<i32> - Number that failed
- total_cost_dollars: Option<f64> - Total cost

AnswerResponse fields:
- success: bool - Whether request succeeded
- contents: Option<serde_json::Value> - AI-generated answer (string or structured)
- data_type: Option<String> - "unstructured" or "structured"
- search_results: Option<Vec<AnswerSearchResult>> - Sources used
- search_metadata: Option<AnswerSearchMetadata> - Search metadata
- ai_usage: Option<AiUsage> - Token usage statistics
- cost: Option<AnswerCost> - Cost breakdown (search + AI)

DeepResearchCreateRequest - Builder for creating comprehensive async research tasks. Methods:
- new(input: impl Into<String>) -> Self - Create a new research request
- with_mode(mode: DeepResearchMode) -> Self - Set research mode: Fast, Lite, or Heavy
- with_output_formats(formats: Vec<String>) -> Self - Set output formats: ["markdown"], ["markdown", "pdf"]
- with_structured_output(schema: serde_json::Value) -> Self - Use JSON schema for structured output
- with_strategy(strategy: impl Into<String>) -> Self - Set natural language research strategy
- with_search(config: DeepResearchSearchConfig) -> Self - Set search configuration
- with_urls(urls: Vec<String>) -> Self - Add URLs to extract content from (max 10)
- with_files(files: Vec<DeepResearchFileAttachment>) -> Self - Add file attachments (max 10)
- with_mcp_servers(servers: Vec<DeepResearchMCPServerConfig>) -> Self - Add MCP servers (max 5)
- with_code_execution(enabled: bool) -> Self - Enable/disable code execution
- with_previous_reports(ids: Vec<String>) -> Self - Use previous reports as context (max 3)
- with_webhook_url(url: impl Into<String>) -> Self - Set webhook for completion notification
- with_metadata(metadata: serde_json::Value) -> Self - Set custom metadata

DeepResearchMode - Research mode options:
- Fast - Quick lookups, simple questions (1-2 min, $0.15)
- Lite - Moderate research depth (5-10 min, $0.50)
- Heavy - Comprehensive analysis (15-90 min, $1.50)

DeepResearchStatusResponse fields:
- success: bool - Whether request succeeded
- deepresearch_id: Option<String> - Task identifier
- status: Option<DeepResearchStatus> - Current status: Queued, Running, Completed, Failed, Cancelled
- query: Option<String> - Original query
- mode: Option<DeepResearchMode> - Research mode used
- progress: Option<DeepResearchProgress> - Current/total steps (when running)
- output: Option<serde_json::Value> - Research output (when completed)
- pdf_url: Option<String> - PDF download URL (if requested)
- images: Option<Vec<DeepResearchImage>> - Generated images
- sources: Option<Vec<DeepResearchSource>> - Sources used
- usage: Option<DeepResearchUsage> - Cost breakdown

The repository includes several examples demonstrating different use cases:
cargo run --example basic
use valyu::ValyuClient;
let client = ValyuClient::new("your-api-key");
let response = client.search("quantum computing").await?;
println!("Found {} results", response.results.as_ref().map(|r| r.len()).unwrap_or(0));
Search academic papers on specific topics:
use valyu::DeepSearchRequest;

let request = DeepSearchRequest::new("transformer architecture improvements")
    .with_search_type("proprietary")
    .with_included_sources(vec!["valyu/valyu-arxiv".to_string()])
    .with_relevance_threshold(0.7)
    .with_max_results(10);

let response = client.deep_search(&request).await?;
Search recent web content within a date range:

let request = DeepSearchRequest::new("AI safety developments")
    .with_search_type("web")
    .with_date_range("2024-01-01", "2024-12-31")
    .with_max_results(5);

let response = client.deep_search(&request).await?;
Search both web and proprietary sources:
let request = DeepSearchRequest::new("quantum computing breakthroughs")
    .with_search_type("all")
    .with_category("technology")
    .with_relevance_threshold(0.6)
    .with_max_price(50.0);

let response = client.deep_search(&request).await?;
Inspect the cost, per-source breakdown, and individual results:

let response = client.search("climate change solutions").await?;

if response.success {
    println!("Search cost: ${:.4}", response.total_deduction_dollars.unwrap_or(0.0));

    if let Some(by_source) = &response.results_by_source {
        println!("Sources: Web={:?}, Proprietary={:?}",
            by_source.web, by_source.proprietary);
    }

    if let Some(results) = &response.results {
        for (i, result) in results.iter().enumerate() {
            println!("\n{}. {}", i + 1, result.title.as_deref().unwrap_or("Untitled"));
            println!("   Source: {}", result.source.as_deref().unwrap_or("Unknown"));
            if let Some(content) = &result.content {
                // Take the first 200 characters without risking a panic on a UTF-8 boundary
                let preview: String = content.chars().take(200).collect();
                println!("   Content: {}...", preview);
            }
        }
    }
}
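Search and content extraction compose: the URLs returned by a search can be fed into a ContentsRequest for full-page extraction. A sketch (not one of the bundled examples):

use valyu::ContentsRequest;

let response = client.search("climate change solutions").await?;

// Collect up to 5 result URLs and extract their full content
let urls: Vec<String> = response
    .results
    .unwrap_or_default()
    .into_iter()
    .filter_map(|r| r.url)
    .take(5)
    .collect();

if !urls.is_empty() {
    let contents = client.contents(&ContentsRequest::new(urls)).await?;
    println!("Extracted {} pages",
        contents.results.as_ref().map(|r| r.len()).unwrap_or(0));
}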
cargo run --example contents
Basic content extraction from URLs:
use valyu::ContentsRequest;

let request = ContentsRequest::new(vec![
    "https://example.com/article".to_string(),
]);

let response = client.contents(&request).await?;

if let Some(results) = &response.results {
    for result in results {
        println!("Title: {}", result.title.as_deref().unwrap_or("Untitled"));
        if let Some(content) = &result.content {
            println!("Content: {:?}", content);
        }
    }
}
Request full-length extraction with summarization enabled:

let request = ContentsRequest::new(vec![
    "https://docs.python.org/3/tutorial/".to_string(),
])
.with_summary(true)
.with_response_length("max");

let response = client.contents(&request).await?;
Extract structured data using a JSON schema:

use serde_json::json;

let company_schema = json!({
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "founded_year": {"type": "integer"},
        "key_products": {
            "type": "array",
            "items": {"type": "string"},
            "maxItems": 3
        }
    }
});

let request = ContentsRequest::new(vec![
    "https://en.wikipedia.org/wiki/OpenAI".to_string(),
])
.with_summary_schema(company_schema)
.with_response_length("max");

let response = client.contents(&request).await?;
cargo run --example answer
Get AI-generated answers with sources:
use valyu::AnswerRequest;

let request = AnswerRequest::new("What are the latest developments in quantum computing?")
    .with_search_type("web")
    .with_system_instructions("Focus on breakthroughs from 2024");

let response = client.answer(&request).await?;

if let Some(contents) = &response.contents {
    println!("Answer: {}", contents);
}

if let Some(sources) = &response.search_results {
    println!("\nSources ({}):", sources.len());
    for source in sources {
        println!("  - {}", source.title.as_deref().unwrap_or("Untitled"));
    }
}
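For one-off questions there is also the ask convenience method listed in the API reference, which uses default settings. A minimal sketch:

// ask() is the one-line counterpart to answer()
let response = client.ask("What is retrieval-augmented generation?").await?;

if let Some(contents) = &response.contents {
    println!("Answer: {}", contents);
}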
cargo run --example answer_structured
Request a structured answer constrained by a JSON schema:

use serde_json::json;

let schema = json!({
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "key_points": {
            "type": "array",
            "items": {"type": "string"}
        }
    }
});

let request = AnswerRequest::new("quantum computing")
    .with_structured_output(schema);

let response = client.answer(&request).await?;
Extract and summarize multiple URLs with custom instructions and a price cap:

let request = ContentsRequest::new(vec![
    "https://www.valyu.ai/".to_string(),
    "https://docs.valyu.ai/overview".to_string(),
    "https://www.valyu.ai/blogs/why-ai-agents-and-llms-struggle-with-search-and-data-access".to_string(),
])
.with_summary_instructions("Provide key takeaways in bullet points")
.with_max_price_dollars(2.0);

let response = client.contents(&request).await?;

println!("Processed {}/{} URLs",
    response.urls_processed.unwrap_or(0),
    response.urls_requested.unwrap_or(0));
println!("Cost: ${:.4}", response.total_cost_dollars.unwrap_or(0.0));
cargo run --example deepresearch
Perform comprehensive async research:
use valyu::{DeepResearchCreateRequest, DeepResearchMode};

// Create a research task
let request = DeepResearchCreateRequest::new("What are the key differences between RAG and fine-tuning?")
    .with_mode(DeepResearchMode::Lite)
    .with_output_formats(vec!["markdown".to_string()]);

let task = client.deepresearch_create(&request).await?;
println!("Task created: {:?}", task.deepresearch_id);

// Wait for completion
let result = client.deepresearch_wait(
    task.deepresearch_id.as_ref().unwrap(),
    5,   // Poll every 5 seconds
    900, // Timeout after 15 minutes
).await?;

// Access results
if let Some(output) = &result.output {
    println!("Research output: {}", output);
}
if let Some(sources) = &result.sources {
    println!("Used {} sources", sources.len());
}
if let Some(usage) = &result.usage {
    println!("Total cost: ${:.4}", usage.total_cost);
}
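If you prefer to manage the task lifecycle yourself, the status and cancel methods from the API reference can be called directly. A sketch, assuming DeepResearchStatus is exported at the crate root alongside the other types:

use valyu::DeepResearchStatus;

let task_id = task.deepresearch_id.as_ref().unwrap();

// Poll once instead of blocking with deepresearch_wait
let status = client.deepresearch_status(task_id).await?;
match status.status {
    Some(DeepResearchStatus::Completed) => println!("Output: {:?}", status.output),
    Some(DeepResearchStatus::Failed) => eprintln!("Task failed"),
    // Still queued or running - cancel it if the result is no longer needed
    _ => { client.deepresearch_cancel(task_id).await?; }
}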
# Basic search example
cargo run --example basic
# Advanced search with custom parameters
cargo run --example advanced
# Content extraction from URLs
cargo run --example contents
# AI-powered answers
cargo run --example answer
# Structured answer output
cargo run --example answer_structured
# DeepResearch async research tasks
cargo run --example deepresearch
# Custom HTTP client configuration
cargo run --example custom_client
Note: You'll need to create a .env file with your API key:
cp .env.example .env
# Edit .env and add your VALYU_API_KEY
As an alpha release, there are some known limitations:
No built-in retry logic: The SDK does not automatically retry requests that fail due to transient errors (503 Service Unavailable, 429 Rate Limit). Implement your own retry logic with exponential backoff if needed.
No automatic rate limiting: Rate limit management is left to the user. If you receive 429 errors, implement delays between requests or use a rate limiting library.
No streaming support: All responses are returned as complete objects. If the API adds streaming in the future, SDK updates will be required.
Limited client-side validation: The SDK performs minimal validation on request parameters (e.g., checking ranges for max_num_results). Invalid parameters will result in API errors; a small validation sketch appears after the rate-limiting example below.
No pagination support: If the API adds pagination for large result sets in the future, SDK updates will be needed.
Alpha status: As an alpha release, some APIs and type signatures may change based on user feedback. We will follow semantic versioning for breaking changes.
Example: retrying transient failures with exponential backoff:

use valyu::{ValyuClient, ValyuError};
use std::time::Duration;
use tokio::time::sleep;

async fn search_with_retry(
    client: &ValyuClient,
    query: &str,
    max_retries: u32,
) -> Result<valyu::DeepSearchResponse, ValyuError> {
    let mut retries = 0;
    loop {
        match client.search(query).await {
            Ok(response) => return Ok(response),
            // Retry only on transient errors, preserving the original error on give-up
            Err(e @ ValyuError::RateLimitExceeded) | Err(e @ ValyuError::ServiceUnavailable) => {
                if retries >= max_retries {
                    return Err(e);
                }
                retries += 1;
                // Exponential backoff: 2, 4, 8, ... seconds
                let delay = Duration::from_secs(2_u64.pow(retries));
                sleep(delay).await;
            }
            Err(e) => return Err(e),
        }
    }
}
Example: simple client-side rate limiting between requests:

use tokio::time::{sleep, Duration};

// Simple rate limiter: max 10 requests per second
let mut last_request = std::time::Instant::now();
let min_interval = Duration::from_millis(100);

for query in queries {
    let elapsed = last_request.elapsed();
    if elapsed < min_interval {
        sleep(min_interval - elapsed).await;
    }
    let response = client.search(&query).await?;
    last_request = std::time::Instant::now();
}
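Because client-side validation is minimal (see the limitations above), you can also sanity-check parameters before building a request. A small sketch using the documented ranges (1-20 results, 0.0-1.0 relevance threshold):

// Validate request parameters against the documented ranges
fn validate_params(max_results: u8, relevance_threshold: f64) -> Result<(), String> {
    if !(1..=20).contains(&max_results) {
        return Err(format!("max_results must be between 1 and 20, got {}", max_results));
    }
    if !(0.0..=1.0).contains(&relevance_threshold) {
        return Err(format!("relevance_threshold must be between 0.0 and 1.0, got {}", relevance_threshold));
    }
    Ok(())
}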
We welcome feedback on these limitations and suggestions for improvement!
Add valyu = "0.3" to your Cargo.toml. This crate requires Rust 1.70 or later.
Licensed under either of:
at your option.
Contributions are welcome! Please feel free to submit a Pull Request.
Before submitting a PR, please make sure to:
- Run cargo test to ensure all tests pass
- Run cargo fmt to format your code
- Run cargo clippy to check for common mistakes

This SDK is in alpha and we value your feedback! Please open an issue on GitHub.
Your input helps us improve the SDK for everyone.