| Crates.io | salesforce-client |
| lib.rs | salesforce-client |
| version | 0.2.0 |
| created_at | 2026-01-08 14:00:23.846968+00 |
| updated_at | 2026-01-08 14:00:23.846968+00 |
| description | Production-grade Salesforce REST API client with OAuth auto-refresh, caching, retry logic, and rate limiting |
| homepage | https://github.com/Alkaness/salesforce-client |
| repository | https://github.com/Alkaness/salesforce-client |
| max_upload_size | |
| id | 2030275 |
| size | 179,825 |
A production-grade, type-driven Salesforce REST API client library for Rust.

Designed for production use, it ships enterprise-grade features: automatic OAuth token management, intelligent caching, retry logic with exponential backoff, and rate limiting.
Add to your `Cargo.toml`:

```toml
[dependencies]
salesforce-client = "0.2.0"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```
```rust
use salesforce_client::{SalesforceClient, ClientConfig, SfError};
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Deserialize, Serialize)]
struct Account {
    #[serde(rename = "Id")]
    id: String,
    #[serde(rename = "Name")]
    name: String,
    #[serde(rename = "AnnualRevenue")]
    annual_revenue: Option<f64>,
}

#[tokio::main]
async fn main() -> Result<(), SfError> {
    let config = ClientConfig::new(
        "https://yourinstance.salesforce.com",
        "your_access_token",
    );
    let client = SalesforceClient::new(config);

    let accounts: Vec<Account> = client
        .query("SELECT Id, Name, AnnualRevenue FROM Account LIMIT 10")
        .await?;

    for account in accounts {
        println!("{}: {:?}", account.name, account.annual_revenue);
    }
    Ok(())
}
```
```rust
use salesforce_client::{SalesforceClient, OAuthCredentials};

let credentials = OAuthCredentials {
    client_id: "your_client_id".to_string(),
    client_secret: "your_client_secret".to_string(),
    refresh_token: Some("your_refresh_token".to_string()),
    username: None,
    password: None,
};

let client = SalesforceClient::with_oauth(credentials).await?;
```
```rust
// Create
#[derive(Serialize)]
struct NewAccount {
    #[serde(rename = "Name")]
    name: String,
}

let new_account = NewAccount { name: "Acme Corporation".to_string() };
let response = client.insert("Account", &new_account).await?;
println!("Created: {}", response.id);

// Update
#[derive(Serialize)]
struct AccountUpdate {
    #[serde(rename = "Name")]
    name: String,
}

let update = AccountUpdate { name: "Acme Corp".to_string() };
client.update("Account", "001xx000003DGbX", &update).await?;

// Delete
client.delete("Account", "001xx000003DGbX").await?;

// Upsert (`account_data` is any Serialize-able value)
let upsert = UpsertBuilder::new("External_Id__c", "EXT-12345");
client.upsert("Account", upsert, &account_data).await?;
```
```rust
// Automatic pagination - fetches all records
let all_accounts: Vec<Account> = client
    .query_all("SELECT Id, Name FROM Account")
    .await?;

// Manual pagination with a streaming iterator
let mut pages = client.query_paginated::<Account>("SELECT Id FROM Account").await?;
while let Some(batch) = pages.next().await? {
    for account in batch {
        process_account(account);
    }
}
```
```rust
use salesforce_client::QueryBuilder;

let query = QueryBuilder::select(&["Id", "Name", "AnnualRevenue"])
    .from("Account")
    .where_clause("AnnualRevenue > 1000000")
    .and("Industry = 'Technology'")
    .order_by_desc("AnnualRevenue")
    .limit(10)
    .build();

let accounts: Vec<Account> = client.query(&query).await?;
```
Module layout:
- `auth.rs` - OAuth 2.0 authentication and token management (200 lines)
- `cache.rs` - Query and record caching with TTL/TTI (350 lines)
- `crud.rs` - CRUD operation implementations (250 lines)
- `error.rs` - Comprehensive error type definitions (60 lines)
- `pagination.rs` - Automatic pagination handling (180 lines)
- `query_builder.rs` - Type-safe query construction (300 lines)
- `rate_limit.rs` - API rate limiting (200 lines)
- `retry.rs` - Retry logic with exponential backoff (180 lines)
- `lib.rs` - Main client and integration (650 lines)

The main client struct that orchestrates all operations:
```rust
pub struct SalesforceClient {
    config: Arc<ClientConfig>,
    http_client: reqwest::Client,
    query_cache: Arc<QueryCache>,
    rate_limiter: Arc<RateLimiter>,
    crud: Arc<crud::CrudOperations>,
}
```
Configuration builder for customizing client behavior:
```rust
let config = ClientConfig::new(base_url, access_token)
    .with_retry(RetryConfig::new().max_retries(3))
    .with_cache(CacheConfig::new().ttl(Duration::from_secs(300)))
    .with_rate_limit(RateLimitConfig::new().requests_per_second(4));
```
Comprehensive error handling with context:
```rust
pub enum SfError {
    Network(reqwest::Error),
    Serialization(serde_json::Error),
    Api { status: u16, body: String },
    Auth(String),
    RateLimit { retry_after: Option<u64> },
    NotFound { sobject: String, id: String },
    InvalidQuery(String),
    Config(String),
    Cache(String),
    Timeout { seconds: u64 },
}
```
`query<T>(&self, soql: impl AsRef<str>) -> SfResult<Vec<T>>`

Executes a SOQL query with automatic caching, retry, and rate limiting.

Parameters:
- `soql` - SOQL query string

Returns:
- `Result<Vec<T>, SfError>` - Deserialized records or error

`query_all<T>(&self, soql: impl AsRef<str>) -> SfResult<Vec<T>>`

Fetches all records with automatic pagination.

Warning: Loads all results into memory. For very large datasets (>100k records), use `query_paginated` instead.

`query_paginated<T>(&self, soql: &str) -> SfResult<PaginatedQuery<T>>`

Returns an iterator for manual pagination control. The most memory-efficient option for large datasets.
`insert<T: Serialize>(&self, sobject: &str, data: &T) -> SfResult<InsertResponse>`

Creates a new record.

Parameters:
- `sobject` - Salesforce object type (e.g., "Account")
- `data` - Record data to insert

Returns:
- `InsertResponse` containing the new record ID

`update<T: Serialize>(&self, sobject: &str, id: &str, data: &T) -> SfResult<()>`

Updates an existing record.

`delete(&self, sobject: &str, id: &str) -> SfResult<()>`

Deletes a record.

`upsert<T: Serialize>(&self, sobject: &str, builder: UpsertBuilder, data: &T) -> SfResult<InsertResponse>`

Inserts or updates a record based on an external ID.

`clear_cache(&self)`

Clears the query cache.

`config(&self) -> &ClientConfig`

Returns the current configuration.

`rate_limit_status(&self) -> RateLimitStatus`

Returns the current rate limiter status.
```rust
let config = ClientConfig::new(base_url, token)
    .with_retry(RetryConfig::new()
        .max_retries(5)
        .initial_interval(Duration::from_millis(500))
        .max_interval(Duration::from_secs(30)))
    .with_cache(CacheConfig::new()
        .max_capacity(10_000)
        .ttl(Duration::from_secs(300)))
    .with_rate_limit(RateLimitConfig::new()
        .requests_per_second(10)
        .burst_size(20));

let client = SalesforceClient::new(config);
```
```rust
let client1 = client.clone();
let client2 = client.clone();

let (accounts, contacts) = tokio::join!(
    client1.query::<Account>("SELECT Id FROM Account LIMIT 100"),
    client2.query::<Contact>("SELECT Id FROM Contact LIMIT 100")
);
let accounts = accounts?;
let contacts = contacts?;
```
```rust
#[derive(Deserialize)]
struct Contact {
    #[serde(rename = "Id")]
    id: String,
    #[serde(rename = "Account")]
    account: Option<AccountRef>,
}

#[derive(Deserialize)]
struct AccountRef {
    #[serde(rename = "Name")]
    name: String,
}

let contacts: Vec<Contact> = client
    .query("SELECT Id, Account.Name FROM Contact")
    .await?;
```
```rust
match client.query::<Account>("SELECT Id FROM Account").await {
    Ok(accounts) => {
        println!("Retrieved {} accounts", accounts.len());
    }
    Err(SfError::Network(e)) => {
        eprintln!("Network error: {}", e);
    }
    Err(SfError::Serialization(e)) => {
        eprintln!("Deserialization error: {}", e);
    }
    Err(SfError::Api { status, body }) => {
        eprintln!("Salesforce API error ({}): {}", status, body);
    }
    Err(SfError::RateLimit { retry_after }) => {
        eprintln!("Rate limit exceeded, retry after {:?}", retry_after);
    }
    Err(e) => {
        eprintln!("Other error: {}", e);
    }
}
```
```rust
RetryConfig::new()
    .max_retries(3)                                // Maximum retry attempts
    .initial_interval(Duration::from_millis(500))  // Initial backoff
    .max_interval(Duration::from_secs(30))         // Maximum backoff
```
Retryable errors:
```rust
CacheConfig::new()
    .max_capacity(10_000)           // Maximum cached entries
    .ttl(Duration::from_secs(300))  // Time to live
    .tti(Duration::from_secs(60))   // Time to idle
```
Cache is automatically invalidated on:
- `clear_cache()` calls

```rust
RateLimitConfig::new()
    .requests_per_second(4)  // Conservative default
    .burst_size(10)          // Burst capacity
```
Salesforce limits: 100 API calls per 20 seconds per user (default).
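To make `requests_per_second` and `burst_size` concrete, here is a minimal token-bucket sketch. The names and structure are illustrative assumptions, not the crate's internals (which delegate to the `governor` crate):

```rust
// Illustrative token bucket: `capacity` plays the role of burst_size,
// `refill_rate` the role of requests_per_second.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_rate: f64,
}

impl TokenBucket {
    fn new(requests_per_second: f64, burst_size: f64) -> Self {
        Self {
            capacity: burst_size,
            tokens: burst_size, // start with a full burst
            refill_rate: requests_per_second,
        }
    }

    // Advance the bucket by `elapsed_secs`, then try to take one token.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_rate).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

The effect is that up to `burst_size` requests go through immediately, after which throughput settles at the steady `requests_per_second` rate.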
All operations return `Result<T, SfError>`, where `SfError` provides detailed context:
- `Network` - Connection failures, timeouts
- `Serialization` - JSON parsing errors
- `Api` - Non-success HTTP responses with status and body
- `Auth` - Authentication failures
- `RateLimit` - API quota exceeded
- `NotFound` - Record not found
- `InvalidQuery` - SOQL syntax error
- `Config` - Configuration error
- `Cache` - Caching error
- `Timeout` - Operation timeout

Use the `?` operator for clean error propagation:
```rust
async fn sync_accounts() -> Result<(), SfError> {
    let accounts = client.query::<Account>("SELECT Id FROM Account").await?;
    process_accounts(accounts)?;
    Ok(())
}
```
Query performance benefits from caching; for bounded memory usage, prefer `query_paginated` on large datasets.

| Feature | This Library | rustforce | rust_sync_force | swissknife |
|---|---|---|---|---|
| Async/Await | Yes | Yes | No | Yes |
| OAuth Auto-Refresh | Yes | No | No | No |
| Caching | Yes | No | No | No |
| Retry Logic | Yes | No | No | No |
| Rate Limiting | Yes | No | No | No |
| Auto-Pagination | Yes | Manual | Manual | Manual |
| Query Builder | Yes | No | No | No |
| CRUD Operations | Yes | Yes | Yes | Yes |
| Bulk API | Planned | Yes | No | No |
| Tracing | Yes | No | No | No |
| Error Types | 10 | 3 | 2 | Generic |
| Documentation | Extensive | Basic | Minimal | Basic |
| Maintenance | Active (2026) | Stale (2020) | Stale (2021) | Active |
Generic methods with trait bounds ensure type safety at compile time:
```rust
pub async fn query<T>(&self, soql: impl AsRef<str>) -> SfResult<Vec<T>>
where
    T: DeserializeOwned + Serialize + Clone,
```
Automatic token refresh is handled by a `TokenManager` guarded by a thread-safe `RwLock`.
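The read-mostly refresh pattern can be sketched as follows; this is a simplified, synchronous illustration with hypothetical names (`TokenState`, `fetch_new_token`), not the crate's actual code:

```rust
use std::sync::RwLock;
use std::time::{Duration, Instant};

struct TokenManager {
    state: RwLock<TokenState>,
}

struct TokenState {
    access_token: String,
    expires_at: Instant,
}

impl TokenManager {
    fn new(token: &str, ttl: Duration) -> Self {
        Self {
            state: RwLock::new(TokenState {
                access_token: token.to_string(),
                expires_at: Instant::now() + ttl,
            }),
        }
    }

    // Fast path: many callers can check the token concurrently under a read lock.
    fn token(&self) -> String {
        if let Ok(state) = self.state.read() {
            if Instant::now() < state.expires_at {
                return state.access_token.clone();
            }
        }
        self.refresh() // slow path: take the write lock and refresh
    }

    fn refresh(&self) -> String {
        let mut state = self.state.write().unwrap();
        // Re-check after acquiring the write lock: another caller may have
        // refreshed the token while we were waiting.
        if Instant::now() >= state.expires_at {
            state.access_token = fetch_new_token(); // stand-in for the OAuth refresh call
            state.expires_at = Instant::now() + Duration::from_secs(3600);
        }
        state.access_token.clone()
    }
}

// Hypothetical stand-in for the actual OAuth refresh request.
fn fetch_new_token() -> String {
    "refreshed_token".to_string()
}
```

The double-check inside `refresh` prevents a thundering herd of refresh requests when the token expires under load.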
Caching is two-tier, covering both queries and records, with TTL/TTI expiry.
Retry logic is a custom implementation (the `backoff` crate was avoided due to lifetime issues).
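A hand-rolled exponential backoff can be sketched like this; the doubling factor and function names are illustrative assumptions, mirroring `RetryConfig`'s knobs rather than reproducing the crate's exact schedule:

```rust
use std::time::Duration;

// Build the backoff schedule: start at `initial`, double each attempt,
// cap at `max`, for at most `max_retries` attempts.
fn backoff_delays(max_retries: u32, initial: Duration, max: Duration) -> Vec<Duration> {
    let mut delays = Vec::new();
    let mut current = initial;
    for _ in 0..max_retries {
        delays.push(current);
        current = (current * 2).min(max);
    }
    delays
}

// Retry a fallible operation using that schedule (synchronous sketch;
// the crate's implementation is async and retries only transient errors).
fn retry_with_backoff<T, E>(
    delays: &[Duration],
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = op();
    for delay in delays {
        if attempt.is_ok() {
            break;
        }
        std::thread::sleep(*delay);
        attempt = op();
    }
    attempt
}
```

With `max_retries(5)`, `initial_interval(500ms)`, and `max_interval(30s)`, this schedule yields 500ms, 1s, 2s, 4s, 8s.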
Rate limiting uses a token bucket algorithm via the `governor` crate.
Errors are modeled as a custom enum built with `thiserror`.
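For illustration, a `thiserror` derive expands to `Display` and `std::error::Error` impls roughly like this hand-written version, shown here for two of the variants (the format strings are assumptions, not the crate's actual messages):

```rust
use std::fmt;

// What a #[derive(thiserror::Error)] on SfError roughly expands to,
// written by hand for two variants.
#[derive(Debug)]
enum SfError {
    Api { status: u16, body: String },
    RateLimit { retry_after: Option<u64> },
}

impl fmt::Display for SfError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SfError::Api { status, body } => {
                write!(f, "Salesforce API error ({status}): {body}")
            }
            SfError::RateLimit { retry_after } => match retry_after {
                Some(secs) => write!(f, "rate limit exceeded, retry after {secs}s"),
                None => write!(f, "rate limit exceeded"),
            },
        }
    }
}

impl std::error::Error for SfError {}
```

Because the enum implements `std::error::Error`, it composes with `?`, `Box<dyn Error>`, and error-reporting crates out of the box.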
The crate achieves complete memory safety with no `unsafe` blocks.
Contributions are welcome. Please follow Rust best practices.
Run tests:

```shell
cargo test --lib
```

Run a specific test:

```shell
cargo test --lib test_name
```

Run benchmarks:

```shell
cargo bench
```
Dual-licensed; you may choose either license for your purposes.
Production dependencies include `reqwest`, `serde`, `serde_json`, `tokio`, `governor`, and `thiserror`; see `Cargo.toml` for the full list and dev dependencies.
For issues, questions, or contributions, please refer to the repository.
Built with modern Rust best practices and inspired by enterprise API client patterns.