| Crates.io | carbem |
| lib.rs | carbem |
| version | 0.5.0 |
| created_at | 2025-10-04 14:25:25.383127+00 |
| updated_at | 2025-12-10 15:54:29.555254+00 |
| description | A Rust library for retrieving carbon emission values from cloud providers |
| homepage | |
| repository | https://github.com/jonperron/carbem |
| max_upload_size | |
| id | 1867957 |
| size | 186,565 |
A Rust library for retrieving carbon emission values from cloud providers.
Carbem provides a unified interface for querying carbon emission data from various cloud service providers. This library helps developers build more environmentally conscious applications by making it easy to access and analyze the carbon footprint of cloud infrastructure.
Add this to your Cargo.toml:
[dependencies]
carbem = "0.5.0"
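The async examples below also use tokio and chrono. If you follow them, a fuller dependencies section might look like this (version numbers are illustrative, not pinned by carbem):
[dependencies]
carbem = "0.5.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
chrono = "0.4"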
Install from PyPI:
pip install carbem-python
For development setup with maturin:
pip install maturin
maturin develop
For standalone Rust applications, use the builder pattern with environment variables:
use carbem::{CarbemClient, EmissionQuery, TimePeriod};
use chrono::{Utc, Duration};
#[tokio::main]
async fn main() -> carbem::Result<()> {
    // Configure client from environment variables
    let client = CarbemClient::new()
        .with_azure_from_env()?;

    // Create a query
    let query = EmissionQuery {
        provider: "azure".to_string(),
        regions: vec!["subscription-id".to_string()],
        time_period: TimePeriod {
            start: Utc::now() - Duration::days(30),
            end: Utc::now(),
        },
        services: Some(vec!["compute".to_string(), "storage".to_string()]),
        resources: None,
    };

    let emissions = client.query_emissions(&query).await?;
    for emission in emissions {
        println!("Service: {}, Emissions: {} kg CO2eq",
            emission.service.unwrap_or_default(),
            emission.emissions_kg_co2eq);
    }
    Ok(())
}
For Python applications, use the get_emissions_py function:
import carbem
import json
from datetime import datetime, timedelta
# Azure configuration
config = json.dumps({
    "access_token": "your-azure-bearer-token"
})

# Query for last 30 days
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=30)

query = json.dumps({
    "start_date": start_date.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "end_date": end_date.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "regions": ["your-subscription-id"],
})
# Get emissions data
result = carbem.get_emissions_py("azure", config, query)
emissions = json.loads(result)
print(f"Found {len(emissions)} emission records")
Create a .env file in your project root:
# Azure Carbon Emissions Configuration
CARBEM_AZURE_ACCESS_TOKEN=your_azure_bearer_token_here
# OR alternatively use:
# AZURE_TOKEN=your_azure_bearer_token_here
CARBEM_AZURE_ACCESS_TOKEN: Azure access token
AZURE_TOKEN: Alternative Azure access token variable
For Python applications, configuration is passed as JSON strings to the get_emissions_py function. See the Python API Documentation for detailed configuration examples and usage patterns.
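If you want to load the .env file yourself before building the client, a minimal sketch follows. The dotenvy crate is an assumption here, not a carbem dependency; carbem only needs the variables to be present in the process environment:
use carbem::CarbemClient;

#[tokio::main]
async fn main() -> carbem::Result<()> {
    // Assumption: dotenvy loads .env into the process environment.
    dotenvy::dotenv().ok();

    // Picks up CARBEM_AZURE_ACCESS_TOKEN (or AZURE_TOKEN) as described above.
    let _client = CarbemClient::new()
        .with_azure_from_env()?;
    Ok(())
}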
The Azure provider requires minimal configuration:
use carbem::AzureConfig;
let config = AzureConfig {
    access_token: "your-bearer-token".to_string(),
};
use carbem::{CarbemClient, AzureConfig, EmissionQuery, TimePeriod};
use chrono::{Utc, Duration};
#[tokio::main]
async fn main() -> carbem::Result<()> {
    // Create a client and configure the Azure provider
    let config = AzureConfig {
        access_token: "your-bearer-token".to_string(),
    };
    let client = CarbemClient::new()
        .with_azure(config)?;

    // Query carbon emissions for the last 30 days
    let query = EmissionQuery {
        provider: "azure".to_string(),
        regions: vec!["subscription-id".to_string()], // Use your subscription IDs
        time_period: TimePeriod {
            start: Utc::now() - Duration::days(30),
            end: Utc::now(),
        },
        services: None,
        resources: None,
    };

    let emissions = client.query_emissions(&query).await?;
    for emission in emissions {
        println!("Date: {}, Region: {}, Emissions: {} kg CO2eq",
            emission.time_period.start.format("%Y-%m-%d"),
            emission.region,
            emission.emissions_kg_co2eq);
    }
    Ok(())
}
Google Cloud Platform is not supported at the moment (October 11th, 2025). Carbon footprint data only becomes available after exporting it to BigQuery, as discussed in this page, so retrieving it requires querying the BigQuery API, which makes a standard implementation impossible for now.
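For readers who want to take the BigQuery route themselves, the sketch below shows roughly what reading such an export could look like. The project, dataset, and field names are assumptions based on Google's published Carbon Footprint export schema and should be verified against your own export; none of this is part of carbem:
// Hypothetical SQL against a Carbon Footprint export table.
// `my_project.my_dataset` is a placeholder; verify the field names
// against the schema of your own export.
const CARBON_EXPORT_QUERY: &str = r#"
    SELECT
        usage_month,
        project.id AS project_id,
        service.description AS service,
        location.region AS region,
        carbon_footprint_total_kgCO2e.location_based AS kg_co2e
    FROM `my_project.my_dataset.carbon_footprint`
    WHERE usage_month >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH)
"#;

fn main() {
    // Run this through a BigQuery client of your choice;
    // carbem does not execute BigQuery queries.
    println!("{CARBON_EXPORT_QUERY}");
}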
AWS is not supported at the moment (October 11th, 2025). Data is only made available as exports to S3 buckets, as discussed in this page. A dedicated endpoint existed but was discontinued on July 23rd, 2025 (ref).
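Until then, the exported objects can be fetched directly from S3. A minimal sketch with the official aws-sdk-s3 crate follows; the bucket name and object key are hypothetical placeholders that depend on how the export is configured in your account, and carbem does not provide this:
use aws_sdk_s3::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Standard AWS configuration from the environment (region, credentials).
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    // Placeholder bucket/key: the real location depends on your export setup.
    let object = client
        .get_object()
        .bucket("my-carbon-exports")
        .key("carbon-emissions/2025/report.csv")
        .send()
        .await?;

    let bytes = object.body.collect().await?.into_bytes();
    println!("Fetched {} bytes of carbon report data", bytes.len());
    Ok(())
}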
The library includes a comprehensive test suite:
# Run all tests
cargo test
# Run specific Azure provider tests
cargo test providers::azure
# Run with output
cargo test -- --nocapture
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0).
This project aims to support sustainability efforts in cloud computing by making carbon emission data more accessible to developers and organizations.