| Crates.io | kalax |
| lib.rs | kalax |
| version | 0.1.1 |
| created_at | 2026-01-12 11:24:34.829895+00 |
| updated_at | 2026-01-12 11:24:34.829895+00 |
| description | High-performance time series feature extraction library |
| homepage | https://github.com/lucidfrontier45/kalax |
| repository | https://github.com/lucidfrontier45/kalax |
| max_upload_size | |
| id | 2037549 |
| size | 36,792 |
Kalax is a high-performance Rust library for Time Series Feature Extraction, designed to extract meaningful statistical and structural features from time series data.
Kalax is a portmanteau of two Sanskrit concepts, representing the essence of time-series analysis; the terminal "x" stands for Extraction. Together, Kalax signifies the "Signs of Time," reflecting our mission to distill raw temporal data into meaningful, high-performance features using Rust.
Kalax works directly on &[f64] slices with parallel processing.
Add Kalax to your Cargo.toml:
[dependencies]
kalax = "0.1.0"
Kalax provides two API styles: functional and object-oriented.
Simple function calls for individual features:
use kalax::features::minimal::{mean, variance, standard_deviation};
let time_series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let mean_value = mean(&time_series);
let variance_value = variance(&time_series);
let std_dev = standard_deviation(&time_series);
println!("Mean: {}", mean_value); // 3.0
println!("Variance: {}", variance_value); // 2.0
println!("Std Dev: {}", std_dev); // ~1.414
Use the FeatureFunction trait for more structured code:
use kalax::features::{minimal::Mean, common::FeatureFunction};
let time_series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let result = Mean::DEFAULT.apply(&time_series);
println!("{}: {}", result[0].name, result[0].value); // mean: 3.0
Use MinimalFeatureSet to extract all supported features at once:
use kalax::features::{minimal::MinimalFeatureSet, common::FeatureFunction};
let time_series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let features = MinimalFeatureSet::new().apply(&time_series);
for feature in features {
println!("{}: {}", feature.name, feature.value);
}
// Output: absolute_maximum, mean, median, variance, standard_deviation,
// length, maximum, minimum, root_mean_square, sum_values
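If a flat lookup is more convenient than a list, the features can be collected into a map. This is a minimal sketch, assuming the name/value fields shown above are a string and an f64:
use std::collections::HashMap;
use kalax::features::{minimal::MinimalFeatureSet, common::FeatureFunction};
let time_series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
// Collect the extracted features into a name -> value lookup table
// (assumes each feature exposes the name/value fields shown above).
let feature_map: HashMap<String, f64> = MinimalFeatureSet::new()
    .apply(&time_series)
    .into_iter()
    .map(|f| (f.name.to_string(), f.value))
    .collect();
println!("mean = {}", feature_map["mean"]);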
Process multiple time series efficiently using the extractor:
use std::collections::HashMap;
use kalax::extract_features;
// Prepare data as a vector of HashMaps (column name -> time series values)
let data = vec![
HashMap::from([
("sensor1".to_string(), vec![1.0, 2.0, 3.0]),
("sensor2".to_string(), vec![4.0, 5.0, 6.0]),
]),
HashMap::from([
("sensor1".to_string(), vec![7.0, 8.0, 9.0]),
("sensor2".to_string(), vec![10.0, 11.0, 12.0]),
]),
];
// Extract features in parallel
let results = extract_features(&data);
// results[0]["sensor1"] contains features for sensor1 from the first series
// results[1]["sensor2"] contains features for sensor2 from the second series
All features are available through both the functional and OOP APIs.
Import and call functions directly:
use kalax::features::minimal::{mean, median, variance};
let series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let m = mean(&series);
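As an illustration, several functional calls can be bundled into a small helper of your own (summarize below is not part of Kalax, and f64 return values are assumed):
use kalax::features::minimal::{mean, median, standard_deviation, variance};
// Illustrative helper (not part of Kalax): compute a few named statistics
// for one series using the functional API.
fn summarize(series: &[f64]) -> Vec<(&'static str, f64)> {
    vec![
        ("mean", mean(series)),
        ("median", median(series)),
        ("variance", variance(series)),
        ("standard_deviation", standard_deviation(series)),
    ]
}
for (name, value) in summarize(&[1.0, 2.0, 3.0, 4.0, 5.0]) {
    println!("{}: {}", name, value);
}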
Use feature structs and the FeatureFunction trait:
use kalax::features::{minimal::Mean, common::FeatureFunction};
let series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let result = Mean::DEFAULT.apply(&series);
Kalax is designed for high-performance time series analysis: it operates directly on &[f64] slices without copying data.
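For example, a feature can be computed over borrowed windows of a larger buffer; the sketch below uses the functional mean from above, and the chunk size is arbitrary:
use kalax::features::minimal::mean;
// Each chunk is a borrowed &[f64] view into `readings`; nothing is cloned.
let readings: Vec<f64> = (0..100).map(|i| i as f64).collect();
for window in readings.chunks(10) {
    println!("window mean = {}", mean(window));
}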
Run the test suite:
# Run all tests
cargo test
# Run specific test
cargo test test_minimal_extractor
# Show test output
cargo test -- --nocapture
cargo build # Debug build
cargo build --release # Release build
cargo check # Quick compilation check
cargo clippy # Run linter
cargo fmt # Format code
cargo clippy --fix # Auto-fix linter warnings
cargo doc # Generate documentation
cargo doc --open # Generate and open in browser
Kalax provides two API styles, functional and object-oriented, to accommodate different use cases.
Both APIs provide identical performance; the choice is primarily about code organization and developer preference.
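As a quick sanity check, both styles can be compared on the same input. The sketch below reuses the mean APIs shown earlier and assumes both compute the identical statistic:
use kalax::features::{minimal::{mean, Mean}, common::FeatureFunction};
let series = vec![1.0, 2.0, 3.0, 4.0, 5.0];
// Functional style
let functional = mean(&series);
// Object-oriented style (first entry of the returned feature list)
let object_oriented = Mean::DEFAULT.apply(&series)[0].value;
assert_eq!(functional, object_oriented);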
This project is licensed under the terms described in the LICENSE file.
Contributions are welcome! Please ensure your code passes cargo clippy.
Kalax provides a subset of features comparable to Python's tsfresh library, with a focus on core statistical features. Key advantages: