| Crates.io | trustformers |
| lib.rs | trustformers |
| version | 0.1.0-alpha.1 |
| created_at | 2025-06-30 12:20:16.173714+00 |
| updated_at | 2025-11-09 10:53:21.074006+00 |
| description | TrustformeRS - Rust port of Hugging Face Transformers |
| homepage | |
| repository | https://github.com/cool-japan/trustformers |
| max_upload_size | |
| id | 1731783 |
| size | 3,893,798 |
Main integration crate providing high-level APIs, pipelines, and Hugging Face Hub integration for the TrustformeRS ecosystem.
This crate serves as the primary entry point for users, offering Hugging Face-compatible APIs for common NLP tasks. It includes comprehensive pipeline implementations, auto model classes, and seamless integration with the Hugging Face Model Hub.
Complete implementations of all major NLP pipelines are included. Every pipeline supports the shared configuration options shown in the `PipelineConfig` example below (device, batch size, maximum length, and thread count), and an appropriate default model is selected automatically for each task:
```rust
use trustformers::pipeline;

// Text classification
let classifier = pipeline("sentiment-analysis")?;
let results = classifier("I love using Rust for ML!")?;
println!("Label: {}, Score: {}", results[0].label, results[0].score);

// Text generation
let generator = pipeline("text-generation")?;
let output = generator("Once upon a time")?;
println!("Generated: {}", output[0].generated_text);

// Question answering
let qa = pipeline("question-answering")?;
let answer = qa(
    "What is Rust?",
    "Rust is a systems programming language focused on safety.",
)?;
println!("Answer: {}", answer.answer);
```
Auto classes resolve the right model and tokenizer from a model name:

```rust
use trustformers::{
    AutoModel, AutoTokenizer,
    AutoModelForSequenceClassification,
};

// Load model and tokenizer automatically
let model_name = "bert-base-uncased";
let tokenizer = AutoTokenizer::from_pretrained(model_name)?;
let model = AutoModelForSequenceClassification::from_pretrained(model_name)?;

// Use for inference
let inputs = tokenizer.encode("Hello, world!", None)?;
let outputs = model.forward(&inputs)?;
```
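The shape and type of `outputs` depend on the model head. Purely for illustration, here is a minimal, self-contained sketch (plain `std`, no crate APIs assumed) of how a sequence-classification head's raw logits can be turned into probabilities and a predicted class with a numerically stable softmax:

```rust
/// Convert raw classification logits into a probability distribution
/// (stable softmax: subtract the max logit before exponentiating).
fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    // Example logits for a two-class (negative/positive) head.
    let logits = vec![-1.2f32, 2.3];
    let probs = softmax(&logits);
    // The argmax over probabilities is the predicted class index.
    let (best, score) = probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();
    println!("class {best} with probability {score:.3}");
}
```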
Models can be downloaded directly from the Hugging Face Hub:

```rust
use trustformers::hub::{Hub, HubConfig};

// Configure hub access
let config = HubConfig {
    token: Some("your_token".to_string()),
    cache_dir: Some("/path/to/cache".to_string()),
    ..Default::default()
};
let hub = Hub::new(config)?;

// Download model with progress
let model_path = hub.download_model(
    "meta-llama/Llama-2-7b-hf",
    Some("main"), // revision
)?;
```
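Hardcoding tokens is best avoided. A small variant of the same `HubConfig` reads the token from the environment instead; the `HF_TOKEN` variable name is a convention assumed for this sketch, not something the crate mandates:

```rust
use trustformers::hub::{Hub, HubConfig};

// Pick up the access token from the environment rather than the source.
// `HF_TOKEN` is an assumed, conventional variable name.
let config = HubConfig {
    token: std::env::var("HF_TOKEN").ok(),
    ..Default::default()
};
let hub = Hub::new(config)?;
```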
```text
trustformers/
├── src/
│   ├── pipelines/      # Pipeline implementations
│   │   ├── text_classification.rs
│   │   ├── text_generation.rs
│   │   ├── token_classification.rs
│   │   └── ...
│   ├── auto/           # Auto classes
│   │   ├── model.rs
│   │   ├── tokenizer.rs
│   │   └── config.rs
│   ├── hub/            # Hub integration
│   │   ├── download.rs
│   │   ├── cache.rs
│   │   └── auth.rs
│   ├── generation/     # Generation strategies
│   │   ├── sampling.rs
│   │   ├── beam_search.rs
│   │   └── streaming.rs
│   └── utils/          # Utilities
```
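The `generation` module covers sampling, beam search, and streaming. As a rough, crate-independent illustration of what temperature scaling plus top-k sampling (the strategy `sampling.rs` is named for) computes, here is a minimal sketch; the function name and the use of the `rand` crate are assumptions of this example, not the crate's API:

```rust
use rand::Rng;

/// Sample a token id from `logits` with temperature scaling (temperature
/// must be > 0) and top-k filtering. Illustrative sketch only.
fn sample_top_k(logits: &[f32], temperature: f32, k: usize) -> usize {
    // Scale logits by temperature, then keep the k most likely tokens.
    let mut indexed: Vec<(usize, f32)> = logits
        .iter()
        .map(|&l| l / temperature)
        .enumerate()
        .collect();
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed.truncate(k.max(1));

    // Softmax over the surviving logits (stable: subtract the max).
    let max = indexed[0].1;
    let weights: Vec<f32> = indexed.iter().map(|&(_, l)| (l - max).exp()).collect();
    let total: f32 = weights.iter().sum();

    // Draw one token from the renormalized distribution.
    let mut r = rand::thread_rng().gen::<f32>() * total;
    for (&(id, _), &w) in indexed.iter().zip(&weights) {
        if r < w {
            return id;
        }
        r -= w;
    }
    indexed[0].0 // fallback for floating-point edge cases
}
```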
Pipelines can also be constructed with explicit configuration:

```rust
use trustformers::{pipeline_with_config, PipelineConfig};

let config = PipelineConfig {
    device: "cuda:0".to_string(),
    batch_size: 32,
    max_length: 512,
    num_threads: 4,
    ..Default::default()
};
let pipeline = pipeline_with_config("text-generation", config)?;
```
| Pipeline | Model | Batch Size | Throughput |
|---|---|---|---|
| Text Classification | BERT-base | 32 | 850 samples/s |
| Text Generation | GPT-2 | 1 | 45 tokens/s |
| Question Answering | BERT-base | 16 | 320 QA pairs/s |
| Token Classification | BERT-base | 32 | 750 samples/s |
Benchmarks measured on an NVIDIA RTX 4090.
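Throughput numbers like these can be reproduced with nothing more than `std::time::Instant`; in the sketch below, the `run_batch` closure is a hypothetical stand-in for one pipeline call on a batch of `batch_size` inputs:

```rust
use std::time::Instant;

// Time `iters` batched calls and report samples per second.
fn throughput(batch_size: usize, iters: usize, mut run_batch: impl FnMut()) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        run_batch();
    }
    let secs = start.elapsed().as_secs_f64();
    (batch_size * iters) as f64 / secs
}
```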
The library supports all models implemented in the trustformers-models crate.
Licensed under MIT OR Apache-2.0.