| Crates.io | liquid-edge |
| lib.rs | liquid-edge |
| version | 0.2.3 |
| created_at | 2025-08-03 11:11:53.058909+00 |
| updated_at | 2025-09-07 22:57:22.934251+00 |
| description | Minimal edge inference runtime for LLMs |
| homepage | |
| repository | https://github.com/liquidos-ai/AutoAgents |
| max_upload_size | |
| id | 1779618 |
| size | 77,496 |
Liquid Edge is a production-ready inference runtime designed for edge computing environments. It provides high-performance LLM inference with support for multiple backends, comprehensive tokenization, and optimized memory management.
- 🎯 Multiple Backends: ONNX Runtime support, with more backends planned
- ⚡ High Performance: Optimized for edge devices with minimal latency
- 🔄 Async Support: Full async/await support for non-blocking inference (see the sketch after this list)
- 💬 Chat Templates: Jinja2 template support for conversational AI
- 📊 Comprehensive Monitoring: Built-in metrics and performance tracking
- 🛡️ Production Ready: Robust error handling and logging
- 🔧 Flexible Configuration: Feature flags for customized builds
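The feature list above suggests an async load-and-generate flow. The sketch below is purely illustrative, assuming hypothetical names (`OnnxRuntime::load`, `generate`) rather than the crate's documented API; consult the crate docs for the actual types:

```rust
// Illustrative sketch only: `OnnxRuntime`, `load`, and `generate` are
// assumed names for this example, not liquid-edge's confirmed public API.
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Point the runtime at an exported ONNX model directory (hypothetical API).
    let runtime = liquid_edge::OnnxRuntime::load("models/my-model")?;

    // Non-blocking generation via async/await (hypothetical API).
    let reply = runtime.generate("What can you do?").await?;
    println!("{reply}");

    Ok(())
}
```

The async boundary matters on edge devices: generation can take seconds, and a non-blocking API keeps the rest of the application responsive in the meantime.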
pipx install "optimum[onnxruntime]"
optimum-cli export onnx \ ✔ │ 7s │ rust-onnx-chat
--model squeeze-ai-lab/TinyAgent-1.1B \
--task text-generation ./models/tinyagen
The export produces a model directory laid out like this:

```text
models/my-model/
├── model.onnx                 # ONNX model file
├── tokenizer.json             # HuggingFace tokenizer
├── config.json                # Model configuration
├── tokenizer_config.json      # Tokenizer configuration
├── special_tokens_map.json    # Special tokens mapping
└── chat_template.jinja        # Chat template (optional)
```
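The tokenizer files are standard HuggingFace artifacts, so they can be sanity-checked independently of the runtime. For example, with the `tokenizers` crate (the path below assumes the layout shown above):

```rust
use tokenizers::Tokenizer;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the HuggingFace tokenizer that ships alongside the ONNX model.
    let tokenizer = Tokenizer::from_file("models/my-model/tokenizer.json")?;

    // Encode a prompt into token IDs, as the runtime would before inference.
    let encoding = tokenizer.encode("Hello, world!", true)?;
    println!("{:?}", encoding.get_ids());

    Ok(())
}
```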