| Crates.io | mecha10-nodes-image-classifier |
| lib.rs | mecha10-nodes-image-classifier |
| version | 0.1.46 |
| created_at | 2025-11-24 19:28:59.866914+00 |
| updated_at | 2026-01-22 18:30:09.003819+00 |
| description | Image classification node using ONNX models from catalog |
| homepage | |
| repository | https://github.com/mecha-industries/mecha10 |
| max_upload_size | |
| id | 1948544 |
| size | 114,007 |
Hardware-agnostic image classification using ONNX models from the model catalog.
Topics:

- **Control:** `/vision/classification/control` (`ClassificationControl` - triggers)
- **Input:** `/robot/sensors/camera/rgb` (`CameraImage` - processed when enabled)
- **Output:** `/vision/classification` (`Classification` - results)
Hardware sources (all compatible!):

- `simulation-bridge` publishes camera data
- `camera-driver` publishes camera data
- `fake-camera` publishes synthetic data

The node starts in IDLE mode to conserve CPU. Trigger classification via the control topic:
Classify one frame only - perfect for button clicks:

```bash
redis-cli PUBLISH "/vision/classification/control" '{"command":"single_shot"}'
```
Start continuous classification at the configured rate:

```bash
# Start at default rate (from config)
redis-cli PUBLISH "/vision/classification/control" '{"command":"start"}'

# Start at a custom rate
redis-cli PUBLISH "/vision/classification/control" '{"command":"start","rate_hz":5}'

# Stop continuous mode
redis-cli PUBLISH "/vision/classification/control" '{"command":"stop"}'
```
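The same control messages can be built programmatically. A minimal Python sketch, assuming only the command names and payload shape shown in the examples above (the `make_control_message` helper is illustrative, not part of the crate):

```python
import json

# Command names taken from the redis-cli examples above
VALID_COMMANDS = {"single_shot", "start", "stop"}

def make_control_message(command, rate_hz=None):
    """Build a JSON payload for /vision/classification/control."""
    if command not in VALID_COMMANDS:
        raise ValueError(f"unknown command: {command}")
    payload = {"command": command}
    if rate_hz is not None:
        payload["rate_hz"] = rate_hz  # only meaningful with "start"
    return json.dumps(payload)

# With redis-py installed, publishing would look like (not executed here):
# import redis
# redis.Redis().publish("/vision/classification/control",
#                       make_control_message("start", rate_hz=5))
```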
```rust
// One-shot classification
ctx.publish("/vision/classification/control", ClassificationControl::SingleShot).await?;

// Wait for the result
let result = ctx.receive::<Classification>("/vision/classification").await?;
if result.confidence > 0.8 {
    // High confidence - take action
}
```
Clicking the "Classify" button sends a `single_shot` command and displays the result.
File: `config/image_classifier.toml`

```toml
[model]
name = "mobilenet-v2"  # From catalog

[input]
topic = "/robot/sensors/camera/rgb"
processing_rate_hz = 10  # Process 10 frames/second

[output]
topic = "/vision/classification"
top_k = 5  # Return top 5 predictions
```
```bash
# Download recommended classification model
mecha10 models pull mobilenet-v2

# Or use setup to get all recommended models
mecha10 setup
```
```bash
# Standalone
cargo run -p mecha10-nodes-image-classifier

# Or via mecha10 CLI
mecha10 dev --node image-classifier

# Watch classification results
mecha10 topics watch /vision/classification
```
Input message (`CameraImage`) on `/robot/sensors/camera/rgb`:

```json
{
  "width": 640,
  "height": 480,
  "format": "jpeg",
  "encoding": "base64",
  "data": "base64_encoded_image_data...",
  "timestamp": 1699459200000
}
```
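The `data` field carries the image bytes base64-encoded, so consumers decode it before handing it to an image decoder. A minimal Python sketch, assuming only the field names shown in the sample message above (the JPEG payload here is a stand-in):

```python
import base64

# A stand-in JPEG payload (real messages carry a full encoded image;
# 0xFFD8 is the JPEG start-of-image marker)
fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16

msg = {
    "width": 640,
    "height": 480,
    "format": "jpeg",
    "encoding": "base64",
    "data": base64.b64encode(fake_jpeg).decode("ascii"),
    "timestamp": 1699459200000,
}

assert msg["encoding"] == "base64"
image_bytes = base64.b64decode(msg["data"])
# image_bytes now holds the raw JPEG, ready for any image decoder
```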
Output message (`Classification`) on `/vision/classification`:

```json
{
  "class": "golden_retriever",
  "class_id": 207,
  "confidence": 0.92,
  "top_k": [
    { "class": "golden_retriever", "class_id": 207, "confidence": 0.92 },
    { "class": "Labrador_retriever", "class_id": 208, "confidence": 0.05 },
    { "class": "cocker_spaniel", "class_id": 219, "confidence": 0.02 },
    { "class": "Irish_setter", "class_id": 172, "confidence": 0.01 },
    { "class": "English_setter", "class_id": 212, "confidence": 0.003 }
  ],
  "timestamp": 1699459200000
}
```
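Consumers typically gate on `confidence` before acting, as the Rust snippet earlier does. A Python sketch of parsing this message and applying a threshold (the `best_above` helper is illustrative; the JSON mirrors the sample output above):

```python
import json

# Abbreviated copy of the sample output message above
classification_json = """{
  "class": "golden_retriever",
  "class_id": 207,
  "confidence": 0.92,
  "top_k": [
    {"class": "golden_retriever", "class_id": 207, "confidence": 0.92},
    {"class": "Labrador_retriever", "class_id": 208, "confidence": 0.05}
  ],
  "timestamp": 1699459200000
}"""

def best_above(msg, threshold):
    """Return the top class name if its confidence clears the threshold, else None."""
    return msg["class"] if msg["confidence"] > threshold else None

msg = json.loads(classification_json)
label = best_above(msg, 0.8)  # "golden_retriever" for the sample message
```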
Supported models from the catalog:

- `mobilenet-v2` - fast, 2.5 MB (recommended)
- `resnet18` - more accurate, 44 MB

A custom model file can also be specified via `model.path`.
```rust
// Subscribe to classifications in your behavior tree
ctx.subscribe("/vision/classification").await?;

// Use classifications for decision making
if classification.class == "person" && classification.confidence > 0.8 {
    // Execute person-following behavior
}
```
Build:

```bash
cargo build -p mecha10-nodes-image-classifier
```

Test:

```bash
cargo test -p mecha10-nodes-image-classifier
```