| Crates.io | edgefirst-cli |
| lib.rs | edgefirst-cli |
| version | 2.8.0 |
| created_at | 2025-10-16 03:36:53.727511+00 |
| updated_at | 2026-01-04 22:03:14.950091+00 |
| description | EdgeFirst Client Library and CLI |
| homepage | https://edgefirst.ai |
| repository | https://github.com/EdgeFirstAI/client |
| max_upload_size | |
| id | 1885342 |
| size | 486,505 |
EdgeFirst Studio Client is the official command-line application and library for EdgeFirst Studio - the MLOps platform for 3D visual and 4D spatial perception AI. Available for Rust, Python, Android (Kotlin), and iOS/macOS (Swift). Automate dataset management, annotation workflows, model training, validation, and deployment for off-road vehicles, robotics, construction equipment, and industrial applications.
EdgeFirst Client provides seamless programmatic access to EdgeFirst Studio's comprehensive MLOps capabilities. Whether you're integrating Studio into your CI/CD pipeline, building custom training workflows, or automating data processing systems, EdgeFirst Client delivers the production-grade reliability you need.
Trusted by EdgeFirst Studio: This client library powers EdgeFirst Studio's internal training and validation services, providing a battle-tested foundation for production workloads.
cargo install edgefirst-cli
pip install edgefirst-client
Download the SDK packages from GitHub Releases:
- edgefirst-client-android-{version}.zip - Kotlin bindings with JNI libraries
- edgefirst-client-swift-{version}.zip - Swift bindings with XCFramework

See platform-specific documentation for integration instructions:
git clone https://github.com/EdgeFirstAI/edgefirst-client
cd edgefirst-client
cargo build --release
# Login (stores token locally for 7 days)
edgefirst-client login
# View your organization info
edgefirst-client organization
# Use environment variables (recommended for CI/CD)
export STUDIO_TOKEN="your-token"
edgefirst-client organization
# List projects and datasets
edgefirst-client projects
edgefirst-client datasets --project-id <PROJECT_ID>
# Download dataset with images
edgefirst-client download-dataset <DATASET_ID> --types image --output ./data
# Download annotations in Arrow format (EdgeFirst Dataset Format)
edgefirst-client download-annotations <ANNOTATION_SET_ID> \
--types box2d,box3d,segmentation \
--output annotations.arrow
# Upload samples to dataset
edgefirst-client upload-dataset <DATASET_ID> \
--annotations annotations.arrow \
--annotation-set-id <ANNOTATION_SET_ID> \
--images ./images/
For complete upload format specifications, see EdgeFirst Dataset Format.
# List training experiments
edgefirst-client experiments --project-id <PROJECT_ID>
# Monitor training sessions
edgefirst-client training-sessions --experiment-id <EXP_ID>
# Get training session details with artifacts
edgefirst-client training-session <SESSION_ID> --artifacts
# Download trained model
edgefirst-client download-artifact <SESSION_ID> modelpack.onnx --output ./models/
Snapshots preserve complete copies of sensor data, datasets, or directories for versioning and backup. Restore them with optional automatic annotation (AGTG) and depth map generation.
# List all snapshots
edgefirst-client snapshots
# Create snapshot from MCAP file
edgefirst-client create-snapshot <DATASET_ID> recording.mcap
# Create snapshot from directory
edgefirst-client create-snapshot <DATASET_ID> ./sensor_data/
# Download snapshot
edgefirst-client download-snapshot <SNAPSHOT_ID> ./snapshot_backup/
# Restore snapshot to new dataset
edgefirst-client restore-snapshot <SNAPSHOT_ID>
# Restore with automatic annotation (AGTG)
edgefirst-client restore-snapshot <SNAPSHOT_ID> --autolabel
# Restore with AGTG and depth map generation
edgefirst-client restore-snapshot <SNAPSHOT_ID> --autolabel --autodepth
# Delete snapshot
edgefirst-client delete-snapshot <SNAPSHOT_ID>
For detailed snapshot documentation, see the EdgeFirst Studio Snapshots Guide.
EdgeFirst Client provides tools for working with the EdgeFirst Dataset Format - an Arrow-based format optimized for 3D perception AI workflows.
The create-snapshot command intelligently handles multiple input types:
- Folder of images: generates a dataset.arrow manifest and dataset.zip, then uploads
- Arrow manifest: auto-discovers the matching dataset.zip or dataset/ folder for images

1. Simple folder of images (CLI handles conversion automatically):
my_images/
├── image001.jpg
├── image002.jpg
└── image003.png
2. Sequence-based dataset (video frames with temporal ordering):
my_dataset.arrow      # Annotation manifest
my_dataset/           # Sensor container (or my_dataset.zip)
└── sequence_name/
    ├── sequence_name_001.camera.jpeg
    ├── sequence_name_002.camera.jpeg
    └── sequence_name_003.camera.jpeg
3. Mixed dataset (sequences + standalone images):
my_dataset.arrow
my_dataset/
├── video_sequence/
│   └── video_sequence_*.camera.jpeg
├── standalone_image1.jpg
└── standalone_image2.png
# Upload a folder of images (auto-generates Arrow manifest and ZIP)
edgefirst-client create-snapshot ./my_images/
# Upload using existing Arrow manifest (auto-discovers dataset.zip or dataset/)
edgefirst-client create-snapshot ./my_dataset/my_dataset.arrow
# Upload complete dataset directory
edgefirst-client create-snapshot ./my_dataset/
# Create snapshot from server-side dataset (with default annotation set)
edgefirst-client create-snapshot ds-12345
# Create snapshot from server-side dataset with specific annotation set
edgefirst-client create-snapshot ds-12345 --annotation-set as-67890
# Monitor server-side snapshot creation progress
edgefirst-client create-snapshot ds-12345 --monitor
# Generate Arrow manifest from images (without uploading)
edgefirst-client generate-arrow ./images --output dataset.arrow
# Generate with sequence detection for video frames
edgefirst-client generate-arrow ./frames -o video.arrow --detect-sequences
# Validate dataset structure before upload
edgefirst-client validate-snapshot ./my_dataset
edgefirst-client validate-snapshot ./my_dataset --verbose
The --detect-sequences flag enables automatic detection of video frame sequences based on filename patterns. When enabled, the CLI parses filenames to identify temporal ordering.
How it works:
- Filenames are matched against the {name}_{frame}.{ext} pattern (e.g., video_001.jpg, camera_042.png)

Detection behavior:
| Input | --detect-sequences OFF | --detect-sequences ON |
|---|---|---|
| image.jpg | name=image, frame=null | name=image, frame=null |
| seq_001.jpg | name=seq_001, frame=null | name=seq, frame=1 |
| camera_042.camera.jpeg | name=camera_042, frame=null | name=camera, frame=42 |
| video/video_100.jpg | name=video_100, frame=null | name=video, frame=100 |
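The pattern rule in the table above can be sketched as a small Python function. This is a hypothetical reimplementation for illustration, not the CLI's actual parsing code; note that multi-part extensions such as .camera.jpeg are stripped before matching:

```python
import re
from pathlib import Path

# {name}_{frame} with a purely numeric frame suffix
SEQ_RE = re.compile(r"^(?P<name>.+)_(?P<frame>\d+)$")

def parse_frame(path, detect_sequences=True):
    """Split a filename into (sequence name, frame number or None)."""
    # Drop every extension, including multi-part ones like .camera.jpeg
    stem = Path(path).name.split(".")[0]
    if detect_sequences:
        m = SEQ_RE.match(stem)
        if m:
            return m.group("name"), int(m.group("frame"))
    return stem, None
```

Running it on the table's inputs reproduces both columns, and it also shows the false-positive case discussed below: a file like sample_2024.png parses as name=sample, frame=2024.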
Supported structures:
- sequence_name/sequence_name_001.jpg (frames in subdirectories)
- sequence_name_001.jpg (frames at root level)

⚠️ False positive considerations:
Files with names like model_v2.jpg or sample_2024.png may be incorrectly detected as sequences when --detect-sequences is enabled. If your dataset contains non-sequence files with _number suffixes, consider:
- Renaming files to avoid the _N pattern (e.g., model-v2.jpg)
- Omitting --detect-sequences and manually organizing sequences into subdirectories

Supported file types:

Images: .jpg, .jpeg, .png, .camera.jpeg, .camera.png
Point Clouds: .lidar.pcd (LiDAR), .radar.pcd (Radar)
Depth Maps: .depth.png (16-bit PNG)
Radar Cubes: .radar.png (16-bit PNG with embedded dimension metadata)
See DATASET_FORMAT.md for technical details on radar cube encoding.
The create-snapshot command uploads datasets with or without annotations:
When uploading unannotated datasets, EdgeFirst Studio can populate annotations via:
- restore-snapshot --autolabel (MCAP snapshots only)

Note: The CLI does not currently parse annotations from other formats (e.g., COCO, YOLO). To upload pre-annotated datasets from these formats, first convert them to EdgeFirst Dataset Format using the annotation schema in DATASET_FORMAT.md.
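When converting from COCO-style annotations, the geometric part of the work is usually normalizing pixel boxes. The sketch below uses a placeholder center-format target; the actual EdgeFirst annotation schema is defined in DATASET_FORMAT.md and may differ:

```python
def coco_bbox_to_normalized(bbox, img_w, img_h):
    """Convert a COCO [x, y, width, height] pixel box to normalized
    center-format (cx, cy, w, h) in the 0..1 range.

    The target layout here is a placeholder for illustration; consult
    DATASET_FORMAT.md for the real EdgeFirst annotation schema.
    """
    x, y, w, h = bbox
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)
```

For example, the COCO box [10, 20, 100, 50] in a 200x100 image normalizes to a center at (0.3, 0.45) with size (0.5, 0.5).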
use edgefirst_client::format::{
generate_arrow_from_folder, validate_dataset_structure, ValidationIssue
};
use std::path::PathBuf;
// Generate Arrow manifest from images
let images_dir = PathBuf::from("./images");
let output = PathBuf::from("./dataset.arrow");
let count = generate_arrow_from_folder(&images_dir, &output, true)?;
println!("Generated manifest for {} images", count);
// Validate dataset structure before upload
let issues = validate_dataset_structure(&PathBuf::from("./my_dataset"))?;
for issue in &issues {
match issue {
ValidationIssue::MissingArrowFile { .. } => eprintln!("Error: {}", issue),
ValidationIssue::MissingSensorContainer { .. } => eprintln!("Error: {}", issue),
_ => println!("Warning: {}", issue),
}
}
from pathlib import Path
from edgefirst_client import Client
# Create snapshot from local folder (auto-generates manifest)
client = Client().with_token_path(None)
snapshot = client.create_snapshot("./my_images/")
print(f"Created snapshot: {snapshot.id()}")
# Create snapshot from server-side dataset
result = client.create_snapshot_from_dataset("ds-12345", "My backup")
print(f"Snapshot: {result.id}, Task: {result.task_id}")
# Create snapshot with explicit annotation set
result = client.create_snapshot_from_dataset(
"ds-12345", "Backup with annotations", "as-67890"
)
For complete format specification, see EdgeFirst Dataset Format Documentation or DATASET_FORMAT.md.
use edgefirst_client::{Client, TrainingSessionID};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create client and authenticate
let client = Client::new()?;
let client = client.with_login("email@example.com", "password").await?;
// List projects
let projects = client.projects(None).await?;
for project in projects {
println!("Project: {} ({})", project.name(), project.id());
// List datasets for this project
let datasets = client.datasets(project.id(), None).await?;
for dataset in datasets {
println!(" Dataset: {}", dataset.name());
}
}
// Publish training metrics (used by trainers/validators)
// Note: Replace with your actual training session ID
let session_id = TrainingSessionID::from(12345);
use std::collections::HashMap;
let session = client.training_session(session_id).await?;
let mut metrics = HashMap::new();
metrics.insert("loss".to_string(), 0.123.into());
metrics.insert("accuracy".to_string(), 0.956.into());
session.set_metrics(&client, metrics).await?;
Ok(())
}
from edgefirst_client import Client
# Create client and authenticate
client = Client()
client = client.with_login("email@example.com", "password")
# List projects and datasets
projects = client.projects()
for project in projects:
print(f"Project: {project.name} ({project.id})")
datasets = client.datasets(project.id)
for dataset in datasets:
print(f" Dataset: {dataset.name}")
# Publish validation metrics (used by validators)
# Note: Replace with your actual validation session ID
session = client.validation_session("vs-12345")
metrics = {
"mAP": 0.87,
"precision": 0.92,
"recall": 0.85
}
session.set_metrics(client, metrics)
EdgeFirst Client is a REST API client built with:
This client is the official API gateway for EdgeFirst Studio - the complete MLOps platform for 3D visual and 4D spatial perception AI:
EdgeFirst Studio Features:
Free Tier Available:
EdgeFirst Client works seamlessly with EdgeFirst Modules:
Au-Zone Technologies offers comprehensive support for production deployments:
Contact: support@au-zone.com · Learn more: au-zone.com
Contributions are welcome! Please:
Using AI Coding Agents? See AGENTS.md for project conventions, build commands, and pre-commit requirements.
This project uses SonarCloud for automated code quality analysis. Contributors can download findings and use GitHub Copilot to help fix issues:
python3 sonar.py --branch main --output sonar-issues.json --verbose
See CONTRIBUTING.md for details.
For security vulnerabilities, please use our responsible disclosure process:
See SECURITY.md for complete security policy and best practices.
Licensed under the Apache License 2.0 - see LICENSE for details.
Copyright 2025 Au-Zone Technologies
See NOTICE for third-party software attributions included in binary releases.
Ready to streamline your perception AI workflows?
Try EdgeFirst Studio Free - No credit card required • 100,000 images • 10 hours training/month