| Crates.io | oxirs |
| lib.rs | oxirs |
| version | 0.1.0 |
| created_at | 2025-06-30 11:57:10.678549+00 |
| updated_at | 2026-01-20 22:31:03.079304+00 |
| description | Command-line interface for OxiRS - import, export, migration, and benchmarking tools |
| homepage | https://github.com/cool-japan/oxirs |
| repository | https://github.com/cool-japan/oxirs |
| max_upload_size | |
| id | 1731755 |
| size | 2,682,802 |
Command-line interface for OxiRS semantic web operations
Status: Production Release (v0.1.0) - Released January 7, 2026
⚡ Production-Ready: APIs are stable, tested, and fully documented; ready for production use.
oxirs is the unified command-line tool for the OxiRS ecosystem, providing comprehensive functionality for RDF data management, SPARQL operations, server administration, and development workflows. It's designed to be the Swiss Army knife for semantic web developers and data engineers working with knowledge graphs and semantic data.
# Install the latest release
cargo install oxirs --version 0.1.0
# Or install with all optional features
cargo install oxirs --version 0.1.0 --features all-features
git clone https://github.com/cool-japan/oxirs
cd oxirs/tools/oxirs
cargo install --path .
# Or with all features
cargo install --path . --features all-features
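Either way, the installation can be verified with the usual help flags (a quick sanity check; exact output varies by version):
# Confirm the binary is on PATH and working
oxirs --version
oxirs --help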
Generate shell completion scripts for your shell:
# Bash
oxirs --completion bash > ~/.local/share/bash-completion/completions/oxirs
# Zsh
oxirs --completion zsh > ~/.zfunc/_oxirs
# Fish
oxirs --completion fish > ~/.config/fish/completions/oxirs.fish
# PowerShell
oxirs --completion powershell > oxirs.ps1
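For Zsh, the generated file is only picked up once ~/.zfunc is on the function path and completions are initialized; this is standard Zsh setup rather than anything oxirs-specific:
# Add to ~/.zshrc (standard Zsh completion bootstrap)
fpath+=~/.zfunc
autoload -Uz compinit && compinit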
Dataset names must follow these rules:
- Only alphanumeric characters, underscores (_), and hyphens (-)
- No path separators and no file extensions (such as .oxirs)
✅ Valid: mydata, my_dataset, test-data-2024
❌ Invalid: dataset.oxirs, my/data, data.ttl
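When scripting dataset creation, those rules can be checked up front; a minimal sketch whose regex simply mirrors the rules above:
# Reject names with dots, slashes, or other disallowed characters before calling init
name="my_dataset"
if [[ "$name" =~ ^[A-Za-z0-9_-]+$ ]]; then
  oxirs init "$name"
else
  echo "invalid dataset name: $name" >&2
fi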
# Initialize a new dataset
oxirs init mydata
# Import RDF data into the dataset
oxirs import mydata data.ttl --format turtle
# Query the data
oxirs query mydata "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
# Start a SPARQL server
oxirs serve mydata --port 3030
# Export to different format
oxirs export mydata output.jsonld --format json-ld
# Start interactive REPL
oxirs interactive --dataset mydata
oxirs> SELECT ?person ?name WHERE { ?person foaf:name ?name } LIMIT 5
┌──────────────────────────┬─────────┐
│ person                   │ name    │
├──────────────────────────┼─────────┤
│ http://example.org/alice │ "Alice" │
│ http://example.org/bob   │ "Bob"   │
└──────────────────────────┴─────────┘
oxirs> .schema foaf:Person
Class: foaf:Person
Properties:
- foaf:name (string, required)
- foaf:age (integer, optional)
- foaf:mbox (string, optional)
oxirs> .exit
# Initialize dataset first
oxirs init mydata
# Import single file (dataset name must be alphanumeric, _, - only)
oxirs import mydata data.ttl --format turtle
# Import with named graph
oxirs import mydata data.ttl --format turtle --graph http://example.org/graph1
# Import N-Triples
oxirs import mydata data.nt --format ntriples
# Import RDF/XML
oxirs import mydata data.rdf --format rdfxml
# Import JSON-LD
oxirs import mydata data.jsonld --format jsonld
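For reference, a minimal Turtle file that could serve as data.ttl in the commands above (illustrative content only, consistent with the REPL example earlier):
# data.ttl - tiny illustrative dataset
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" ; foaf:age 30 .
<http://example.org/bob>   foaf:name "Bob" .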
# Export entire dataset
oxirs export mydata export.ttl --format turtle
# Export specific graph
oxirs export mydata output.jsonld --format json-ld --graph http://example.org/graph1
# Export to N-Triples
oxirs export mydata output.nt --format ntriples
# Export to RDF/XML
oxirs export mydata output.rdf --format rdfxml
# Validate RDF syntax
oxirs rdfparse data.ttl --format turtle
# SHACL validation
oxirs shacl --dataset mydata --shapes shapes.ttl --format text
# ShEx validation
oxirs shex --dataset mydata --schema schema.shex --format text
# Run SPARQL query
oxirs query mydata "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
# Run query from file
oxirs query mydata query.sparql --file
# Output formats: table, json, csv, tsv
oxirs query mydata query.sparql --file --output json
# Advanced query with arq tool
oxirs arq --dataset mydata --query "SELECT * WHERE { ?s ?p ?o }" --results table
# Parse and validate SPARQL query
oxirs qparse query.sparql --print-ast
# Show query algebra
oxirs qparse query.sparql --print-algebra
# Parse SPARQL update
oxirs uparse update.sparql --print-ast
# Query optimization analysis (PostgreSQL EXPLAIN-style)
oxirs explain mydata "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
oxirs explain mydata query.sparql --file --mode analyze
oxirs explain mydata query.sparql --file --mode full
# Shows: query structure, complexity score, optimization hints
# List all available query templates
oxirs template list
# Filter by category
oxirs template list --category basic
oxirs template list --category aggregation
# Show template details
oxirs template show select-by-type
# Render query from template with parameters
oxirs template render select-by-type \
--param type_iri=http://xmlns.com/foaf/0.1/Person \
--param limit=50
# Available templates:
# Basic: select-all, select-by-type, select-with-filter, ask-exists
# Advanced: construct-graph
# Aggregation: count-instances, group-by-count
# PropertyPaths: transitive-closure
# Federation: federated-query
# Analytics: statistics-summary
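The render command emits a plain SPARQL query; for the select-by-type example above, the output would look roughly like this (illustrative shape only, the actual template text may differ):
# Assumed shape of the rendered select-by-type query
SELECT ?instance
WHERE {
  ?instance a <http://xmlns.com/foaf/0.1/Person> .
}
LIMIT 50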
# List recent queries (automatically tracked)
oxirs history list
oxirs history list --limit 20 --dataset mydata
# Show full query details
oxirs history show 1
# Re-execute a previous query
oxirs history replay 5
oxirs history replay 5 --output json
# Search query history
oxirs history search "SELECT"
oxirs history search "foaf:Person"
# View history statistics
oxirs history stats
# Clear history
oxirs history clear
# History tracks: dataset, query text, execution time,
# result count, success/failure, timestamps
# Stored in: ~/.local/share/oxirs/query_history.json
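Because the history lives in a plain JSON file at the path noted above, it can be backed up or inspected with standard tools; a small sketch (the file's internal structure is not documented here, so treat it as read-only):
# Back up query history before clearing it
cp ~/.local/share/oxirs/query_history.json ~/query_history.backup.json
# Pretty-print the stored entries for a quick look (jq is optional)
jq '.' ~/.local/share/oxirs/query_history.json | head -50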
# Start SPARQL server with configuration file
oxirs serve mydata/oxirs.toml --port 3030
# With GraphQL support enabled
oxirs serve mydata/oxirs.toml --port 3030 --graphql
# Specify host and port
oxirs serve mydata/oxirs.toml --host 0.0.0.0 --port 8080
# Check server status
oxirs admin status --server http://localhost:3030
# Upload data to running server
oxirs admin upload --server http://localhost:3030 --file new-data.ttl
# Backup dataset
oxirs admin backup --server http://localhost:3030 --output backup.tar.gz
# View server metrics
oxirs admin metrics --server http://localhost:3030 --format prometheus
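A running server is expected to speak the standard SPARQL Protocol over HTTP, so any HTTP client can query it; for example with curl (the /mydata/query endpoint path is an assumption here, check the server's startup output for the actual URL):
# Query the server over HTTP; adjust the endpoint path to match your deployment
curl -G "http://localhost:3030/mydata/query" \
  --data-urlencode "query=SELECT * WHERE { ?s ?p ?o } LIMIT 10" \
  -H "Accept: application/sparql-results+json"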
# Generate schema from data
oxirs schema generate --dataset mydata.oxirs --output schema.rdfs
# Validate against schema
oxirs schema validate --dataset mydata.oxirs --schema schema.rdfs
# Compare schemas
oxirs schema diff --schema1 old-schema.rdfs --schema2 new-schema.rdfs
# Convert schema formats
oxirs schema convert --input schema.owl --output schema.shacl --format shacl
# Optimize dataset
oxirs optimize --dataset mydata.oxirs --output optimized.oxirs
# Analyze dataset statistics
oxirs analyze --dataset mydata.oxirs --output stats.json
# Generate indices
oxirs index --dataset mydata.oxirs --properties foaf:name,dc:title
# Compress dataset
oxirs compress --dataset mydata.oxirs --algorithm lz4 --output compressed.oxirs
# Generate test dataset
oxirs benchmark generate --template university --size 10000 --output test-data.ttl
# Generate synthetic data
oxirs benchmark synthetic --schema schema.rdfs --triples 1000000 --output synthetic.nq
# Generate benchmark queries
oxirs benchmark queries --dataset mydata.oxirs --count 100 --complexity mixed --output queries/
# Run benchmarks
oxirs benchmark run --dataset mydata.oxirs --queries queries/ --report benchmark-report.html
# Compare performance
oxirs benchmark compare --baseline baseline-results.json --current current-results.json
# Stress testing
oxirs benchmark stress --endpoint http://localhost:3030 --duration 60s --concurrent 10
# Convert between RDF formats
oxirs convert --input data.rdf --output data.ttl --from rdfxml --to turtle
# Batch conversion
oxirs convert --directory rdf-files/ --from rdfxml --to jsonld --output converted/
# Streaming conversion for large files
oxirs convert --input large-file.nt --output large-file.ttl --stream --chunk-size 10000
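Because N-Triples is line-oriented (one statement per line), very large inputs can also be split with standard tools before import; a sketch with placeholder chunk size and dataset name:
# Split a huge N-Triples file into 1M-line chunks and import them sequentially
split -l 1000000 large-file.nt chunk_
for f in chunk_*; do
  oxirs import mydata "$f" --format ntriples
done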
# Migrate from older OxiRS version
oxirs migrate --input old-dataset.oxirs --output new-dataset.oxirs --from-version 0.1.0
# Migrate from other triple stores
oxirs migrate --input virtuoso-dump.nq --format nquads --output migrated.oxirs --optimize
# Migrate with transformation
oxirs migrate --input data.ttl --transform-rules rules.sparql --output transformed.oxirs
# Initialize configuration
oxirs config init --template server --output oxirs.yaml
# Validate configuration
oxirs config validate --file oxirs.yaml
# Show current configuration
oxirs config show --format yaml
# Set configuration values
oxirs config set server.port 3030
oxirs config set auth.enabled true
# Setup development environment
oxirs init --project my-semantic-app --template basic
# Install dependencies
oxirs deps install --file requirements.yaml
# Setup CI/CD templates
oxirs init --template ci-cd --output .github/workflows/
When you run oxirs init mydata, it creates a configuration file at mydata/oxirs.toml:
# OxiRS Configuration
# Generated by oxirs init
[general]
default_format = "turtle"
[server]
port = 3030
host = "localhost"
enable_cors = true
enable_graphql = false
[datasets.mydata]
name = "mydata"
location = "."
dataset_type = "tdb2"
read_only = false
enable_reasoning = false
enable_validation = false
enable_text_search = false
enable_vector_search = false
Configuration keys:
- general.default_format: Default RDF serialization format
- server.port: HTTP server port
- server.host: Server bind address
- server.enable_graphql: Enable GraphQL endpoint
- datasets.{name}.location: Storage path (. means the dataset directory itself)
- datasets.{name}.dataset_type: Storage backend (tdb2 or memory)
- datasets.{name}.read_only: Prevent modifications
- enable_reasoning, enable_validation, enable_text_search, enable_vector_search: Feature flags for optional capabilities
# Use specific profile
oxirs --profile production query --endpoint production --query query.sparql
# Override global settings
oxirs --verbose query --dataset mydata.oxirs --format json --query query.sparql
# Use configuration file
oxirs --config custom-config.yaml serve --dataset mydata.oxirs
# Bash completion
eval "$(oxirs completion bash)"
# Pipeline operations
oxirs import --file data.ttl --dataset temp.oxirs | \
oxirs validate --shapes shapes.ttl | \
oxirs optimize --output optimized.oxirs
# Batch processing
find . -name "*.ttl" -exec oxirs import --file {} --dataset combined.oxirs \;
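The same batch import can be written as a shell loop that stops on the first failure; a sketch using the positional argument style from the Quick Start section (adjust flags to your installed version):
# Import every Turtle file in the current directory, aborting on the first error
for f in *.ttl; do
  oxirs import combined "$f" --format turtle || { echo "import failed: $f" >&2; break; }
done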
# Install plugin
oxirs plugin install oxirs-geospatial
# List plugins
oxirs plugin list
# Run plugin command
oxirs geo index --dataset spatial-data.oxirs --property geo:hasGeometry
# Integration with Git
oxirs export --dataset mydata.oxirs --format turtle | git diff --no-index data.ttl -
# Integration with Apache Jena
oxirs export --dataset mydata.oxirs --format ntriples | riot --formatted=turtle
# Integration with RDFLib
oxirs query --dataset mydata.oxirs --query query.sparql --format json | python process.py
#!/bin/bash
# data-pipeline.sh
# Download and import multiple datasets
oxirs import --url https://dbpedia.org/dataset.ttl --dataset dbpedia.oxirs
oxirs import --url https://wikidata.org/dataset.ttl --dataset wikidata.oxirs
# Merge datasets
oxirs merge --datasets dbpedia.oxirs,wikidata.oxirs --output merged.oxirs
# Validate merged data
oxirs validate --dataset merged.oxirs --shapes validation-shapes.ttl
# Generate optimized indices
oxirs index --dataset merged.oxirs --properties rdfs:label,skos:prefLabel
# Start production server
oxirs serve --dataset merged.oxirs --config production.yaml --daemon
# Create new project
oxirs init --project semantic-app --template web-app
cd semantic-app/
# Import development data
oxirs import --file dev-data.ttl --dataset dev.oxirs
# Start development server with hot reload
oxirs serve --dataset dev.oxirs --dev --reload
# Run tests
oxirs test --dataset dev.oxirs --test-suite tests/
# Deploy to staging
oxirs deploy --dataset dev.oxirs --target staging --config deploy.yaml
| Operation | Dataset Size | Time | Memory |
|---|---|---|---|
| Import (Turtle) | 1M triples | 15s | 120MB |
| Export (JSON-LD) | 1M triples | 12s | 85MB |
| Query (simple) | 10M triples | 50ms | 45MB |
| Query (complex) | 10M triples | 300ms | 180MB |
| Server startup | 10M triples | 2s | 200MB |
# Use streaming for large files
oxirs import --file large-dataset.nt --stream --chunk-size 100000
# Enable parallel processing
oxirs export --dataset large.oxirs --parallel --workers 8
# Use binary format for faster loading
oxirs convert --input data.ttl --output data.oxirs --optimize
# Compress datasets
oxirs compress --dataset data.oxirs --algorithm zstd --level 3
# Debug mode
oxirs --debug query --dataset mydata.oxirs --query query.sparql
# Verbose output
oxirs --verbose import --file problematic-data.ttl
# Check dataset integrity
oxirs check --dataset mydata.oxirs --repair
# Memory profiling
oxirs --profile-memory query --dataset large.oxirs --query complex-query.sparql
# Recover corrupted dataset
oxirs recover --dataset corrupted.oxirs --output recovered.oxirs
# Validate and repair
oxirs validate --dataset mydata.oxirs --repair --backup
# Restore from backup
oxirs restore --backup backup.tar.gz --output restored.oxirs
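For unattended backups, the admin backup command from the server section can be scheduled with cron; a crontab sketch (schedule, paths, and server URL are placeholders, and % must be escaped inside crontab entries):
# Run a backup every night at 02:00
0 2 * * * oxirs admin backup --server http://localhost:3030 --output /backups/oxirs-$(date +\%F).tar.gz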
Related tools in the OxiRS ecosystem:
- oxirs-fuseki: SPARQL HTTP server
- oxirs-chat: AI-powered chat interface
- oxirs-workbench: Visual RDF editor
Licensed under either of:
- Apache License, Version 2.0
- MIT license
at your option.
# Quick reference for common tasks
# Data Operations
oxirs init mydata # Create new dataset
oxirs import mydata file.ttl -f turtle # Import data
oxirs export mydata output.nq -f nquads # Export data
oxirs query mydata "SELECT * WHERE {?s ?p ?o}" # Query data
# Server Operations
oxirs serve mydata --port 3030 # Start server
oxirs serve mydata --graphql # With GraphQL
# Format Conversion
oxirs migrate --source data.ttl --target data.nt --from turtle --to ntriples
# Validation
oxirs rdfparse file.ttl -f turtle # Validate syntax
# Analysis
oxirs explain mydata query.sparql --file # Query analysis
oxirs tdbstats mydata --detailed # Dataset statistics
For Large Datasets (>1M triples):
oxirs batch import --dataset mydata --files *.nt --parallel 8
oxirs tdbloader mydata *.nt --progress --stats
oxirs export mydata output.nq --format nquads | gzip > output.nq.gz
For Query Performance:
oxirs explain mydata query.sparql --file --mode full
| Issue | Solution |
|---|---|
| "Dataset not found" | Run oxirs init <name> first to create the dataset |
| "Format not recognized" | Specify format explicitly with --format flag |
| "Permission denied" | Check directory permissions with chmod 755 <dir> |
| "Port already in use" | Use different port with --port <num> |
| "Out of memory" | Use streaming operations or increase batch size |
| "Invalid SPARQL syntax" | Use oxirs qparse to validate query syntax |
Debug Mode:
# Enable verbose logging
oxirs --verbose query mydata "SELECT * WHERE {?s ?p ?o}"
# Debug specific modules
RUST_LOG=oxirs_core=debug,oxirs_arq=trace oxirs query mydata query.sparql
OxiRS CLI v0.1.0 - Production-ready command-line interface for semantic web operations