| Crates.io | portalis-nemo-bridge |
| lib.rs | portalis-nemo-bridge |
| version | 0.1.0 |
| created_at | 2025-10-07 00:56:41.783053+00 |
| updated_at | 2025-10-07 00:56:41.783053+00 |
| description | NVIDIA NeMo LLM integration for AI-powered translation |
| homepage | https://portalis.dev |
| repository | https://github.com/portalis/portalis |
| max_upload_size | |
| id | 1871049 |
| size | 91,751 |
Enterprise-Grade Code Translation Powered by NVIDIA AI Infrastructure
PORTALIS is a production-ready platform that translates Python codebases to Rust and compiles them to WebAssembly (WASM), with NVIDIA GPU acceleration integrated throughout the entire pipeline. From code analysis to translation to deployment, every stage leverages NVIDIA's AI and compute infrastructure for maximum performance.
β Complete Python β Rust β WASM Pipeline
β NVIDIA Integration Throughout
β Enterprise Features
β Production Quality
PORTALIS uses a multi-agent architecture where each stage is accelerated by NVIDIA technologies:
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CLI / Web UI / API β
β (Enterprise Auth, RBAC, SSO) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β ORCHESTRATION PIPELINE β
β (Ray on DGX Cloud for distributed processing) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β AGENT SWARM LAYER β
β ββββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ β
β β Ingest β Analysis β Transpileβ Build β Package β β
β β β (CUDA) β (NeMo) β (Cargo) β (NIM) β β
β ββββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β NVIDIA ACCELERATION LAYER β
β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β NeMo LLM Services (Triton) β CUDA Kernels (cuPy) β β
β β Embedding Generation β Parallel AST Processing β β
β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β DEPLOYMENT & VALIDATION β
β Triton Endpoints β NIM Containers β Omniverse Integration β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
| Component | NVIDIA Technology | Purpose |
|---|---|---|
| Code Analysis | CUDA kernels | Parallel AST traversal for 10,000+ file codebases |
| Translation | NeMo Framework | AI-powered PythonβRust code generation |
| Embeddings | CUDA + Triton | Semantic code similarity and pattern matching |
| Inference | Triton Server | Production model serving with auto-scaling |
| Deployment | NIM | Container packaging for NVIDIA infrastructure |
| Orchestration | DGX Cloud + Ray | Multi-GPU distributed workload management |
| Validation | Omniverse | Visual testing in simulation environments |
| Monitoring | DCGM + Prometheus | GPU utilization and performance metrics |
After publication (coming soon):
# Install from crates.io
cargo install portalis
# Verify installation
portalis --version
Current (development):
# Clone and build from source
git clone https://github.com/portalis/portalis.git
cd portalis
cargo build --release --bin portalis
# Run CLI
./target/release/portalis --version
Zero-friction conversion - Navigate and convert:
# Navigate to your Python project
cd my-python-project/
# Convert to WASM (defaults to current directory)
portalis convert
Or convert specific files/packages:
# Convert a single script
portalis convert calculator.py
# Convert a Python library (creates Rust crate + WASM)
portalis convert ./mylib/
# Convert a directory of scripts
portalis convert ./src/
Auto-detection handles:
__init__.py) β Rust crate + WASM librarySee QUICK_START.md for detailed examples and USE_CASES.md for real-world scenarios.
# Enable GPU acceleration (requires NVIDIA GPU)
export PORTALIS_ENABLE_CUDA=1
export PORTALIS_TRITON_URL=localhost:8000
# Use NeMo for AI-powered translation
export PORTALIS_TRANSLATION_MODE=nemo
export PORTALIS_NEMO_MODEL=portalis-translation-v1
# Run distributed on DGX Cloud
export PORTALIS_DGX_ENDPOINT=https://api.ngc.nvidia.com
export PORTALIS_RAY_ADDRESS=ray://dgx-cluster:10001
portalis translate --input large_project/ --output dist/ --enable-gpu
PORTALIS supports 30+ comprehensive Python feature sets:
| Category | Features | Status |
|---|---|---|
| Basics | Variables, operators, control flow, functions | β Complete |
| Data Structures | Lists, dicts, sets, tuples, comprehensions | β Complete |
| OOP | Classes, inheritance, properties, decorators | β Complete |
| Advanced | Generators, context managers, async/await | β Complete |
| Functional | Lambda, map/filter/reduce, closures | β Complete |
| Modules | Imports, packages, stdlib mapping | β Complete |
| Error Handling | Try/except, custom exceptions, assertions | β Complete |
| Type System | Type hints, generics, protocols | β Complete |
| Meta | Metaclasses, descriptors, __slots__ |
β Complete |
| Stdlib | 50+ stdlib modules mapped to Rust | β Complete |
See PYTHON_LANGUAGE_FEATURES.md for detailed feature list.
# Comprehensive codebase assessment
portalis assess --project ./enterprise-app \
--report report.html \
--format html \
--verbose
# Generates:
# - Compatibility score (0-100)
# - Feature usage analysis
# - Dependency graph
# - Risk assessment
# - Estimated effort
# Bottom-up: Start with leaf modules
portalis plan --strategy bottom-up
# Top-down: Start with entry points
portalis plan --strategy top-down
# Critical-path: Migrate performance bottlenecks first
portalis plan --strategy critical-path
# Incremental: Gradual hybrid Python/Rust deployment
portalis plan --strategy incremental
// Configure tenant quotas
{
"tenant_id": "acme-corp",
"quotas": {
"max_gpus": 16,
"max_requests_per_hour": 10000,
"max_cost_per_day": 5000.00
},
"roles": ["translator", "assessor", "admin"]
}
# Traditional approach: 10,000 files = 30 minutes
# PORTALIS + CUDA: 10,000 files = 2 minutes (15x faster)
# Parallel AST parsing across GPU cores
cuda_engine.parallel_parse(python_files)
# GPU-accelerated embedding generation
embeddings = triton_client.infer(
model="code_embeddings",
inputs={"source_code": code_batches}
)
# NeMo-based translation with context awareness
translation = nemo_client.translate(
source_code=python_code,
context={
"stdlib_usage": ["pathlib", "json", "asyncio"],
"frameworks": ["fastapi", "pydantic"],
"style": "idiomatic_rust"
}
)
# Confidence scoring and alternative suggestions
if translation.confidence < 0.8:
alternatives = nemo_client.generate_alternatives(
python_code, num_candidates=3
)
# Triton model configuration
name: "portalis_translator"
platform: "python"
max_batch_size: 64
instance_group: [
{ count: 4, kind: KIND_GPU } # 4 A100 GPUs
]
dynamic_batching: {
preferred_batch_size: [16, 32, 64]
max_queue_delay_microseconds: 100
}
# Load WASM into Omniverse simulation
omni_bridge.load_wasm_module(
wasm_path="translated_app.wasm",
scene="validation_scene.usd"
)
# Run side-by-side comparison
python_results = run_python_simulation()
wasm_results = omni_bridge.execute_wasm_simulation()
# Visual validation
omni_bridge.compare_outputs(python_results, wasm_results)
| Codebase Size | CPU-Only | GPU (CUDA) | GPU (NeMo) | Speedup |
|---|---|---|---|---|
| Small (100 LOC) | 2s | 1s | 0.5s | 4x |
| Medium (1K LOC) | 45s | 8s | 3s | 15x |
| Large (10K LOC) | 30m | 90s | 45s | 40x |
| XL (100K LOC) | 8h | 15m | 8m | 60x |
DGX A100 (8x A100 80GB)
ββ NeMo Translation: 4 GPUs @ 75% utilization
ββ CUDA Kernels: 2 GPUs @ 60% utilization
ββ Triton Serving: 2 GPUs @ 85% utilization
ββ Throughput: 500 functions/minute
portalis/
βββ agents/ # Translation agents
β βββ transpiler/ # Core Rust transpiler (8K+ LOC)
β β βββ python_ast.rs # Python AST handling
β β βββ python_to_rust.rs # Translation logic
β β βββ stdlib_mapper.rs # Stdlib conversions
β β βββ wasm.rs # WASM bindings
β β βββ tests/ # 30+ feature test suites
β βββ cuda-bridge/ # GPU acceleration
β βββ nemo-bridge/ # NeMo integration
β βββ ...
β
βββ cli/ # Command-line interface
β βββ src/
β βββ commands/ # Assessment, planning commands
β β βββ assess.rs
β β βββ plan.rs
β βββ main.rs
β
βββ core/ # Core platform
β βββ src/
β βββ assessment/ # Codebase analysis
β βββ rbac/ # Access control
β βββ logging.rs # Structured logging
β βββ metrics.rs # Prometheus metrics
β βββ telemetry.rs # OpenTelemetry
β βββ quota.rs # Resource quotas
β βββ sso.rs # SSO integration
β
βββ nemo-integration/ # NeMo LLM services
β βββ config/
β βββ src/
β βββ tests/
β
βββ cuda-acceleration/ # CUDA kernels
β βββ kernels/
β βββ bindings/
β
βββ deployment/
β βββ triton/ # Triton Inference Server
β βββ models/
β βββ configs/
β βββ k8s/
β
βββ nim-microservices/ # NIM packaging
β βββ api/
β βββ k8s/
β βββ Dockerfile
β
βββ dgx-cloud/ # DGX Cloud integration
β βββ config/
β β βββ resource_allocation.yaml
β β βββ ray_cluster.yaml
β βββ monitoring/
β
βββ omniverse-integration/ # Omniverse runtime
β βββ extension/
β βββ demonstrations/
β βββ deployment/
β
βββ monitoring/ # Observability stack
β βββ prometheus/
β βββ grafana/
β βββ alertmanager/
β
βββ examples/ # Example projects
β βββ beta-projects/
β βββ wasm-demo/
β βββ nodejs-example/
β
βββ docs/ # Documentation
β βββ architecture.md
β βββ getting-started.md
β βββ api-reference.md
β
βββ plans/ # Design documents
βββ architecture.md
βββ specification.md
βββ nvidia-integration-architecture.md
PORTALIS follows London School TDD with comprehensive test coverage:
E2E Tests (Omniverse, Real GPU)
/ \
Integration Tests (Mocked GPU)
/ \
Unit Tests (30+ Feature Suites)
# Unit tests (fast, no GPU required)
cargo test --lib
# Integration tests (requires dependencies)
cargo test --test '*'
# With NVIDIA GPU
PORTALIS_ENABLE_CUDA=1 cargo test --features cuda
# E2E tests (Docker + GPU required)
docker-compose -f docker-compose.test.yaml up
pytest tests/e2e/
We welcome contributions! PORTALIS is a production platform with clear contribution areas:
# Fork and clone
git clone https://github.com/your-fork/portalis.git
# Create feature branch
git checkout -b feature/my-enhancement
# Make changes, write tests
cargo test
# Commit and push
git commit -m "Add support for Python walrus operator"
git push origin feature/my-enhancement
# Open pull request
See CONTRIBUTING.md for detailed guidelines.
[Add your license here - e.g., Apache 2.0, MIT]
PORTALIS leverages cutting-edge NVIDIA technologies:
Built with Rust π¦, WebAssembly πΈοΈ, and NVIDIA AI π
PORTALIS - Translating the world's Python code to high-performance WASM, powered by NVIDIA AI infrastructure.