| Crates.io | qtransformers-core |
| lib.rs | qtransformers-core |
| version | 0.1.0 |
| created_at | 2025-12-09 21:31:44.557307+00 |
| updated_at | 2025-12-09 21:31:44.557307+00 |
| description | Quantum-inspired attention mechanisms for transformer models. |
| homepage | https://github.com/kumarlokesh/q-transformers |
| repository | https://github.com/kumarlokesh/q-transformers |
| max_upload_size | |
| id | 1976784 |
| size | 26,014 |
v0.1.0 - Library implementing quantum-inspired attention mechanisms for transformer models.
Benchmarks and evaluation scripts are provided under the benchmarks/ directory. Results depend on hardware, backend configuration, and random seeds; reproduce experiments using the provided scripts rather than relying on summarized claims in the README.
The repository includes a Makefile with common developer tasks. Use the Makefile targets from the project root to build, run checks, and execute tests. The targets orchestrate the toolchain (Python, Rust) and ensure consistent environments across machines.
Common tasks:
make build
make shell
make test-python
make test-rust
make test
If you prefer a local Python virtual environment instead of the Makefile workflow, a local install is supported, though building some dependencies may require system toolchains:
git clone https://github.com/kumarlokesh/q-transformers
cd q-transformers
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip setuptools wheel
pip install -e "python[dev]"  # quotes keep the shell from globbing the [dev] extras
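After installing, a quick import check confirms the package is available (a minimal sketch, assuming QuantumMultiheadAttention is exported at the package top level, as in the usage example below):
from qtransformers import QuantumMultiheadAttention  # should succeed after the editable install
print(QuantumMultiheadAttention.__name__)
Basic usage: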
import torch
from qtransformers import QuantumMultiheadAttention
# Drop-in replacement for nn.MultiheadAttention
attn = QuantumMultiheadAttention(
    embed_dim=512,
    num_heads=8,
    quantum_backend="stratified",  # best-performing backend in internal benchmarks (see note below)
    num_samples=32
)
# Use exactly like PyTorch MultiheadAttention
query = torch.randn(10, 32, 512) # seq_len, batch, embed_dim
key = torch.randn(10, 32, 512)
value = torch.randn(10, 32, 512)
output, attn_weights = attn(query, key, value)
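If the module mirrors nn.MultiheadAttention's forward signature, the standard masking arguments should carry over unchanged; the snippet below is a sketch under that assumption (attn_mask support is not confirmed by this README):
# Causal mask in the nn.MultiheadAttention convention: True marks positions that may not be attended to
seq_len = query.size(0)
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
output, attn_weights = attn(query, key, value, attn_mask=causal_mask)
Benchmark and verification utilities are also exposed: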
from qtransformers import GLUEBenchmarkSuite, QuantumSupremacyVerifier
# Run comprehensive NLP benchmarks
benchmark_suite = GLUEBenchmarkSuite()
results = benchmark_suite.run_full_evaluation(
    quantum_model=quantum_model,
    classical_model=classical_model
)
# Verify quantum supremacy
verifier = QuantumSupremacyVerifier()
supremacy_results = verifier.verify_quantum_advantage(
    quantum_results=results["quantum"],
    classical_results=results["classical"]
)
Note: Performance claims are based on internal benchmarks. Results may vary by hardware, configuration, and task. See benchmarks/ for reproduction scripts.
This project is licensed under the MIT License - see the LICENSE file for details.