Multi-Target High-Performance Compute Library
Trueno (Spanish: "thunder") provides unified, high-performance compute primitives across three execution targets:
- CPU SIMD - x86 (SSE2/AVX/AVX2/AVX-512), ARM (NEON), WASM (SIMD128)
- GPU - Vulkan/Metal/DX12/WebGPU via wgpu
- WebAssembly - Portable SIMD128 for browser/edge deployment
use trueno::{Vector, Matrix};
// Vector operations
let a = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
let b = Vector::from_slice(&[5.0, 6.0, 7.0, 8.0]);
// Auto-selects best backend (AVX2/GPU/WASM)
let result = a.add(&b).unwrap();
assert_eq!(result.as_slice(), &[6.0, 8.0, 10.0, 12.0]);
let dot_product = a.dot(&b).unwrap(); // 70.0
let sum = a.sum().unwrap(); // 10.0
let max = a.max().unwrap(); // 4.0
// Matrix operations
let m1 = Matrix::from_vec(2, 2, vec![1.0, 2.0, 3.0, 4.0]).unwrap();
let m2 = Matrix::identity(2);
let product = m1.matmul(&m2).unwrap(); // Matrix multiplication
let transposed = m1.transpose(); // Matrix transpose
// 2D Convolution (image processing, CNNs)
let image = Matrix::from_vec(5, 5, vec![/* 25 pixels */]).unwrap();
let kernel = Matrix::from_vec(3, 3, vec![1.0/9.0; 9]).unwrap(); // 3x3 averaging filter
let filtered = image.convolve2d(&kernel).unwrap(); // Auto-selects GPU for large images
Dot Product (1K elements):
- Trueno AVX-512: 11.9x vs scalar | 1.6x vs NumPy | 2.8x vs PyTorch
Matrix Multiply (500×500):
- Trueno GPU: 2-10x faster than scalar
Replicate:
curl -LsSf https://astral.sh/uv/install.sh | sh # Install UV (one-time)
make bench-comprehensive # 12-17 minutes
See benchmarks/README.md for methodology. Feedback welcome via issues.
Recent comprehensive benchmarking (docs/performance-analysis.md) revealed:
- ✅ GPU beneficial for: Matrix multiplication ONLY (2-10x speedup for 500×500+ matrices)
- ❌ GPU detrimental for: ALL element-wise operations (2-65,000x SLOWER than scalar)
- 🎯 Recommendation: GPU backend disabled for vector operations in v0.2.1+ (matmul only)
Root cause: 14-55 ms of GPU overhead (buffer allocation + PCIe transfer) dominates element-wise operations, which complete in 0.01-10 ms on CPU. GPU speedup requires O(N³) compute complexity to amortize the transfer cost.
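As a rough sketch of this amortization argument (back-of-the-envelope numbers chosen for illustration only, not Trueno's measured values or dispatch logic):

```rust
/// Rough break-even estimate: GPU wins only when the compute time saved
/// exceeds the fixed transfer/launch overhead. All constants below are
/// illustrative assumptions, not measured Trueno values.
fn gpu_worthwhile(flops: f64, cpu_gflops: f64, gpu_gflops: f64, overhead_ms: f64) -> bool {
    let cpu_ms = flops / (cpu_gflops * 1e6); // CPU time in ms
    let gpu_ms = flops / (gpu_gflops * 1e6) + overhead_ms; // GPU compute + fixed overhead
    gpu_ms < cpu_ms
}

fn main() {
    // Element-wise add on 1M floats: ~1e6 FLOPs -> overhead dominates.
    println!("{}", gpu_worthwhile(1e6, 50.0, 5000.0, 20.0)); // false
    // 1024x1024 matmul: ~2 * 1024^3 ≈ 2.1e9 FLOPs -> overhead amortized.
    println!("{}", gpu_worthwhile(2.1e9, 50.0, 5000.0, 20.0)); // true
}
```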
See performance analysis for complete empirical data.
Trueno delivers exceptional performance through multi-level SIMD optimization:
| Operation | Speedup | Use Case |
|---|---|---|
| Dot Product | 340% faster | Machine learning, signal processing |
| Sum Reduction | 315% faster | Statistics, aggregations |
| Max Finding | 348% faster | Data analysis, optimization |
| Element-wise Add | 3-10% faster | Memory-bound (limited SIMD benefit) |
| Element-wise Mul | 5-6% faster | Memory-bound (limited SIMD benefit) |
| Operation | Speedup | Notes |
|---|---|---|
| Dot Product | 182% faster | FMA (fused multiply-add) acceleration |
| Element-wise Add | 15% faster | Memory bandwidth limited |
| Element-wise Mul | 12% faster | Memory bandwidth limited |
Key Insights:
- SIMD excels at compute-intensive operations (dot product, reductions)
- Element-wise operations are memory-bound, limiting SIMD gains
- AVX2's FMA provides significant acceleration for dot products
| Operation | Size | Time | Performance | vs NumPy |
|---|---|---|---|---|
| Matrix Multiply | 64×64 | 8.9 µs | Cache blocking | - |
| Matrix Multiply | 128×128 | 72 µs | 6.4× faster than NumPy | ✅ |
| Matrix Multiply | 256×256 | 538 µs | 6% faster than NumPy | ✅ |
| Matrix Multiply | 512×512 | 5.3 ms | 3-level blocking | 2.9× faster |
| Matrix Multiply | 1024×1024 | 47.4 ms | L3 optimization | 1.6× slower |
| Matrix Transpose | 256×256 | 69.1 µs | Cache-optimized | - |
| Matrix-Vector | 512×512 | 139.8 µs | SIMD dot products | - |
Advanced SIMD Optimization (Phase 2/3 - v0.6.0+):
- 4×1 AVX2 Micro-kernel: Fused Multiply-Add (FMA) instructions, register blocking
- 3-level Cache Hierarchy: L3 (256×256) → L2 (64×64) → micro-kernel for matrices ≥512×512
- 2-level Cache Blocking: L2 (64×64) → micro-kernel for 32×32 to 511×511 (blocking structure sketched after this list)
- Smart Thresholding: Matrices ≤32×32 use simple path (avoids blocking overhead)
- Zero-Allocation: No Vec allocations in hot path
- NumPy Parity: Matches/beats NumPy + OpenBLAS for 128×128 and 256×256
- Efficiency: 77% of theoretical AVX2 peak (48 GFLOPS @ 3.0 GHz)
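To make the blocking idea concrete, here is a minimal one-level cache-blocked matmul over row-major slices (an illustrative sketch only; Trueno's actual kernel adds a second blocking level and an FMA micro-kernel on top of this structure):

```rust
/// One-level cache-blocked matmul for square row-major f32 matrices.
/// `c` must be zero-initialized; `block` is the tile edge length.
fn matmul_blocked(a: &[f32], b: &[f32], c: &mut [f32], n: usize, block: usize) {
    for ii in (0..n).step_by(block) {
        for kk in (0..n).step_by(block) {
            for jj in (0..n).step_by(block) {
                // Work on a block × block tile that stays resident in cache.
                for i in ii..(ii + block).min(n) {
                    for k in kk..(kk + block).min(n) {
                        let aik = a[i * n + k];
                        for j in jj..(jj + block).min(n) {
                            c[i * n + j] += aik * b[k * n + j];
                        }
                    }
                }
            }
        }
    }
}
```

With `block` chosen so that three block×block tiles fit in L2, each element of A and B is reused many times from cache instead of being re-fetched from DRAM.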
| Operation | Input Size | Kernel | Time | Backend |
|---|---|---|---|---|
| Convolution | 32×32 | 3×3 | ~6.78 µs | Scalar |
| Convolution | 128×128 | 3×3 | ~1.2 ms | Scalar |
| Convolution | 256×256 | 3×3 | ~4.8 ms | Scalar/GPU threshold |
| Convolution | 512×512 | 3×3 | ~20 ms (scalar) | GPU (10-50x target) |
| Sobel Edge Detection | 512×512 | 3×3 | - | GPU-accelerated |
GPU Acceleration Strategy (OpComplexity::High):
- GPU Threshold: >10,000 output elements (e.g., 100×100 output)
- Example: 512×512 input with 3×3 kernel → 510×510 output = 260,100 elements
- Workgroups: 16×16 threads (256 threads per workgroup)
- Valid Padding: Output size = (input - kernel + 1) for each dimension
- Use Cases: Image processing, CNN inference, feature extraction
Automatic Backend Selection:
- Small images (<10K elements): Scalar baseline (~6.78 µs for 32×32)
- Large images (>10K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Edge Detection with Sobel Operator
use trueno::backends::gpu::GpuBackend;
// Sobel X kernel (vertical edge detection)
let sobel_x = vec![
-1.0, 0.0, 1.0,
-2.0, 0.0, 2.0,
-1.0, 0.0, 1.0,
];
// 512×512 grayscale image (flattened row-major)
let image: Vec<f32> = vec![...]; // 262,144 elements
// GPU convolution for large images (>10K output elements)
let mut gpu = GpuBackend::new();
let edges = gpu.convolve2d(&image, &sobel_x, 512, 512, 3, 3).unwrap();
// Output: 510×510 = 260,100 elements (GPU-accelerated)
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| ReLU | 10K | ~40 µs | - | Below threshold |
| ReLU | 100K | ~400 µs | ~40 µs | 10x target |
| ReLU | 1M | ~4 ms | ~80 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Simple element-wise max(0, x)
- Workgroups: 256 threads per workgroup (1D dispatch)
- Use Cases: Neural network inference, batch activation processing
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar/SIMD (iterator-based)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Neural Network Inference
use trueno::Vector;
// Process large activation batch (e.g., ResNet-50 layer)
let activations = Vector::from_slice(&vec![...]); // 1M neurons
let output = activations.relu().unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Leaky ReLU | 10K | ~42 µs | - | Below threshold |
| Leaky ReLU | 100K | ~420 µs | ~42 µs | 10x target |
| Leaky ReLU | 1M | ~4.2 ms | ~85 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise leaky_relu(x, α) = x if x > 0, else αx
- Workgroups: 256 threads per workgroup (1D dispatch)
- Parameters: Runtime negative_slope (α) via uniform buffer
- Use Cases: GANs, deep networks (prevents "dying ReLU" problem)
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar/SIMD (iterator-based)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: GAN Generator Network
use trueno::Vector;
// Leaky ReLU for GAN generator (prevents vanishing gradients)
let hidden = Vector::from_slice(&vec![...]); // 512K hidden units
let activated = hidden.leaky_relu(0.01).unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| ELU | 10K | ~55 µs | - | Below threshold |
| ELU | 100K | ~550 µs | ~55 µs | 10x target |
| ELU | 1M | ~5.5 ms | ~110 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise elu(x, α) = x if x > 0, else α(e^x - 1)
- Workgroups: 256 threads per workgroup (1D dispatch)
- Parameters: Runtime alpha (α) via uniform buffer
- Use Cases: Deep networks, smooth gradients, improved learning dynamics
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar/SIMD (iterator-based)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Deep Residual Network
use trueno::Vector;
// ELU for deep ResNet (smooth gradients prevent vanishing/exploding)
let residual = Vector::from_slice(&vec![...]); // 256K hidden units
let activated = residual.elu(1.0).unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Clip | 10K | ~45 µs | - | Below threshold |
| Clip | 100K | ~450 µs | ~45 µs | 10x target |
| Clip | 1M | ~4.5 ms | ~90 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise clamp(x, min_val, max_val)
- Workgroups: 256 threads per workgroup (1D dispatch)
- Parameters: Runtime min/max bounds via uniform buffer
- Use Cases: Gradient clipping, value bounding, range normalization
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar/SIMD (iterator-based)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Gradient Clipping
use trueno::Vector;
// Clip gradients for stable training
let gradients = Vector::from_slice(&vec![...]); // 500K parameters
let clipped = gradients.clip(-1.0, 1.0).unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Sigmoid | 10K | ~60 µs | - | Below threshold |
| Sigmoid | 100K | ~600 µs | ~60 µs | 10x target |
| Sigmoid | 1M | ~6 ms | ~120 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise σ(x) = 1 / (1 + e^(-x))
- Workgroups: 256 threads per workgroup (1D dispatch)
- Numerical Stability: Separate handling for positive/negative inputs (see the scalar sketch after this list)
- Use Cases: Binary classification, attention mechanisms, gating functions
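A minimal scalar sketch of the stability trick referenced in the list above (illustrative; not necessarily Trueno's exact implementation). Evaluating σ(x) through two branches keeps the exp argument non-positive, so it never overflows:

```rust
/// Numerically stable logistic sigmoid: never computes exp of a large
/// positive number, which would overflow to infinity for large |x|.
fn sigmoid_stable(x: f32) -> f32 {
    if x >= 0.0 {
        let e = (-x).exp(); // e ∈ (0, 1], never overflows
        1.0 / (1.0 + e)
    } else {
        let e = x.exp(); // e ∈ (0, 1) for negative x
        e / (1.0 + e)
    }
}
```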
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar (iterator-based with stability checks)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Neural Network Layer
use trueno::Vector;
// Sigmoid activation for binary classification
let logits = Vector::from_slice(&vec![...]); // 500K neurons
let activations = logits.sigmoid().unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Tanh | 10K | ~55 µs | - | Below threshold |
| Tanh | 100K | ~550 µs | ~55 µs | 10x target |
| Tanh | 1M | ~5.5 ms | ~110 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
- Workgroups: 256 threads per workgroup (1D dispatch)
- Numerical Stability: Saturation handling for |x| > 20
- Use Cases: LSTM, GRU, recurrent neural networks, traditional activation
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar (standard library tanh)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: LSTM Cell
use trueno::Vector;
// Tanh activation in LSTM forget/input gates
let cell_state = Vector::from_slice(&vec![...]); // 250K hidden units
let activated = cell_state.tanh().unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Swish | 10K | ~70 µs | - | Below threshold |
| Swish | 100K | ~700 µs | ~70 µs | 10x target |
| Swish | 1M | ~7 ms | ~140 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise swish(x) = x * σ(x) = x / (1 + e^(-x))
- Workgroups: 256 threads per workgroup (1D dispatch)
- Numerical Stability: Separate handling for positive/negative inputs
- Use Cases: Transformers (BERT, GPT, T5), modern neural networks, SiLU activation
- Also known as: SiLU (Sigmoid Linear Unit)
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar (iterator-based with stability checks)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Transformer Inference
use trueno::Vector;
// Swish activation in transformer feed-forward network
let ffn_output = Vector::from_slice(&vec![...]); // 768K hidden units (BERT-large)
let activated = ffn_output.swish().unwrap(); // Auto-uses GPU for >100K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| GELU | 10K | ~80 µs | - | Below threshold |
| GELU | 100K | ~800 µs | ~80 µs | 10x target |
| GELU | 1M | ~8 ms | ~160 µs | 50x target |
GPU Acceleration Strategy (OpComplexity::Low):
- GPU Threshold: >100,000 elements
- Operation: Element-wise GELU(x) ≈ 0.5 * x * (1 + tanh(√(2/π) * (x + 0.044715 * x³)))
- Workgroups: 256 threads per workgroup (1D dispatch)
- Approximation: Tanh-based (standard in production; sketched after this list)
- Use Cases: BERT, GPT-2, GPT-3, Vision Transformers, modern NLP models
- THE activation: Standard in transformer architectures since 2018
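The tanh-based approximation quoted above, written out as a scalar reference (illustrative sketch; Trueno's SIMD/GPU kernels may differ in detail):

```rust
use std::f32::consts::PI;

/// Tanh-based GELU approximation: 0.5 * x * (1 + tanh(√(2/π) * (x + 0.044715 * x³)))
fn gelu_approx(x: f32) -> f32 {
    let c = (2.0 / PI).sqrt(); // √(2/π) ≈ 0.7978846
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}
```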
Automatic Backend Selection:
- Small vectors (<100K elements): Scalar (iterator-based tanh approximation)
- Large vectors (>100K elements): GPU compute shader (10-50x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: BERT Inference
use trueno::Vector;
// GELU activation in BERT transformer layer
let ffn_hidden = Vector::from_slice(&vec![...]); // 3.07M elements (BERT-base: 768 * 4 * 1024 batch)
let activated = ffn_hidden.gelu().unwrap(); // Auto-uses GPU for >100K elements
📖 See Performance Guide and AVX2 Benchmarks for detailed analysis.
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Softmax | 10K | ~120 µs | ~60 µs | 2x target |
| Softmax | 100K | ~1.2 ms | ~120 µs | 10x target |
| Softmax | 1M | ~12 ms | ~600 µs | 20x target |
GPU Acceleration Strategy (OpComplexity::Medium):
- GPU Threshold: >10,000 elements (multi-pass overhead higher than element-wise ops)
- Operation: Multi-pass softmax(x)[i] = exp(x[i] - max) / sum(exp(x - max))
- Implementation: 4-pass GPU reduction (a scalar reference of the same passes follows this list)
- Pass 1: Max reduction (parallel, numerical stability)
- Pass 2: Exp-subtract (element-wise exp(x - max))
- Pass 3: Sum reduction (parallel sum of exp values)
- Pass 4: Normalize (element-wise division by sum)
- Workgroups: 256 threads per workgroup (1D dispatch)
- Numerical Stability: Subtracts max before exp to prevent overflow
- Use Cases: Classification networks, attention mechanisms, transformers
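For clarity, the same max-subtract / exp / sum / normalize passes as a sequential scalar reference (illustrative sketch; the GPU path runs each pass as a parallel compute shader):

```rust
/// Numerically stable softmax: the four passes described above,
/// executed sequentially on the CPU for reference.
fn softmax_ref(x: &[f32]) -> Vec<f32> {
    // Pass 1: max reduction (numerical stability).
    let max = x.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    // Pass 2: exp(x - max).
    let exps: Vec<f32> = x.iter().map(|&v| (v - max).exp()).collect();
    // Pass 3: sum reduction.
    let sum: f32 = exps.iter().sum();
    // Pass 4: normalize.
    exps.iter().map(|&e| e / sum).collect()
}
```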
Automatic Backend Selection:
- Small vectors (<10K elements): Scalar (multi-pass CPU implementation)
- Large vectors (>10K elements): GPU compute shader (5-20x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Attention Mechanism
use trueno::Vector;
// Softmax in multi-head attention (transformer)
let attention_scores = Vector::from_slice(&vec![...]); // 512K scores (64 heads * 128 seq * 64 seq)
let attention_weights = attention_scores.softmax().unwrap(); // Auto-uses GPU for >10K elements
| Operation | Vector Size | Time (Scalar) | Time (GPU Target) | Speedup |
|---|---|---|---|---|
| Log-Softmax | 10K | ~130 µs | ~65 µs | 2x target |
| Log-Softmax | 100K | ~1.3 ms | ~130 µs | 10x target |
| Log-Softmax | 1M | ~13 ms | ~650 µs | 20x target |
GPU Acceleration Strategy (OpComplexity::Medium):
- GPU Threshold: >10,000 elements (multi-pass overhead)
- Operation: Multi-pass log_softmax(x)[i] = x[i] - max - log(sum(exp(x - max)))
- Implementation: 4-pass GPU reduction (same as softmax but final step computes log)
- Pass 1: Max reduction (parallel, numerical stability)
- Pass 2: Exp-subtract (element-wise exp(x - max))
- Pass 3: Sum reduction (parallel sum of exp values)
- Pass 4: Log-normalize (element-wise x - max - log(sum))
- Workgroups: 256 threads per workgroup (1D dispatch)
- Numerical Stability: More stable than computing log(softmax(x))
- Use Cases: Cross-entropy loss, NLL loss, classification training
Automatic Backend Selection:
- Small vectors (<10K elements): Scalar (multi-pass CPU implementation)
- Large vectors (>10K elements): GPU compute shader (5-20x speedup target)
- Graceful fallback to scalar if GPU unavailable
Example: Cross-Entropy Loss
use trueno::Vector;
// Log-softmax for stable cross-entropy loss computation
let logits = Vector::from_slice(&vec![...]); // 100K logits (1000 batch * 100 classes)
let log_probs = logits.log_softmax().unwrap(); // Auto-uses GPU for >10K elements
// Compute NLL loss: -log_probs[target_class]
// More numerically stable than log(softmax(x))
📖 See Performance Guide and AVX2 Benchmarks for detailed analysis.
- 🚀 Write Once, Optimize Everywhere: Single algorithm, multiple backends
- ⚡ Runtime Dispatch: Auto-select best implementation based on CPU features
- 🎮 GPU Acceleration: ⚠️ Matmul ONLY (2-10x for 500×500+) - GPU disabled for all vector operations after empirical benchmarking showed a 2-65,000x slowdown (see performance-analysis.md)
- 🛡️ Zero Unsafe in Public API: Safety via the type system; unsafe isolated in backends
- 📊 Benchmarked Performance: Every optimization proves ≥10% speedup
- 🧪 Extreme TDD: >90% test coverage, mutation testing, property-based tests
- 🎯 Production Ready: PMAT quality gates, Toyota Way principles
🎯 Strategic Positioning: Trueno is designed as a drop-in replacement for NumPy (~35% complete) and PyTorch (~15% complete) in Rust applications. See PyTorch/NumPy Replacement Specification for detailed roadmap to full parity.
// Same code runs optimally on x86, ARM, WASM, GPU
let result = a.add(&b).unwrap();
Trueno automatically selects the best backend:
- x86_64: AVX2 → AVX → SSE2 → Scalar (AVX-512 used for compute-bound operations only)
- ARM: NEON → Scalar
- WASM: SIMD128 → Scalar
- GPU (optional): Vulkan/Metal/DX12/WebGPU (>1000×1000 matrices)
Operation-Aware Backend Selection (v0.7.0+):
- Compute-bound (dot, max, min): Prefers AVX-512 (6-17x speedup)
- Memory-bound (add, sub, mul): Prefers AVX2 (avoids AVX-512 regression)
- See AVX512_ANALYSIS.md for detailed analysis
// Public API is 100% safe Rust
let result = vector.add(&other)?; // Returns Result<Vector, TruenoError>
// Size mismatches caught at runtime
let a = Vector::from_slice(&[1.0, 2.0]);
let b = Vector::from_slice(&[1.0, 2.0, 3.0]);
assert!(a.add(&b).is_err()); // SizeMismatch error
Compute-Bound Operations (High Arithmetic Intensity):
| Operation | Size | Speedup vs Scalar | Backend | Status |
|---|---|---|---|---|
| dot() | 100 | 6.4x | AVX-512 | ✅ Validated |
| dot() | 1K | 17.2x | AVX-512 | ✅ Outstanding! |
| dot() | 10K | 8.8x | AVX-512 | ✅ Validated |
| max() | 1K | 12.1x | AVX-512 | ✅ Validated |
| min() | 1K | 11.8x | AVX-512 | ✅ Validated |
Memory-Bound Operations (Low Arithmetic Intensity):
| Operation | Size | Speedup vs Scalar | Backend | Status |
|---|---|---|---|---|
| add() | 100 | 1.0x | AVX2 | ✅ Realistic |
| add() | 1K | 1.0-1.2x | AVX2 | ✅ Realistic |
| mul() | 1K | 1.0x | AVX2 | ✅ Realistic |
| sub() | 1K | 1.0x | AVX2 | ✅ Realistic |
Key Insight: Memory-bound operations are limited by DDR4 bandwidth (~50 GB/s), not computation, so SIMD provides minimal benefit for add/mul/sub. For example, adding two 1K-element f32 vectors moves roughly 12 KB (two reads plus one write); at 50 GB/s that traffic alone takes about 0.24 µs, a floor that wider SIMD cannot lower.
Performance Analysis Documents:
- BENCHMARK_ANALYSIS.md - Complete benchmark overview
- AVX512_ANALYSIS.md - Why AVX-512 hurts memory-bound operations
- AVX512_COMPUTE_BOUND_VALIDATION.md - AVX-512 excellence for compute-bound
Add to your Cargo.toml:
[dependencies]
trueno = "0.1"Enable GPU support for very large matrices:
[dependencies]
trueno = { version = "0.1", features = ["gpu"] }
Requirements:
- Vulkan, Metal, or DirectX 12 compatible GPU
- wgpu runtime dependencies
- GPU backend automatically activates for matrices >1000×1000
For bleeding-edge features:
[dependencies]
trueno = { git = "https://github.com/paiml/trueno", features = ["gpu"] }
use trueno::Vector;
// Element-wise addition
let a = Vector::from_slice(&[1.0, 2.0, 3.0]);
let b = Vector::from_slice(&[4.0, 5.0, 6.0]);
let sum = a.add(&b).unwrap();
assert_eq!(sum.as_slice(), &[5.0, 7.0, 9.0]);
// Element-wise multiplication
let product = a.mul(&b).unwrap();
assert_eq!(product.as_slice(), &[4.0, 10.0, 18.0]);
// Dot product
let dot = a.dot(&b).unwrap();
assert_eq!(dot, 32.0); // 1*4 + 2*5 + 3*6
// Reductions
let total = a.sum().unwrap(); // 6.0
let maximum = a.max().unwrap(); // 3.0
Automatic Selection (Recommended):
use trueno::Vector;
// Auto-selects best backend based on CPU features
let v = Vector::from_slice(&data);
// Operations automatically use optimal backend:
// - Compute-bound (dot, max, min): AVX-512 if available (6-17x)
// - Memory-bound (add, sub, mul): AVX2 (avoids AVX-512 regression)
let dot = a.dot(&b).unwrap(); // Uses AVX-512 (17x speedup!)
let sum = a.add(&b).unwrap(); // Uses AVX2 (avoids slowdown)
Operation-Aware Selection (v0.7.0+):
use trueno::{select_backend_for_operation, OperationType, Backend};
// Select backend for specific operation type
let backend = select_backend_for_operation(OperationType::ComputeBound);
// Returns: Backend::AVX512 (for dot, max, min)
let backend = select_backend_for_operation(OperationType::MemoryBound);
// Returns: Backend::AVX2 (for add, sub, mul - avoids AVX-512)
Explicit Backend (for testing/benchmarking):
use trueno::{Vector, Backend};
// Force specific backend
let v = Vector::from_slice_with_backend(&data, Backend::AVX2);
let v = Vector::from_slice_with_backend(&data, Backend::AVX512); // May be slower!
let v = Vector::from_slice_with_backend(&data, Backend::Scalar);
Why Operation-Aware Matters:
- AVX-512 is 33% slower than scalar for multiplication (memory-bound)
- AVX-512 is 17x faster than scalar for dot product (compute-bound)
- Automatic selection ensures you get the best of both worlds
use trueno::Matrix;
// Create a 5×5 input image
let image = Matrix::from_vec(
5, 5,
vec![
0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 9.0, 0.0, 0.0, // Center pixel
0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0,
]
).unwrap();
// 3×3 averaging filter (blur)
let kernel_val = 1.0 / 9.0;
let kernel = Matrix::from_vec(3, 3, vec![kernel_val; 9]).unwrap();
// Apply convolution (valid padding)
let filtered = image.convolve2d(&kernel).unwrap();
// Output is 3×3 (5 - 3 + 1 = 3)
assert_eq!(filtered.rows(), 3);
assert_eq!(filtered.cols(), 3);
assert!((filtered.get(1, 1).unwrap() - 1.0).abs() < 1e-5); // Center smoothed
// Sobel edge detection (horizontal edges)
let sobel_h = Matrix::from_vec(
3, 3,
vec![
-1.0, -2.0, -1.0,
0.0, 0.0, 0.0,
1.0, 2.0, 1.0,
]
).unwrap();
let edges = image.convolve2d(&sobel_h).unwrap();
// GPU acceleration for large images
let large_image = Matrix::zeros(512, 512); // 512×512 image
let result = large_image.convolve2d(&kernel).unwrap(); // Auto-uses GPU (>10K elements)
use trueno::{Vector, TruenoError};
let a = Vector::from_slice(&[1.0, 2.0]);
let b = Vector::from_slice(&[1.0, 2.0, 3.0]);
match a.add(&b) {
Ok(result) => println!("Sum: {:?}", result.as_slice()),
Err(TruenoError::SizeMismatch { expected, actual }) => {
eprintln!("Size mismatch: expected {}, got {}", expected, actual);
}
Err(e) => eprintln!("Error: {}", e),
}
Trueno integrates with the Pragmatic AI Labs transpiler ecosystem:
# Ruchy syntax
let v = Vector([1.0, 2.0]) + Vector([3.0, 4.0])
# Transpiles to: trueno::Vector::add()
# Python/NumPy code
import numpy as np
result = np.dot(a, b)
# Transpiles to: trueno::Vector::dot(&a, &b)
// C SIMD intrinsics
__m256 result = _mm256_add_ps(a, b);
// Transpiles to: trueno::Vector::add() (safe!)
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install development tools
make install-tools
# Development build
make build
# Release build (optimized)
make build-release
# Run tests
make test
# Fast test run (<5 min target)
make test-fast
Trueno enforces EXTREME TDD quality standards:
# Run all quality gates (pre-commit)
make quality-gates
# Individual gates
make lint # Zero warnings policy
make fmt-check # Format verification
make test-fast # All tests (<5 min)
make coverage # >85% required (<10 min)
make mutate # Mutation testing (>80% kill rate)
Quality Metrics:
- ✅ Test Coverage: 100% (target >85%)
- ✅ PMAT TDG Score: 96.1/100 (A+)
- ✅ Clippy Warnings: 0
- ✅ Property Tests: 10 tests × 100 cases each
- ✅ Cyclomatic Complexity: Median 1.0 (very low)
# Technical Debt Grading
make pmat-tdg
# Complexity analysis
make pmat-analyze
# Repository health score
make pmat-score
Trueno integrates Renacer for deep performance profiling:
# Profile benchmarks to find bottlenecks
make profile
# Generate flamegraph visualization
make profile-flamegraph
# Profile specific benchmark
make profile-bench BENCH=vector_ops
# Profile test suite
make profile-test
Profiling Use Cases:
- 🔬 SIMD Validation: Verify optimizations show expected speedups (2-8x)
- 🎯 Hot Path Analysis: Identify top 10 functions consuming most time
- 💾 Memory Bottlenecks: Detect cache misses and memory access patterns
- 🚀 Backend Selection: Validate runtime dispatch overhead is minimal
- 📊 Flamegraph Visualization: Visual analysis of performance characteristics
Example Output:
🔬 Profiling benchmark: vector_ops
I/O Bottleneck: memcpy() - 15.2ms (42% of runtime)
Hot Functions:
1. _mm256_add_ps - 3.4ms (9.4%)
2. Vector::dot - 2.1ms (5.8%)
3. backend_dispatch - 0.3ms (0.8%)
Trueno uses Renacer 0.6.2 for syscall-level performance regression detection:
# Capture golden traces (performance baselines)
./scripts/capture_golden_traces.sh
# View trace summary
cat golden_traces/backend_detection_summary.txt
Performance Baselines (v0.7.0):
- backend_detection: 0.73ms, 87 syscalls ✅
- matrix_operations: 1.56ms, 168 syscalls ✅
- activation_functions: 1.30ms, 159 syscalls ✅
- ml_similarity: 0.82ms, 109 syscalls ✅ (fastest)
Use Cases:
- 🔒 Regression Detection: CI fails if syscall count/latency exceeds budget
- 🚨 PCIe Bottleneck Detection: Warns if GPU transfers >> compute time
- 📊 Build-Time Assertions: Enforce performance contracts (renacer.toml)
- 🔍 Source Correlation: Map syscalls to Rust source code
See docs/integration-report-golden-trace.md for details.
Trueno uses multi-layered testing:
- Unit Tests (30 tests): Basic functionality, edge cases, error paths
- Property Tests (10 tests × 100 cases): Mathematical property verification (see the sketch after this list)
  - Commutativity: a + b == b + a
  - Associativity: (a + b) + c == a + (b + c)
  - Identity elements: a + 0 == a, a * 1 == a
  - Distributivity: a * (b + c) == a*b + a*c
- Integration Tests: Backend selection, large datasets
- Benchmarks: Performance regression prevention (Criterion.rs)
- Mutation Tests: Test suite effectiveness (>80% kill rate)
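A minimal sketch of what such a property test can look like (assuming the proptest crate; illustrative, not necessarily Trueno's actual test code):

```rust
use proptest::prelude::*;
use trueno::Vector;

proptest! {
    // Commutativity of element-wise addition: a + b == b + a
    #[test]
    fn add_commutative(pairs in prop::collection::vec((-1e6f32..1e6, -1e6f32..1e6), 1..256)) {
        let (xs, ys): (Vec<f32>, Vec<f32>) = pairs.into_iter().unzip();
        let a = Vector::from_slice(&xs);
        let b = Vector::from_slice(&ys);
        let ab = a.add(&b).unwrap();
        let ba = b.add(&a).unwrap();
        prop_assert_eq!(ab.as_slice(), ba.as_slice());
    }
}
```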
Run property tests with verbose output:
cargo test property_tests -- --nocapture
Makefile Targets (Recommended):
# Complete comparison suite (Rust + NumPy + PyTorch + Analysis)
make bench-comprehensive # ~12-17 minutes, interactive confirmation
# Individual components
make bench # Rust benchmarks only (~10-15 min)
make bench-python # Python benchmarks only (~2 min)
make bench-compare-frameworks # Generate comparison report
Alternative (Direct script execution):
./benchmarks/run_all.sh # Same as make bench-comprehensive
What's Included:
- ✅ Trueno benchmarks (Rust/Criterion) - 10-15 minutes
- ✅ Python benchmarks (NumPy/PyTorch) - 2-3 minutes
- ✅ Comparative analysis and report generation - <1 minute
Results:
- benchmarks/BENCHMARK_RESULTS.md - Performance comparison report
- benchmarks/comparison_summary.json - Machine-readable data
- target/criterion/ - Detailed Criterion benchmark data
Success Criteria (v0.3.0): Trueno within 20% of NumPy for ≥80% of 1D operations
📊 Performance Results (v0.3.0 - Comprehensive Benchmarks):
✅ Trueno dramatically outperforms NumPy and PyTorch:
- 88.5% faster than NumPy (54/61 comparisons)
- 90.2% faster than PyTorch (55/61 comparisons)
Extreme speedups on reductions (small vectors):
- sum: 310.97x faster than NumPy (100 elements)
- max: 356.30x faster than NumPy (100 elements)
- dot: 123.97x faster than NumPy (100 elements)
- norm_l2: 178.33x faster than NumPy (100 elements)
Consistent wins on element-wise operations:
- add: 1.44-12.53x faster than NumPy
- mul: 1.44-17.94x faster than NumPy
- sub: 1.57-12.78x faster than NumPy
- div: 1.54-10.27x faster than NumPy
Optimizations needed (slower than NumPy):
- tanh at large sizes (5.59x slower at 100K elements)
- relu at 1M elements (8.32x slower - investigation needed)
Architecture: AVX-512 dominates reductions, AVX2 optimal for element-wise ops
See full report: benchmarks/BENCHMARK_RESULTS.md
See benchmarks/README.md for detailed documentation.
# Run all Rust benchmarks
make bench
# Benchmark specific operation
cargo bench -- add
cargo bench -- dot
# GPU benchmarks (if available)
make bench-gpu
# Save baseline for regression detection
make bench-save-baseline
make bench-compare
Benchmark results stored in target/criterion/:
- Throughput (elements/second)
- Latency (mean, median, p95, p99)
- Backend comparison (Scalar vs SSE2 vs AVX2 vs AVX-512)
- Statistical analysis (outliers, confidence intervals)
Operations Benchmarked (25 total):
- Element-wise: add, sub, mul, div, scale, abs, clamp, lerp, fma
- Reductions: dot, sum, max, min, argmax, argmin, norm_l1, norm_l2, norm_linf
- Activations: relu, sigmoid, tanh, gelu, swish, exp, softmax, log_softmax
Rust: Included in dev-dependencies (Criterion)
Python (for comprehensive comparison):
# Install UV (Rust-based Python package manager)
curl -LsSf https://astral.sh/uv/install.sh -o /tmp/uv-install.sh
bash /tmp/uv-install.sh
rm -f /tmp/uv-install.sh
# Dependencies installed automatically by make targets
Trueno includes several runnable examples demonstrating real-world use cases:
# Machine Learning: Cosine similarity, L2 normalization, k-NN
cargo run --release --example ml_similarity
# Performance: Compare Scalar vs SSE2 backends
cargo run --release --example performance_demo
# Backend Detection: Runtime CPU feature detection
cargo run --release --example backend_detection
ML Example Features:
- Document similarity for recommendation systems (cosine similarity sketched below)
- Feature normalization for neural networks
- k-Nearest Neighbors classification
- Demonstrates 340% speedup for dot products
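A minimal sketch of the cosine-similarity building block, composed only from the documented dot-product API (illustrative; the Vector<f32> type parameter is assumed, and the actual ml_similarity example may differ):

```rust
use trueno::Vector;

/// Cosine similarity between two feature vectors:
/// cos(a, b) = a·b / (||a|| * ||b||), with norms derived from dot().
fn cosine_similarity(a: &Vector<f32>, b: &Vector<f32>) -> f32 {
    let dot = a.dot(b).unwrap();
    let norm_a = a.dot(a).unwrap().sqrt();
    let norm_b = b.dot(b).unwrap().sqrt();
    dot / (norm_a * norm_b)
}
```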
See examples/ directory for complete code.
trueno/
├── src/
│ ├── lib.rs # Public API, backend enum, auto-selection
│ ├── error.rs # Error types (TruenoError)
│ ├── vector.rs # Vector<T> implementation
│ └── backends/ # Backend implementations (future)
│ ├── scalar.rs
│ ├── simd/
│ │ ├── avx2.rs
│ │ ├── avx512.rs
│ │ └── neon.rs
│ ├── gpu.rs
│ └── wasm.rs
├── benches/ # Criterion benchmarks (future)
├── docs/
│ └── specifications/ # Design specifications
├── Cargo.toml # Dependencies, optimization flags
├── Makefile # Quality gates, development commands
└── README.md # This file
- Core Vector<f32> API (add, mul, dot, sum, max)
- Error handling with TruenoError
- 100% test coverage (40 tests)
- Property-based tests (PROPTEST_CASES=100)
- PMAT quality gates integration
- Documentation and README
- Runtime CPU feature detection (is_x86_feature_detected!)
- SSE2 implementation (baseline x86_64)
- Benchmarks proving ≥10% speedup (66.7% of tests, avg 178.5%)
- Auto-dispatch based on CPU features
- Backend trait architecture
- Comprehensive performance analysis
- AVX2 implementation with FMA support (256-bit SIMD)
- Benchmarks proving exceptional speedups (1.82x for dot product)
- Performance analysis and documentation
- All quality gates passing (0 warnings, 78 tests)
- ARM NEON implementation (128-bit SIMD)
- Runtime feature detection (ARMv7/ARMv8/AArch64)
- Cross-platform compilation support
- Comprehensive tests with cross-validation
- Benchmarks on ARM hardware (pending ARM access)
- WASM SIMD128 implementation (128-bit SIMD)
- All 5 operations with f32x4 intrinsics
- Comprehensive tests with cross-validation
- Browser deployment example (future)
- Edge computing use case (future)
- wgpu integration (optional gpu feature flag)
- Compute shader kernels (WGSL): matmul, vec_add, dot product
- Host-device memory transfer with async execution
- GPU dispatch heuristics (>1000×1000 for matmul)
- Automatic fallback to SIMD/CPU if GPU unavailable
- Vector operations on GPU (vec_add, dot product with parallel reduction)
- Performance benchmarks (GPU vs Scalar baseline validation)
- Multi-GPU support (deferred to future phase)
- GPU reductions (sum, max, min) (deferred to future phase)
Phase 6 Status:
- Element-wise subtraction (sub) and division (div)
- Reductions: min, max, sum, sum_kahan (Kahan summation)
- Index finding: argmax, argmin
- Vector norms: norm_l2 (Euclidean norm), normalize (unit vector)
- Activation functions: ReLU, Leaky ReLU, ELU, Sigmoid, Softmax/Log-Softmax, GELU, Swish/SiLU
- Preprocessing: zscore, minmax_normalize, clip
- Statistical operations: mean, variance, stddev, covariance, correlation
- Matrix type with row-major storage (NumPy-compatible)
- Matrix multiplication (matmul) - naive O(n³)
- Matrix transpose
- Matrix-vector operations (matvec, vecmat)
- Comprehensive examples (matrix_operations.rs)
- SIMD-optimized matmul (Vector::dot with transpose optimization)
- Backend equivalence tests (naive vs SIMD)
- GPU dispatch for large matrices (>1000×1000 with wgpu)
Phase 8 Status: ✅ COMPLETE - Full matrix operations with 3-tier backend selection. 759 tests passing (637 lib + 19 integration + 103 bench). Matrix multiplication automatically selects optimal backend: GPU for ≥500×500 matrices (empirical: 2-10x speedup), SIMD for >64×64 matrices (2-8x speedup), naive for smaller matrices (minimal overhead). GPU backend uses wgpu with WGSL compute shaders (16×16 workgroups), async execution via pollster, and graceful CPU fallback. See performance-analysis.md for complete empirical validation.
Phase 7 Status: ✅ COMPLETE - Core vector operations with 587 tests passing. The library now supports:
- Element-wise operations: add, sub, mul, div, abs (absolute value), neg (negation/unary minus), clamp (range constraint), lerp (linear interpolation), fma (fused multiply-add), sqrt (square root), recip (reciprocal), pow (power), exp (exponential), ln (natural logarithm), sin (sine), cos (cosine), tan (tangent), asin (arcsine), acos (arccosine), atan (arctangent), sinh (hyperbolic sine), cosh (hyperbolic cosine), tanh (hyperbolic tangent), asinh (inverse hyperbolic sine), acosh (inverse hyperbolic cosine), atanh (inverse hyperbolic tangent), floor (round down), ceil (round up), round (round to nearest), trunc (truncate toward zero), fract (fractional part), signum (sign function), copysign (copy sign from one vector to another), minimum (element-wise minimum of two vectors), maximum (element-wise maximum of two vectors)
- Scalar operations: scale (scalar multiplication with full SIMD support)
- Dot product: Optimized for ML/scientific computing
- Reductions: sum (naive + Kahan), min, max, sum_of_squares, mean (arithmetic average), variance (population variance), stddev (standard deviation), covariance (population covariance between two vectors), correlation (Pearson correlation coefficient)
- Activation functions: relu (rectified linear unit - max(0, x)), leaky_relu (leaky ReLU with configurable negative slope), elu (exponential linear unit with smooth gradients), sigmoid (logistic function - 1/(1+e^-x)), softmax (convert logits to probability distribution), log_softmax (numerically stable log of softmax for cross-entropy loss), gelu (Gaussian Error Linear Unit - smooth activation used in transformers like BERT/GPT), swish/silu (Swish/Sigmoid Linear Unit - self-gated activation used in EfficientNet/MobileNet v3)
- Preprocessing: zscore (z-score normalization/standardization), minmax_normalize (min-max scaling to [0,1] range), clip (constrain values to [min,max] range)
- Index operations: argmin, argmax
- Vector norms: L1 (Manhattan), L2 (Euclidean), L∞ (max norm), normalization to unit vectors
- Numerical stability: Kahan summation for accurate floating-point accumulation (sketched after this list)
- FMA optimization: Hardware-accelerated fused multiply-add on AVX2 and NEON platforms
- Mathematical functions: Element-wise square root, reciprocal, power, exponential, logarithm, trigonometric (sine, cosine, tangent), inverse trigonometric (arcsine, arccosine, arctangent), hyperbolic functions (sinh, cosh, tanh), and inverse hyperbolic functions (asinh, acosh, atanh) for ML (neural network activations), signal processing (waveforms, oscillators, phase recovery, FM demodulation), physics simulations, graphics (perspective projection, inverse transformations, lighting models, camera orientation), navigation (GPS, spherical trigonometry, bearing calculations, heading calculations), robotics (orientation calculations, inverse kinematics, steering angles), and Fourier analysis
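For reference, the compensated (Kahan) summation mentioned above as a minimal scalar sketch (illustrative; not necessarily Trueno's exact sum_kahan implementation):

```rust
/// Kahan (compensated) summation: carries a running compensation term
/// that recovers low-order bits lost when adding a small value to a
/// large running sum.
fn kahan_sum(values: &[f32]) -> f32 {
    let mut sum = 0.0f32;
    let mut c = 0.0f32; // running compensation for lost low-order bits
    for &v in values {
        let y = v - c;     // subtract previously lost bits
        let t = sum + y;   // add; low-order bits of y may be lost here
        c = (t - sum) - y; // recover what was lost
        sum = t;
    }
    sum
}
```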
We welcome contributions! Please follow these guidelines:
- Quality Gates: All PRs must pass make quality-gates
- Zero clippy warnings
- 100% formatted code
- All tests passing
- Coverage >85%
- Testing: Include tests for new features
- Unit tests for basic functionality
- Property tests for mathematical operations
- Benchmarks for performance claims
- Documentation: Update README and docs for new features
- Toyota Way Principles:
- Jidoka (built-in quality): Tests catch issues immediately
- Kaizen (continuous improvement): Every PR makes the codebase better
- Genchi Genbutsu (go and see): Benchmark claims, measure reality
This project is licensed under the MIT License - see the LICENSE file for details.
- Pragmatic AI Labs - https://github.com/paiml
- Inspired by NumPy, Eigen, and ndarray
- SIMD guidance from std::arch documentation
- GPU compute via the wgpu project
- Quality standards from Toyota Production System
- PMAT quality gates by Pragmatic AI Labs
If you use Trueno in academic work, please cite:
@software{trueno2025,
title = {Trueno: Multi-Target High-Performance Compute Library},
author = {Pragmatic AI Labs},
year = {2025},
url = {https://github.com/paiml/trueno}
}
- Issues: GitHub Issues
- Email: contact@paiml.com
Built with EXTREME TDD and Toyota Way principles 🚗⚡
