Description
optimath and simd may incrementally boost your current linalg module's performance
but it's still O(n^3) code
if you're willing to venture beyond linear algebra and scalar calculus, https://crates.io/crates/geonum achieves O(1) operations regardless of dimension
it consumes minimal memory since multivectors are represented with just 2 components (length and angle) instead of the 2^n components required by traditional geometric algebra
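a minimal sketch of the idea (the `Geonum` struct and `geo_mul` method here are illustrative stand-ins, not necessarily the crate's actual API):

```rust
// illustrative sketch only, see the crate docs for the real types
#[derive(Clone, Copy, Debug)]
struct Geonum {
    length: f64, // magnitude
    angle: f64,  // orientation in radians
}

impl Geonum {
    // geometric product in this representation: lengths multiply and
    // angles add, which is constant time regardless of ambient dimension
    fn geo_mul(self, other: Geonum) -> Geonum {
        Geonum {
            length: self.length * other.length,
            angle: self.angle + other.angle,
        }
    }
}

fn main() {
    let a = Geonum { length: 2.0, angle: 0.0 };
    let b = Geonum { length: 3.0, angle: std::f64::consts::FRAC_PI_2 };
    let c = a.geo_mul(b); // length 6, angle pi/2, no 2^n component arrays
    println!("{c:?}");
}
```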
geonum's machine_learning_test.rs suite demonstrates:
- perceptron classification in 50,000D space with O(1) vs O(n) complexity
- linear regression without expensive gram matrix computation
- neural networks with O(1) forward/backward passes vs O(n²) matrix multiplications
- clustering with O(1) distance calculations vs O(n) euclidean distances (see the sketch after this list)
- dimensionality reduction without O(n³) eigendecomposition
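the O(1) distance claim reduces to arithmetic on two lengths and one angle difference (again an illustrative sketch, the `dot` and `distance` helpers below are hypothetical, not the crate's API):

```rust
// illustrative sketch, not the crate's actual types
#[derive(Clone, Copy)]
struct Geonum {
    length: f64, // magnitude
    angle: f64,  // orientation in radians
}

// dot product from the lengths and the angle between: |a||b|cos(delta), O(1)
fn dot(a: Geonum, b: Geonum) -> f64 {
    a.length * b.length * (b.angle - a.angle).cos()
}

// euclidean distance via the law of cosines, still O(1): no n-element loop
fn distance(a: Geonum, b: Geonum) -> f64 {
    (a.length * a.length + b.length * b.length - 2.0 * dot(a, b)).sqrt()
}

fn main() {
    // a "50,000D" point costs the same as a 2D point here because the
    // dimension never appears in the arithmetic
    let a = Geonum { length: 1.0, angle: 0.3 };
    let b = Geonum { length: 2.0, angle: 1.1 };
    println!("dot = {}, dist = {}", dot(a, b), distance(a, b));
}
```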
at just 16 dimensions, geonum is 4300× faster than a tensor implementation, and it maintains a consistent ~78ns per operation even in million-dimensional spaces
so while optimath/simd provides instruction-level parallelism for the existing operations, geonum replaces them with a more scalable design
encoding orthogonality relationships directly as angles instead of computing them repeatedly eliminates the computational bottleneck entirely (an orders-of-magnitude improvement beyond what's possible with traditional optimizations)
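concretely, an orthogonality test collapses to a single angle comparison (`is_orthogonal` is a hypothetical helper for illustration, not the crate's API):

```rust
use std::f64::consts::{FRAC_PI_2, PI};

// only the angle matters for orthogonality, so the test is one comparison:
// O(1) instead of an O(n) component-wise dot product
fn is_orthogonal(angle_a: f64, angle_b: f64) -> bool {
    let diff = (angle_b - angle_a).rem_euclid(PI);
    (diff - FRAC_PI_2).abs() < 1e-10
}

fn main() {
    assert!(is_orthogonal(0.0, FRAC_PI_2)); // x and y axes are orthogonal
    assert!(!is_orthogonal(0.0, PI));       // x and -x are antiparallel
    println!("orthogonality checks passed");
}
```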
try it out: https://github.com/mxfactorial/geonum/tree/develop?tab=readme-ov-file#learn-with-ai
Originally posted by @mxfactorial in #292