Advanced Optimizers (AIO)

A comprehensive, all-in-one collection of optimization algorithms for deep learning, designed for maximum efficiency, minimal memory footprint, and superior performance across diverse model architectures and training scenarios.

📦 Installation

pip install adv_optm
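
A minimal usage sketch is shown below. It assumes the optimizer classes listed later in this README (e.g., Adam_Adv) are importable from the adv_optm package and accept the usual torch.optim-style constructor arguments; verify the exact API against the package documentation.

```python
import torch
from adv_optm import Adam_Adv  # assumed top-level export; check the package docs

model = torch.nn.Linear(128, 10)

# Constructor arguments here follow standard torch.optim conventions and are assumptions;
# library-specific feature flags are covered in the feature guide below.
optimizer = Adam_Adv(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

for _ in range(10):
    x = torch.randn(4, 128)
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```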

🧠 Core Innovations

This library integrates multiple state-of-the-art optimization techniques, validated through extensive research and practical training, including 1-bit compression of optimizer states:

Memory-Efficient Optimization (SMMF-inspired)

  • Paper: SMMF: Square-Matricized Momentum Factorization
  • Approach: Uses rank-1 non-negative matrix factorization with a reconstruction cycle (factor → reconstruct → update → factor)
  • Innovation:
    • First moment split into 1-bit sign + absolute value
    • Final storage: four factored vectors + one 1-bit sign state
    • Preserves Adam-like update quality with drastically reduced memory (see the sketch below)
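
The sketch below illustrates the factor → reconstruct → update → factor cycle on a single 2-D parameter, using a simple rank-1 (Adafactor-style) factorization and a 1-bit sign split for the first moment. It is a toy illustration of the idea only; the library's actual SMMF-based factorization and state layout will differ.

```python
import torch

def factor(nonneg: torch.Tensor):
    """Rank-1 factorization of a non-negative matrix: keep only row/column sums."""
    return nonneg.sum(dim=1), nonneg.sum(dim=0)

def reconstruct(row: torch.Tensor, col: torch.Tensor) -> torch.Tensor:
    """Rank-1 reconstruction: outer(row, col) normalized by the total mass."""
    return torch.outer(row, col) / row.sum().clamp_min(1e-30)

# Stored state: four factored vectors + one 1-bit sign tensor (illustrative names).
m_row, m_col = torch.zeros(256), torch.zeros(512)   # factors of |first moment|
v_row, v_col = torch.zeros(256), torch.zeros(512)   # factors of second moment
m_sign = torch.zeros(256, 512, dtype=torch.bool)    # 1-bit sign of first moment

beta1, beta2 = 0.9, 0.999
grad = torch.randn(256, 512)

# Reconstruct -> update -> factor (one optimizer step).
sign = m_sign.to(grad.dtype) * 2 - 1                 # map {0, 1} -> {-1, +1}
m = sign * reconstruct(m_row, m_col)
v = reconstruct(v_row, v_col)

m = beta1 * m + (1 - beta1) * grad
v = beta2 * v + (1 - beta2) * grad.pow(2)

update = m / (v.sqrt() + 1e-8)                       # Adam-like update from full tensors

m_sign = m > 0                                       # re-compress: 1-bit sign ...
m_row, m_col = factor(m.abs())                       # ... plus factored magnitude
v_row, v_col = factor(v)
```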

⚡ Performance Characteristics

Memory Efficiency (SDXL Model – 6.5 GB)

| Optimizer | Memory Usage | Description |
|---|---|---|
| Adopt_Factored | 328 MB | 4 small vectors + 1-bit state |
| Adopt_Factored + AdEMAMix | 625 MB | 6 small vectors + two 1-bit states |
| Simplified_AdEMAMix | 328 MB | Same as standard factored (no extra state) |

Speed Comparison (SDXL, Batch Size 4)

| Optimizer | Speed | Notes |
|---|---|---|
| Adafactor | ~8.5 s/it | Baseline |
| Adopt_Factored | ~10 s/it | +18% overhead from compression |
| Adopt_Factored + AdEMAMix | ~12 s/it | +41% overhead (3 factored states) |

🧪 Available Optimizers

Standard Optimizers (All support factored=True/False)

| Optimizer | Description | Best For |
|---|---|---|
| Adam_Adv | Advanced Adam implementation | General purpose |
| Adopt_Adv | Adam variant with independent beta2 | Stable training in small-batch regimes |
| Prodigy_Adv | Prodigy with D-Adaptation | Adam with automatic LR tuning |
| Simplified_AdEMAMix | Adam variant with accumulator momentum | Small/large-batch training when tuned correctly |
| Lion_Adv | Advanced Lion implementation | Memory-constrained environments |
| Prodigy_Lion_Adv | Prodigy + Lion combination | Lion with automatic LR tuning |
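
For example, Prodigy-family optimizers are conventionally run with lr=1.0 and left to adapt the effective step size themselves. The snippet below is a hedged sketch: the Prodigy_Adv import, the lr=1.0 convention carried over from upstream Prodigy, and the factored flag (suggested by the table above) should all be verified against the package documentation.

```python
import torch
from adv_optm import Prodigy_Adv  # assumed top-level export

model = torch.nn.Linear(128, 10)

# lr=1.0 follows the usual Prodigy/D-Adaptation convention; factored=True is the
# memory-efficient 1-bit factored mode described above (flag name assumed).
optimizer = Prodigy_Adv(model.parameters(), lr=1.0, factored=True)
```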

βš™οΈ Feature Matrix

| Feature | Adam_Adv | Adopt_Adv | Prodigy_Adv | Simplified_AdEMAMix | Lion_Adv |
|---|---|---|---|---|---|
| Factored | ✓ | ✓ | ✓ | ✓ | ✓ |
| AdEMAMix | ✓ | ✓ | ✓ | ✗ | ✗ |
| Simplified_AdEMAMix | ✗ | ✓ | ✓ | ✓ | ✗ |
| OrthoGrad | ✓ | ✓ | ✓ | ✓ | ✓ |
| Grams | ✓ | ✓ | ✓ | ✗ | ✗ |
| Cautious | ✓ | ✓ | ✓ | ✗ | ✓ |
| atan2 | ✓ | ✓ | ✓ | ✗ | ✗ |
| Stochastic Rounding | ✓ | ✓ | ✓ | ✓ | ✓ |
| Fused Backward Pass | ✓ | ✓ | ✓ | ✓ | ✓ |
| Kourkoutas-β | ✓ | ✓ | ✓ | ✓ | ✗ |

πŸ› οΈ Comprehensive Feature Guide

A. Universal Safe Features

These features work with all optimizers and are generally safe to enable.

| Feature | Description | Recommended Usage | Performance Impact | Theoretical Basis | Compatibility |
|---|---|---|---|---|---|
| Fused Backward Pass | Applies the optimizer update during the backward pass; gradients are consumed immediately and their memory is freed on the fly | Memory-constrained environments | Reduces peak memory | Memory optimization | All optimizers |
| Stochastic Rounding | Replaces nearest rounding with stochastic rounding to preserve small gradient updates in BF16 | BF16 training | Minimal overhead (<5%) | Revisiting BFloat16 Training | All optimizers |
| OrthoGrad | Removes the gradient component parallel to the weights to reduce overfitting (sketched below) | Full fine-tuning without weight decay | +33% time overhead (BS=4); less at larger batch sizes | Grokking at the Edge of Numerical Stability | All optimizers |
| Factored | Memory-efficient optimization via rank-1, 1-bit factorization of optimizer states | Large models / memory-limited hardware | Adds compression overhead | SMMF | All optimizers |
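
The OrthoGrad projection is simple enough to show directly. The helper below is a generic sketch of the idea (remove the gradient component parallel to the weights, then restore the original gradient norm), not the library's internal implementation; call it between loss.backward() and optimizer.step().

```python
import torch

@torch.no_grad()
def orthogonalize_gradients(parameters) -> None:
    """OrthoGrad-style projection: g <- g - (<w, g> / <w, w>) * w, applied per tensor,
    then rescaled so the projected gradient keeps the original gradient norm."""
    for p in parameters:
        if p.grad is None:
            continue
        w, g = p.detach().view(-1), p.grad.view(-1)
        proj = torch.dot(w, g) / (torch.dot(w, w) + 1e-30)
        g_orth = g - proj * w
        g.copy_(g_orth * (g.norm() / (g_orth.norm() + 1e-30)))
```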

B. Individual Features

| Feature | Description | Recommended Usage | Performance Impact | Theoretical Basis | Compatibility |
|---|---|---|---|---|---|
| Cautious | Applies the update only where the gradient direction aligns with the momentum direction (sketched below) | Accelerating convergence | No overhead | C-Optim | Adam/Adopt/Prodigy/Lion |
| Grams | Update direction derived purely from the current gradient | When Cautious is insufficient | No overhead | Grams | Adam/Adopt/Prodigy |
| AdEMAMix | Dual-EMA system that keeps old gradients relevant over tens of thousands of steps | Long training runs, especially where model forgetting is a concern | +1 state memory | AdEMAMix | Adam/Adopt/Prodigy |
| Simplified_AdEMAMix | Accumulator-based momentum; a single-EMA variant of AdEMAMix | All scenarios when tuned correctly | No overhead | Connections | Adam/Adopt/Prodigy |
| atan2 | Robust epsilon replacement with built-in update clipping | Stable, bounded updates (recommended for Adopt, which needs clipping) | No overhead | Adam-atan2 | Adam/Adopt/Prodigy |
| Kourkoutas-β | Layer-wise adaptive β₂ based on a gradient "sunspike" ratio | Noisy, small-batch, large-batch, or high-LR training | No overhead | Kourkoutas-β | Adam/Adopt/Prodigy/Simplified_AdEMAMix |

Note: If both Cautious and Grams are enabled, Grams takes precedence and Cautious is disabled.
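
To make the Cautious rule concrete, the sketch below applies C-Optim-style masking to a generic Adam-like update: coordinates whose update sign disagrees with the current gradient are zeroed, and the remainder is rescaled by the mask density. This illustrates the rule only; it is not the library's code path.

```python
import torch

def cautious(update: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
    """C-Optim-style masking: keep only coordinates where the update and the current
    gradient agree in sign, then rescale by the fraction of surviving coordinates."""
    mask = (update * grad > 0).to(update.dtype)
    return update * mask / mask.mean().clamp_min(1e-3)
```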


πŸ” Feature Deep Dives

AdEMAMix

  • Adds a slow-decaying second EMA (beta3) that retains gradient memory over tens of thousands of steps.
  • Particularly effective for small batch sizes, where Adam's standard first moment is nearly useless.

Tunable Hyperparameters

| Parameter | Default | Tuning Guide |
|---|---|---|
| beta3 | 0.9999 | • Runs > 120k steps: 0.9999<br>• Runs ≤ 120k steps: 0.999 |
| alpha | 5 | • Reduce to 2–3 if diverging<br>• Increase to strengthen long-term memory |

✅ Pro Tip: Set beta1=0 in Adam/Adopt/Prodigy to skip the standard EMA entirely and rely solely on AdEMAMix's slow EMA, ideal for small-batch regimes.
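
The sketch below shows one AdEMAMix-style step on a single tensor, following the update rule from the paper (fast EMA m1, slow EMA m2 with beta3, Adam-style second moment; the slow EMA is left without bias correction). The paper's alpha/beta3 warm-up schedules are omitted, and the state names are illustrative rather than the library's.

```python
import torch

def ademamix_step(p, g, state, lr=1e-4, beta1=0.9, beta2=0.999,
                  beta3=0.9999, alpha=5.0, eps=1e-8):
    """One AdEMAMix-style step: update = (m1_hat + alpha * m2) / (sqrt(v_hat) + eps)."""
    state["t"] += 1
    t, m1, m2, v = state["t"], state["m1"], state["m2"], state["v"]
    m1.mul_(beta1).add_(g, alpha=1 - beta1)        # fast EMA (standard Adam momentum)
    m2.mul_(beta3).add_(g, alpha=1 - beta3)        # slow EMA, remembers old gradients
    v.mul_(beta2).addcmul_(g, g, value=1 - beta2)  # second moment
    m1_hat = m1 / (1 - beta1 ** t)                 # bias correction (fast EMA and v only)
    v_hat = v / (1 - beta2 ** t)
    p.add_((m1_hat + alpha * m2) / (v_hat.sqrt() + eps), alpha=-lr)

p = torch.zeros(10)
state = {"t": 0, "m1": torch.zeros_like(p), "m2": torch.zeros_like(p), "v": torch.zeros_like(p)}
ademamix_step(p, torch.randn_like(p), state)
```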


Simplified_AdEMAMix

Tunable Hyperparameters

| Parameter | Default | Tuning Guide |
|---|---|---|
| beta1 | 0.99 | Controls accumulator memory length:<br>• Small BS: 0.99–0.9999<br>• Large BS: 0.9 |
| Grad α | 100 | Most critical parameter:<br>• Scales inversely with batch size<br>• 100–10 for small BS (≤32)<br>• 1–0.1 for large BS (≥512) |

⚠️ Critical: Requires a ~100x smaller learning rate than AdamW (e.g., 1e-6 vs 1e-4).
For Prodigy_Adv, set initial_d to:

  • LoRA: 1e-8
  • Full FT: 1e-10
  • Embedding: 1e-7

⚠️ Incompatible with: Cautious, Grams, atan2, and standard update clipping.
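
As a rough illustration of why the learning rate must shrink, the sketch below implements one accumulator-momentum step in the spirit of the Connections paper: the momentum is an unnormalized sum (m ← beta1·m + g), so it grows to roughly 1/(1 − beta1) times the gradient scale, and the numerator mixes it with the raw gradient scaled by Grad α. This is one reading of the method, not the library's exact formulation.

```python
import torch

def simplified_ademamix_step(p, g, state, lr=1e-6, beta1=0.99, beta2=0.999,
                             grad_alpha=100.0, eps=1e-8):
    """One accumulator-momentum step (illustrative): the accumulator has no (1 - beta1)
    factor, so updates are ~1/(1 - beta1) larger than AdamW's and lr must be much smaller."""
    state["t"] += 1
    t, m, v = state["t"], state["m"], state["v"]
    m.mul_(beta1).add_(g)                          # accumulator: note no (1 - beta1)
    v.mul_(beta2).addcmul_(g, g, value=1 - beta2)  # usual second moment
    v_hat = v / (1 - beta2 ** t)
    p.add_((grad_alpha * g + m) / (v_hat.sqrt() + eps), alpha=-lr)
```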


atan2

  • Replaces eps in Adam-family optimizers with a scale-invariant, bounded update rule.
  • Automatically clips updates to [-2, 2], preventing destabilizing jumps.
  • Highly recommended for Adopt_Adv, which is prone to instability without clipping (see the sketch below).
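
A minimal sketch of the replacement, assuming the Adam-atan2 formulation: the usual m̂ / (√v̂ + eps) division is swapped for a scaled arctangent, which needs no epsilon and bounds every update coordinate. The constants below follow the [-2, 2] bound quoted above (a = 4/π, b = 1); treat them as illustrative defaults.

```python
import math
import torch

def atan2_update(m_hat: torch.Tensor, v_hat: torch.Tensor,
                 a: float = 4.0 / math.pi, b: float = 1.0) -> torch.Tensor:
    """Adam-atan2 style update direction: a * atan2(m_hat, b * sqrt(v_hat)).
    Since sqrt(v_hat) >= 0, atan2 stays in [-pi/2, pi/2], so with a = 4/pi every
    coordinate of the update lies in [-2, 2] and no epsilon is required."""
    return a * torch.atan2(m_hat, b * v_hat.sqrt())
```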

📚 Reference: Adam-atan2, introduced in Scaling Exponents Across Parameterizations and Optimizers (see References).


Kourkoutas-β

Kourkoutas-β introduces a sunspike-driven, layer-wise adaptive second-moment decay (β₂) as an optional enhancement for Adam_Adv, Adopt_Adv, Prodigy_Adv, and Simplified_AdEMAMix.

Instead of using a fixed β₂ (e.g., 0.999 or 0.95), it dynamically modulates β₂ per layer based on a bounded sunspike ratio:

  • During gradient bursts → β₂ drops toward the lower β₂ bound → faster reaction
  • During calm phases → β₂ rises toward the selected (user-set) β₂ → stronger smoothing

This is especially effective for noisy training, small batch sizes, and high learning rates, where gradient norms shift abruptly due to noise or aggressive LR schedules.

Pros/Cons

| Category | Details |
|---|---|
| ✅ Pros | • Layer-wise adaptation blends the benefits of high β₂ (strong smoothing) and low β₂ (fast reaction).<br>• Robust to sudden loss-landscape shifts: reacts quickly during gradient bursts, smooths during calm phases.<br>• High tolerance to aggressive learning rates. |
| ⚠️ Cons | • Potentially unstable at the start of training due to unreliable early gradient norms; mitigated by using K-β warmup steps. |

💡 Best Practice: Set K_warmup_steps equal to your standard LR warmup steps. During warmup, the optimizer uses the static beta2; adaptation begins only after warmup ends.
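
The mechanism can be summarized in a few lines. The sketch below tracks a decayed running maximum of each layer's gradient norm, forms the bounded sunspike ratio, and interpolates β₂ between a lower bound and the user-selected value; the exact ratio, bounds, and defaults used by the paper and the library may differ.

```python
import torch

def kourkoutas_beta2(grad: torch.Tensor, state: dict, beta2: float = 0.999,
                     beta2_low: float = 0.9, max_decay: float = 0.99,
                     tiny: float = 1e-12) -> float:
    """Sunspike-driven beta2 for one layer (illustrative defaults): gradient bursts push
    beta2 toward beta2_low (fast reaction), calm phases push it back toward beta2."""
    norm = grad.norm().item()
    state["running_max"] = max(norm, max_decay * state.get("running_max", 0.0))
    sunspike = min(norm / (state["running_max"] + tiny), 1.0)  # bounded to [0, 1]
    return beta2 - (beta2 - beta2_low) * sunspike
```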

📚 Reference: Kourkoutas-β: A Sunspike-Driven Adam Optimizer with Desert Flair (see References).


📚 References

  1. Revisiting BFloat16 Training
  2. SMMF: Square-Matricized Momentum Factorization
  3. The AdEMAMix Optimizer
  4. Connections between Schedule-Free Optimizers, AdEMAMix, and Accelerated SGD
  5. Kourkoutas-β: A Sunspike-Driven Adam Optimizer with Desert Flair
  6. Scaling Exponents Across Parameterizations and Optimizers
