Deep Learning Algorithms Toolkit

Python 3.8+ · PyTorch · License: MIT

A comprehensive collection of deep learning algorithms and architectures, covering variational autoencoders, RNN text generation, adversarial robustness, and optimization methods. The implementations aim to be readable enough for teaching and rigorous enough for research use.

Features

  • Generative Models: Variational Autoencoders (VAE) framework
  • Sequence Models: RNN/LSTM text generation systems
  • Adversarial Robustness: Defense against adversarial attacks
  • Optimization: Advanced optimization function visualization
  • Modular Design: Reusable components and clean APIs
  • Research Ready: Publication-quality implementations

Quick Start

from src.variational_autoencoder import VAE
from src.rnn_text_generation import RNNTextGenerator

# Train a Variational Autoencoder
vae = VAE(input_dim=784, latent_dim=32)
vae.train(train_loader, epochs=100)

# Generate new samples
generated_samples = vae.generate(num_samples=64)

# Text generation with RNN
text_generator = RNNTextGenerator()
text_generator.load_data('data/shakespeare.txt')
text_generator.train(epochs=50)

# Generate new text
generated_text = text_generator.generate("To be or not to be", length=100)

🧠 Algorithm Collection

1. Variational Autoencoder (VAE)

  • Location: src/variational_autoencoder/
  • Features: Probabilistic encoder-decoder with KL divergence
  • Applications: Image generation, dimensionality reduction, anomaly detection

2. RNN Text Generation

  • Location: src/rnn_text_generation/
  • Features: Character and word-level text generation
  • Applications: Creative writing, language modeling, style transfer

3. Adversarial Robustness

  • Location: src/adversarial_robustness/
  • Features: FGSM, PGD attacks and defense mechanisms
  • Applications: Model security, robustness evaluation

4. Optimization Visualization

  • Location: src/optimization_functions/
  • Features: 2D/3D optimization landscape visualization
  • Applications: Algorithm comparison, education, research

5. Logistic Regression Optimization

  • Location: src/logistic_regression_optimization/
  • Features: Advanced optimization for logistic regression
  • Applications: Binary classification, optimization benchmarks

πŸ“ Project Structure

deep-learning-algorithms-toolkit/
β”œβ”€β”€ src/                              # Source algorithms
β”‚   β”œβ”€β”€ variational_autoencoder/      # VAE implementation
β”‚   β”œβ”€β”€ rnn_text_generation/          # Text generation
β”‚   β”œβ”€β”€ adversarial_robustness/       # Security methods
β”‚   β”œβ”€β”€ optimization_functions/       # Optimization tools
β”‚   └── logistic_regression_optimization/ # LR optimization
β”œβ”€β”€ examples/                         # Usage examples  
β”œβ”€β”€ tests/                           # Test suite
β”œβ”€β”€ docs/                            # Documentation
β”œβ”€β”€ data/                            # Sample datasets
β”œβ”€β”€ models/                          # Pre-trained models
└── README.md                       # This file

Algorithm Details

Variational Autoencoder

# Configure VAE architecture
vae_config = {
    'input_dim': 784,      # MNIST images
    'hidden_dims': [512, 256],
    'latent_dim': 32,
    'beta': 1.0            # KL divergence weight
}

vae = VAE(**vae_config)

# Training with custom loss
loss_history = vae.train(
    train_loader=mnist_loader,
    epochs=100,
    learning_rate=1e-3,
    beta_schedule='constant'  # or 'annealing'
)

RNN Text Generation

# Character-level text generation
rnn_config = {
    'vocab_size': 128,
    'hidden_size': 512,
    'num_layers': 3,
    'dropout': 0.2,
    'temperature': 0.8
}

generator = RNNTextGenerator(**rnn_config)
generator.train_on_text('data/shakespeare.txt', epochs=50)

# Generate with different creativity levels
conservative_text = generator.generate("Hello", temperature=0.5)
creative_text = generator.generate("Hello", temperature=1.2)
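
The temperature argument rescales the output logits before sampling: values below 1 sharpen the distribution (safer, more repetitive text), values above 1 flatten it (more varied, riskier text). A minimal sketch of the mechanism, not necessarily the generator's internals:

# Sketch of temperature sampling (illustrative)
import torch

def sample_next_token(logits, temperature=0.8):
    probs = torch.softmax(logits / temperature, dim=-1)   # T < 1 sharpens, T > 1 flattens
    return torch.multinomial(probs, num_samples=1).item()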

Adversarial Robustness

# Test model robustness
from src.adversarial_robustness import AdversarialEvaluator

robustness_eval = AdversarialEvaluator(model)

# FGSM attack
fgsm_accuracy = robustness_eval.fgsm_attack(
    test_loader=test_data,
    epsilon=0.3
)

# PGD attack  
pgd_accuracy = robustness_eval.pgd_attack(
    test_loader=test_data,
    epsilon=0.3,
    num_steps=20,
    step_size=0.01
)

# Adversarial training for defense
robust_model = robustness_eval.adversarial_training(
    train_loader=train_data,
    attack_method='pgd',
    epochs=50
)
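
For reference, FGSM perturbs each input one step along the sign of the loss gradient, while PGD repeats that step several times and projects back into the epsilon-ball after each update. A minimal FGSM sketch, not necessarily AdversarialEvaluator's internals:

# Sketch of the FGSM perturbation (illustrative)
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.3):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()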

πŸ“Š Performance Benchmarks

No fixed benchmark numbers are published for this toolkit: results depend on the dataset, model configuration, and hardware. Each algorithm is intended to reach competitive results once tuned for the target task.

🎨 Visualization Features

Latent Space Exploration

# Visualize VAE latent space
vae.plot_latent_space(test_data, save_path='latent_space.png')

# Interpolation between samples
interpolation = vae.interpolate(sample1, sample2, steps=10)
vae.save_interpolation_gif(interpolation, 'interpolation.gif')
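
Interpolation of this kind typically encodes both samples and blends their latent codes linearly before decoding. A sketch of the idea; the toolkit's interpolate() may differ in detail, and encode()/decode() are assumed method names:

# Sketch of latent-space interpolation (assumed encode()/decode() methods)
import torch

def interpolate_latents(vae, sample1, sample2, steps=10):
    z1, z2 = vae.encode(sample1), vae.encode(sample2)
    frames = []
    for alpha in torch.linspace(0, 1, steps):
        z = (1 - alpha) * z1 + alpha * z2      # linear blend in latent space
        frames.append(vae.decode(z))
    return frames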

Optimization Landscapes

# Visualize optimization functions
from src.optimization_functions import OptimizationVisualizer

optimizer_viz = OptimizationVisualizer()

# 2D landscape
optimizer_viz.plot_2d_function('rosenbrock', range_x=(-2, 2), range_y=(-1, 3))

# Optimization path
path = optimizer_viz.optimize_with_history('rastrigin', method='adam')
optimizer_viz.plot_optimization_path(path)
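
As a standalone illustration of what such a landscape looks like, the Rosenbrock function over the same range can be contour-plotted with plain matplotlib (this bypasses OptimizationVisualizer entirely):

# Standalone sketch: Rosenbrock contour plot with matplotlib
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 400)
y = np.linspace(-1, 3, 400)
X, Y = np.meshgrid(x, y)
Z = (1 - X) ** 2 + 100 * (Y - X ** 2) ** 2    # Rosenbrock; global minimum at (1, 1)

plt.contourf(X, Y, np.log1p(Z), levels=50)    # log scale tames the steep valley walls
plt.plot(1, 1, 'r*', markersize=12)           # mark the minimum
plt.savefig('rosenbrock.png')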

Custom Training Loops

# Custom VAE training with logging
from src.variational_autoencoder import VAETrainer  # assumed to live alongside VAE

trainer = VAETrainer(model=vae, config=training_config)
trainer.add_callback('tensorboard', log_dir='logs/')
trainer.add_callback('model_checkpoint', save_dir='checkpoints/')

history = trainer.train(
    train_loader=train_data,
    val_loader=val_data,
    epochs=100
)

Hyperparameter Optimization

# Automated hyperparameter search
from src.utils import HyperparameterOptimizer

optimizer = HyperparameterOptimizer(
    model_class=VAE,
    search_space={
        'latent_dim': [16, 32, 64],
        'learning_rate': [1e-4, 1e-3, 1e-2],
        'beta': [0.5, 1.0, 2.0]
    }
)

best_params = optimizer.search(train_data, val_data, trials=50)
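
With three values per parameter this space has 27 configurations, so 50 trials suggests the optimizer samples beyond a plain grid. For comparison, an exhaustive grid is just the Cartesian product of the value lists; a minimal sketch using only the standard library:

# Sketch of an exhaustive grid over the same search space (illustrative)
from itertools import product

search_space = {
    'latent_dim': [16, 32, 64],
    'learning_rate': [1e-4, 1e-3, 1e-2],
    'beta': [0.5, 1.0, 2.0],
}

for values in product(*search_space.values()):
    params = dict(zip(search_space.keys(), values))
    # train and validate a model with `params` here, keeping the best score
    print(params)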

References

Implementations are based on the following papers:

  • VAE: Kingma & Welling, "Auto-Encoding Variational Bayes" (ICLR 2014)
  • Adversarial examples: Goodfellow, Shlens & Szegedy, "Explaining and Harnessing Adversarial Examples" (ICLR 2015)
  • LSTM: Hochreiter & Schmidhuber, "Long Short-Term Memory" (Neural Computation, 1997)
