This project implements a deep residual autoencoder for unsupervised reconstruction of spectral data using PyTorch. The model is optimized using a composite loss that combines MSE, cosine similarity, and spectral smoothness. Visual diagnostics include spectral plots and UMAP latent space projection.
- Residual skip connections for improved gradient flow and training stability
- Composite loss (MSE + Cosine + Smoothness)
- Mixed-precision training with AMP (when CUDA is available)
- Early stopping & OneCycleLR scheduler
- Spectral reconstruction and residual visualization
- Latent space exploration with UMAP
- Input: 3D spectral dataset of shape `(samples, 61 wavelengths, 4 components)`
- Format: `.dat` file loaded via NumPy
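A minimal loading sketch, assuming the `.dat` file stores raw float32 values in `(samples, 61, 4)` order; the filename and binary layout are assumptions, so adjust to your dataset (the demo writes a synthetic file so the snippet is self-contained):

```python
import numpy as np

def load_spectra(path, n_wavelengths=61, n_components=4):
    """Read a flat binary .dat file and reshape to (samples, 61, 4)."""
    raw = np.fromfile(path, dtype=np.float32)  # flat binary read
    return raw.reshape(-1, n_wavelengths, n_components)

# Demo with a synthetic file (hypothetical name "demo_spectra.dat").
rng = np.random.default_rng(0)
rng.random((10, 61, 4), dtype=np.float32).tofile("demo_spectra.dat")
data = load_spectra("demo_spectra.dat")
print(data.shape)  # (10, 61, 4)
```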
Input → [512 → 256] → Latent (128-d) → [256 → 512] → Output (+ Residual)
- Activations: SELU
- Normalization: LayerNorm
- Regularization: Dropout (0.2)
- Loss: `α·MSE + β·Cosine + γ·Smoothness`
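The architecture and loss above can be sketched as follows. Layer sizes (512 → 256 → 128 → 256 → 512), SELU, LayerNorm, Dropout(0.2), and the residual skip come from this README; the exact block composition and the default weights `alpha`, `beta`, `gamma` are illustrative assumptions, not the author's tuned values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepResAutoencoder(nn.Module):
    """Residual autoencoder over flattened spectra (61 * 4 = 244 features)."""
    def __init__(self, in_dim=61 * 4, latent_dim=128, p_drop=0.2):
        super().__init__()
        def block(i, o):  # Linear -> LayerNorm -> SELU -> Dropout
            return nn.Sequential(nn.Linear(i, o), nn.LayerNorm(o),
                                 nn.SELU(), nn.Dropout(p_drop))
        self.encoder = nn.Sequential(block(in_dim, 512), block(512, 256),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(block(latent_dim, 256), block(256, 512),
                                     nn.Linear(512, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z) + x  # residual skip: decoder learns a correction

def composite_loss(recon, target, alpha=1.0, beta=0.1, gamma=0.01):
    """alpha*MSE + beta*(1 - cosine similarity) + gamma*smoothness."""
    mse = F.mse_loss(recon, target)
    cos = 1.0 - F.cosine_similarity(recon, target, dim=-1).mean()
    # Smoothness: first differences along the wavelength axis.
    spec = recon.view(recon.shape[0], 61, 4)
    smooth = (spec[:, 1:] - spec[:, :-1]).pow(2).mean()
    return alpha * mse + beta * cos + gamma * smooth
```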
```bash
pip install torch numpy scikit-learn matplotlib umap-learn
```

```bash
python train.py
```

Where `train.py` contains:
- Model definition
- Data loading
- Training loop
- Visualizations
- MSE loss vs epoch
- Validation tracking with early stopping
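A sketch of the training loop combining AMP (enabled only when CUDA is available), the OneCycleLR scheduler, and validation-based early stopping. The checkpoint filename matches this README; the hyperparameters (`epochs`, `patience`, `max_lr`) and optimizer choice are assumptions:

```python
import torch
import torch.nn as nn

def train(model, train_dl, val_dl, loss_fn, epochs=50, patience=5, max_lr=1e-3):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    use_amp = device == "cuda"
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)  # no-op on CPU
    opt = torch.optim.AdamW(model.parameters(), lr=max_lr / 10)
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=max_lr, epochs=epochs, steps_per_epoch=len(train_dl))
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for xb in train_dl:
            xb = xb.to(device)
            opt.zero_grad()
            with torch.autocast(device_type=device, enabled=use_amp):
                loss = loss_fn(model(xb), xb)  # reconstruct the input
            scaler.scale(loss).backward()
            scaler.step(opt)
            scaler.update()
            sched.step()  # OneCycleLR steps per batch
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(xb.to(device)), xb.to(device)).item()
                      for xb in val_dl) / len(val_dl)
        if val < best_val:
            best_val, bad_epochs = val, 0
            torch.save(model.state_dict(), "best_deep_res_autoencoder.pth")
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # early stopping: validation loss stopped improving
    return best_val
```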
- Original vs Reconstructed vs Residuals
- Per component (4 channels)
- 2D UMAP embedding of the 128-d latent vectors
`best_deep_res_autoencoder.pth`: saved model checkpoint with the best validation loss
MIT License © 2025
Dharmik Dudhat
Built using PyTorch + NumPy + Matplotlib
Feel free to ⭐ this repo and contribute!