STLAR (or Stellar) is a comprehensive Spatio-Temporal LFP analysis tool combining temporal HFO detection (hfoGUI), spatial spectral mapping, and deep learning classification capabilities.
- Preferred Python: 3.12 (Conda recommended)
- Create environment and install dependencies:
# Create and activate environment
conda create -n stlar python=3.12
conda activate stlar
# Install tested, pinned dependencies
pip install -r requirements.txt
- Run the GUI:
python -m stlar gui
- Run a batch CLI example:
python -m stlar hilbert-batch -f path/to/data/
- Troubleshooting tip (Windows): If you see DLL or `_ctypes` errors after changing Python versions, recreate the environment:
conda env remove -n stlar
conda create -n stlar python=3.12
conda activate stlar
pip install -r requirements.txt
- HFO detection (ripples 80-250 Hz, fast ripples 250-500 Hz)
- Multi-band filtering and visualization
- Time-frequency analysis (Stockwell transform)
- Event scoring and annotation
- Multiple automated detection algorithms (Hilbert, STE, MNI, Consensus, Deep Learning)
- Frequency heatmaps across arena
- Position tracking overlay
- Spatial power distribution
- Arena coverage visualization
- Multi-channel spatial mapping
- Synchronized temporal-spatial views
- HFO location mapping
- Behavioral state-dependent analysis
- Cross-region coordination metrics
- Automated HFO classification with 1D CNN models
- Train custom models on your data with real-time GUI monitoring
- PyTorch (.pt) and ONNX export
- Pre-trained models available
- CLI tool to prepare training data with behavioral annotation and train/val splitting
- Multi-file batch CLI processing with 5 detection methods + training data prep
- Recursive directory scanning
- Configurable detection thresholds
- Progress tracking and summary statistics
- Python 3.12 (preferred); 3.10-3.13 supported; some packages may have limited support on 3.13
- pip (Python package manager, usually included with Python)
- ~2-3 GB disk space for dependencies
If you don't have Python installed:
- Windows: Download from python.org → Run installer → check "Add Python to PATH"
- macOS: Use Homebrew:
brew install python3
- Linux:
sudo apt-get install python3 python3-pip
Verify installation:
python --version
pip --version
# Download STLAR source code
git clone https://github.com/HussainiLab/STLAR.git
cd STLAR
No git installed? Download ZIP from GitHub → Extract → Open a command prompt in the folder
Using a virtual environment keeps STLAR's dependencies isolated from your system Python.
Option A: Using Conda (Recommended for Data Science)
# Create new environment named "stlar"
conda create -n stlar python=3.12
# Activate the environment
conda activate stlar
# You should see (stlar) at the start of your command prompt
To activate later: conda activate stlar
To deactivate: conda deactivate
Option B: Using venv (Built into Python)
# Windows
python -m venv stlar
stlar\Scripts\activate
# macOS/Linux
python3 -m venv stlar
source stlar/bin/activate
# You should see (stlar) at the start of your command prompt
To activate later:
- Windows:
stlar\Scripts\activate
- macOS/Linux:
source stlar/bin/activate
To deactivate: deactivate
Why use an environment?
- ✅ Prevents dependency conflicts with other Python projects
- ✅ Easy to reset if something breaks (`conda env remove -n stlar`)
- ✅ Reproducible setup across machines
# Windows/macOS/Linux - same command
pip install -r requirements.txt
The requirements.txt is pinned to tested versions for Python 3.12 to ensure reproducible installs.
Takes 2-5 minutes depending on internet speed. You should see packages downloading.
# Allows updating STLAR without reinstalling
pip install -e .
Done! You can now run STLAR.
Install these only if you plan to use deep learning detection (dl-batch) or the GUI's DL features.
CPU-only (works everywhere):
pip install torch onnxruntime
GPU-accelerated (CUDA 11.8, NVIDIA GPUs):
# Activate your environment first (e.g., conda activate stlar)
pip install --index-url https://download.pytorch.org/whl/cu118 torch torchvision torchaudio
pip install onnxruntime-gpu  # optional; fallback is onnxruntime (CPU)
Notes:
- If `torch` is not installed, running `python -m stlar dl-batch ...` will print guidance and exit gracefully.
- ONNX is optional; STLAR supports TorchScript models out of the box.
- Verify GPU availability with: `python -c "import torch; print(torch.cuda.is_available())"`
"Command not found: python"
- Try `python3` instead of `python`
- Windows: Add Python to PATH (search "environment variables" → add the Python installation folder)
"Permission denied" (macOS/Linux)
pip install --user -r requirements.txt
Conda environment not activating
- Make sure conda is initialized: `conda init` → restart terminal
- Check the environment exists: `conda env list`
ImportError when running STLAR
- Ensure the environment is activated: `conda activate stlar` or `source stlar/bin/activate`
- Ensure all dependencies are installed: `pip install -r requirements.txt --upgrade`
- Check you're in the STLAR directory: `pwd` (macOS/Linux) or `cd` (Windows)
STLAR offers three ways to analyze data. New users should start with the GUI (option 1).
No command-line knowledge needed! Everything is point-and-click.
Launch the HFO Analysis GUI:
python -m stlar gui
You'll see a window with buttons to:
- 📂 Load your EEG/EGF file
- 🔍 Detect HFOs with different methods (Hilbert, Consensus, Deep Learning, etc.)
- 📋 View detected events in a table
- 🏷️ Score events (label as ripples, artifacts, etc.)
- 💾 Save results to a text file
GUI Workflow (step-by-step):
- Import Set (required): In the main window click "Import Set" → pick a `.set` file or a folder containing it (use "Choose a Set file" or "Choose a Folder") → click Apply. The main window shows the chosen path and loads sources.
- Select source & params: Click "Graph Settings". In the Graph Settings window pick the source (.eeg/.egf), set frequency bands and thresholds. Close/Hide to return.
- Open scoring/detection: Click "HFO Detection" to open the HFO Detection window. Go to the Automatic Detection tab.
- Run detection: Choose a method (Hilbert, STE, MNI, Consensus, DL), adjust parameters if needed, then click "Run Detection".
- Review detections: Still in Automatic Detection, sort/filter the detected EOIs; preview on the plot.
- Send to Score tab: Select EOIs to keep → click "Add Selected EOI(s) to Score".
- Label events: In the Score tab, assign labels (Ripple, Fast Ripple, Sharp Wave Ripple, Artifact). Optionally choose a Brain Region preset (LEC/Hippocampus/MEC/None).
- Save results: Click "Save Scores" to write the tab-separated scores file.
Tip: Use the "None" option in "Brain Region" dropdown if you don't want region-specific filtering.
Launch the Spatial Mapper GUI:
python -m stlar spatial-gui
This shows a heatmap of HFO activity across the recording arena with position tracking overlays. When launched from the GUI, EOIs from both Automatic Detection and Score tabs are passed automatically to the spatial mapper.
Process multiple files automatically without the GUI. Good for processing dozens of files consistently.
Example: Process all files in a folder
python -m stlar hilbert-batch -f mydata/
What this does:
- Finds all `.eeg` and `.egf` files in `mydata/` (including subfolders)
- Detects ripples using the Hilbert filter method
- Saves results to `HFOScores/` automatically
- Shows progress in the terminal
More examples:
Use consensus voting (more reliable but slower):
python -m stlar consensus-batch -f mydata/ -v
Process a single file:
python -m stlar hilbert-batch -f mydata/recording.eeg -o results/
Show detailed progress:
python -m stlar hilbert-batch -f mydata/ -v
Output: Detected events saved as .txt files in HFOScores/
Use STLAR from your own Python scripts:
from hfoGUI.core.Detector import Detector
# Load your data
detector = Detector('mydata.eeg')
# Detect ripples
ripples = detector.detect_ripples(method='hilbert')
print(f"Found {len(ripples)} ripples")
# Detect fast ripples
fast_ripples = detector.detect_ripples(method='hilbert',
freq_min=250, freq_max=500)
The HFO Detection window workflow (fast overview):
- Detect EOIs with Hilbert / STE / MNI / Consensus / Deep Learning in the Automatic Detection tab
- Select EOIs from the detection table and click "Add Selected EOI(s) to Score" to move them to the Score tab
- 💡 The "None" brain region option skips filtering if you don't want preset parameters
- Refine scores: add, relabel, hide, or delete scores manually
- Save results to a text file for further analysis
- (Optional) Export for DL training: Use "Export EOIs for DL Training" to create training data
- (Optional) Train custom model: Train a Deep Learning model on your labeled data
- (Optional) Run DL detection: Use your trained model on new recordings
- .eeg - Tint format (most common for spike sorting)
- .egf - Intan format (includes embedded position tracking)
- .edf - Standard EDF format (medical device recordings)
Don't have sample data? STLAR includes test data in tests/ directory.
This section provides a detailed walkthrough of the GUI-based analysis. If you're new to STLAR, start here.
python -m stlar gui
You should see the STLAR window open with several tabs.
- Click "File" menu β "Open"
- Navigate to your
.eegor.egffile - The signal will appear in the graph area
On the left side, you'll see parameter sliders:
- Freq Min / Freq Max: Frequency band to search (e.g., 80-250 Hz for ripples)
- Threshold (SD): Sensitivity (3-4 is typical, lower = more events)
- Min/Max Duration (ms): How long events must be (15-120 ms for ripples)
Tip: Don't overthink these! Start with defaults and adjust based on your results.
Click "Run Detection" on your chosen tab:
- Hilbert (fastest, good for exploration)
- STE (Stockwell transform, frequency-based)
- MNI (more conservative)
- Consensus (voting between methods - more reliable)
- Deep Learning (requires pre-trained model)
Wait for detection to complete. You'll see events appear in the Automatic Detection table.
In the Automatic Detection tab:
- Sort by Duration: Click the "Duration" column header to sort by event length
- Preview events: Click an event to highlight it in the signal graph
- Adjust threshold: If too many false positives, increase "Threshold (SD)"
- Select events: Click one event, then Ctrl+Click to select multiple
- Click "Add Selected EOI(s) to Score" button
- Events move to the Score tab for manual labeling
Important: Choose a Brain Region (LEC, Hippocampus, MEC, or None) before adding:
- If you select a region, STLAR applies preset filters (duration, behavior)
- If you select "None", events are added without any filtering
For each event in the Score tab:
- Select the event
- Choose a label: Ripple, Fast Ripple, Sharp Wave Ripple, or Artifact
- Set Scorer name and Brain Region (if not set earlier)
Keyboard shortcuts:
- R = Ripple
- F = Fast Ripple
- S = Sharp Wave Ripple
- A = Artifact
- Delete = Remove selected event
- Click "Save Scores"
- Choose a location for the `.txt` file
- File is saved and ready for analysis or spatial mapping
New in STLAR: Easy-to-use DL training from the GUI!
- Go to "Deep Learning" tab β "Train"
- Select your training and validation manifests (from
prepare-dlcommand) - Select a model architecture:
- 1D Models (1-4): Simple CNN, ResNet1D (default), InceptionTime, Transformer
- 2D Models (5-6): Spectrogram CNN, CWT CNN (Scalogram-based)
- (Optional) Enable "Use CWT (Scalogram) Preprocessing" checkbox:
- Converts 1D raw segments to 2D CWT scalograms
- Typically used with 2D model types (5-6) for best results
- Not compatible with 1D model types (1-4)
- Adjust parameters (epochs, learning rate, batch size) - see defaults first!
- Click "Start Training"
- Watch training progress in real-time (optional GUI monitor)
Training parameters are saved automatically to `training_params.json`, with recommendations for the next iteration:
- If overfitting detected: suggests increasing regularization
- If loss plateaus: suggests reducing learning rate
- If training unstable: suggests smaller batch size
- Go to "Deep Learning" tab β "Export"
- Select your best checkpoint (
best.pt) - Choose output directory
- Click "Export" to create TorchScript (.pt) and ONNX formats
Then use the exported model for detection with the CLI or GUI.
To verify CWT preprocessing during GUI-based detection, set an environment variable before launching:
Windows (PowerShell):
$env:STLAR_DEBUG_CWT = "debug_scalograms"
python -m stlar gui
Linux/macOS:
export STLAR_DEBUG_CWT="debug_scalograms"
python -m stlar gui
When you run DL detection with a CWT model, scalogram images will be saved to the specified directory for inspection. Leave the environment variable unset for normal operation (no images saved).
- Banner: `docs/images/banner.png` – Composite of HFO GUI and Spatial Mapper.
- HFO GUI: `docs/images/hfo_gui_annotated.png` – Signal view, detected events, parameters, event list.
- Spatial Mapper: `docs/images/spatial_mapper_annotated.png` – Heatmap, tracking overlay, power scale, controls.
- CLI Batch: `docs/images/cli_batch_processing.png` – Terminal running `python -m stlar hilbert-batch -f data/ -v` with progress.
- Output Example: `docs/images/output_example.png` – Signal trace with detected HFO spans and IDs.
- Methods Comparison (optional): `docs/images/methods_comparison.png` – Same segment with 5 methods.
- Heatmaps (optional): `docs/images/grid_heatmap.png`, `docs/images/polar_heatmap.png` – Grid vs. polar binning.
STLAR/
├── hfoGUI/                  # HFO detection (temporal analysis)
├── spatial_mapper/          # Spatial spectral mapping
├── stlar/                   # Main command-line dispatcher
├── docs/                    # Documentation & guides
│   ├── TECHNICAL_REFERENCE.md   # Algorithms & formulas (for scientists)
│   ├── CONSENSUS_DETECTION.md   # Consensus voting details
│   ├── CONSENSUS_QUICKSTART.md  # Quick guide
│   └── CONSENSUS_SUMMARY.md     # Summary table
├── tests/                   # Unit tests
├── settings/                # User config (auto-created)
├── HFOScores/               # Output directory (auto-created)
└── requirements.txt         # Dependencies
The command-line interface (CLI) allows batch processing of multiple files with consistent parameters. All commands use the format:
python -m stlar <command> [options]
Supported commands:
- HFO Detection: `hilbert-batch`, `ste-batch`, `mni-batch`, `consensus-batch`, `dl-batch`
- Analysis & Export: `metrics-batch` (compute HFO metrics), `filter-scores` (filter by duration)
- DL Training Data: `prepare-dl`, `train-dl`, `export-dl`
- Spatial Mapping: `batch-ssm`
Key features:
- ✅ Single-file or directory (recursive) processing
- ✅ Auto-detects .eeg and .egf files
- ✅ Progress tracking with the `-v` (verbose) flag
- ✅ Customizable output directory with `-o`
- ✅ Reproducible with saved settings JSON files
Each detection creates files in the output directory:
HFOScores/
└── recording_name/
    ├── recording_name_HIL.txt            # Detected HFOs (tab-separated)
    ├── recording_name_HIL_settings.json  # Settings used (for reproducibility)
    ├── recording_name_HFO_scores.eoi     # EOI format (for Tint)
    └── ...
Output file format:
ID# Start(ms) Stop(ms) Peak(µV) Duration(ms)
HIL1 1234.56 1245.67 125.3 11.11
HIL2 2345.67 2356.78 118.9 11.11
...
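Because the scores files are plain tab-separated text, they are easy to post-process. A minimal sketch using pandas; the column names follow the header shown above and can differ between methods, so adjust them to your files:

```python
import pandas as pd

# Load a tab-separated scores file (header as shown above; adjust names if yours differ)
scores = pd.read_csv("HFOScores/recording_name/recording_name_HIL.txt", sep="\t")

durations = scores["Duration(ms)"]
print(f"{len(scores)} events, mean duration {durations.mean():.1f} ms")
```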
Common options across all commands:
- `-f, --file`: Input file or directory (required)
- `-o, --output`: Where to save results (default: `HFOScores/`)
- `-v, --verbose`: Show progress details
- `-s, --set-file`: Location of .set files for scaling calibration
Envelope-based detection using analytic signal (Hilbert transform).
Command:
python -m stlar hilbert-batch -f <data_file_or_directory> [options]
Examples:
Single file:
python -m stlar hilbert-batch \
-f data/recording.eeg \
--threshold-sd 3.0 \
--min-freq 80 \
--max-freq 250 \
--epoch-sec 300 \
-v
Directory batch with custom output:
python -m stlar hilbert-batch \
-f /data/recording_session/ \
-s /data/recording_session/ \
-o results/hilbert_detections/ \
--threshold-sd 2.5 \
--required-peaks 5 \
-v
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --file` | str | required | Path to .eeg/.egf file or directory |
| `-s, --set-file` | str | auto-detect | .set file or directory (for scaling calibration) |
| `-o, --output` | str | HFOScores/ | Output directory for results |
| `--epoch-sec` | float | 300 | Epoch length in seconds for analysis |
| `--threshold-sd` | float | 3.0 | Envelope threshold in SD above mean |
| `--min-duration-ms` | float | 10.0 | Minimum event duration (ms) |
| `--min-freq` | float | 80 | Minimum bandpass frequency (Hz) |
| `--max-freq` | float | 125 (EEG) / 500 (EGF) | Maximum bandpass frequency (Hz) |
| `--required-peaks` | int | 4 | Minimum peaks in rectified signal |
| `--required-peak-threshold-sd` | float | 5.0 | Peak threshold SD above mean |
| `--no-required-peak-threshold` | flag | off | Disable peak-threshold check |
| `--boundary-percent` | float | 30.0 | Percent of threshold to find boundaries |
| `--skip-bits2uv` | flag | off | Skip bits-to-uV conversion if .set missing |
| `-v, --verbose` | flag | off | Verbose progress logging |
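For intuition, here is a minimal sketch of the general envelope-thresholding idea (bandpass, Hilbert envelope, mean + k·SD threshold). It is illustrative only and assumes a 1-D NumPy signal; it is not the STLAR implementation, which adds epoching, required-peak checks, and boundary refinement:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def hilbert_envelope_events(x, fs, fmin=80.0, fmax=250.0,
                            threshold_sd=3.0, min_duration_ms=10.0):
    """Illustrative envelope detector: bandpass, Hilbert envelope, SD threshold."""
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))       # analytic-signal envelope
    thresh = env.mean() + threshold_sd * env.std()   # mean + k*SD threshold
    above = env > thresh
    # Group contiguous supra-threshold samples into events
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    stops = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        stops = np.r_[stops, len(above)]
    min_samples = int(min_duration_ms * fs / 1000)
    return [(s / fs, e / fs) for s, e in zip(starts, stops) if e - s >= min_samples]
```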
Fast detection based on RMS energy in sliding windows.
Command:
python -m stlar ste-batch -f <data_file_or_directory> [options]
Examples:
python -m stlar ste-batch \
-f data/recording.eeg \
--threshold 3.0 \
--window-size 0.01 \
--overlap 0.5 \
--min-freq 80 \
--max-freq 250
Directory batch:
python -m stlar ste-batch \
-f /data/recordings/ \
-o results/ste_detections/ \
--threshold 2.5 \
--window-size 0.01 \
--overlap 0.75 \
-v
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --file` | str | required | Path to .eeg/.egf file or directory |
| `-s, --set-file` | str | auto-detect | .set file or directory |
| `-o, --output` | str | HFOScores/ | Output directory |
| `--threshold` | float | 3.0 | RMS threshold (SD or absolute value) |
| `--window-size` | float | 0.01 | Window size in seconds |
| `--overlap` | float | 0.5 | Window overlap fraction (0-1) |
| `--min-freq` | float | 80 | Minimum frequency (Hz) |
| `--max-freq` | float | 500 | Maximum frequency (Hz) |
| `--skip-bits2uv` | flag | off | Skip scaling conversion |
| `-v, --verbose` | flag | off | Verbose logging |
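As a rough illustration of the sliding-window RMS idea (not the STLAR implementation), assuming a 1-D NumPy signal `x` that has already been bandpassed and a sampling rate `fs`:

```python
import numpy as np

def rms_windows(x, fs, window_size=0.01, overlap=0.5):
    """Illustrative: RMS energy in overlapping sliding windows (STE-style)."""
    win = int(window_size * fs)
    hop = max(1, int(win * (1 - overlap)))
    starts = range(0, len(x) - win + 1, hop)
    return np.array([np.sqrt(np.mean(x[s:s + win] ** 2)) for s in starts])
```

Windows whose RMS exceeds the threshold would then be merged into candidate events, much like the supra-threshold grouping in the Hilbert sketch above.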
Percentile-based detection using baseline power statistics.
Command:
python -m stlar mni-batch -f <data_file_or_directory> [options]
Examples:
python -m stlar mni-batch \
-f data/recording.eeg \
--baseline-window 10.0 \
--threshold-percentile 99.0 \
--min-freq 80
Directory batch:
python -m stlar mni-batch \
-f /data/recordings/ \
-o results/mni_detections/ \
--baseline-window 15.0 \
--threshold-percentile 98.5 \
-v
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --file` | str | required | Path to .eeg/.egf file or directory |
| `-s, --set-file` | str | auto-detect | .set file or directory |
| `-o, --output` | str | HFOScores/ | Output directory |
| `--baseline-window` | float | 10.0 | Baseline window in seconds |
| `--threshold-percentile` | float | 99.0 | Threshold percentile (0-100) |
| `--min-freq` | float | 80 | Minimum frequency (Hz) |
| `--skip-bits2uv` | flag | off | Skip scaling conversion |
| `-v, --verbose` | flag | off | Verbose logging |
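For intuition, the core of percentile-based thresholding can be sketched as follows (illustrative only; the real detector works on band power and handles baseline selection more carefully):

```python
import numpy as np

def percentile_threshold(power, fs, baseline_window_s=10.0, percentile=99.0):
    """Illustrative: threshold band power at a percentile of a baseline window."""
    baseline = power[: int(baseline_window_s * fs)]  # initial segment as baseline
    thresh = np.percentile(baseline, percentile)
    return power > thresh  # boolean mask; group into events as in the Hilbert sketch
```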
Combines Hilbert, STE, and MNI detections using configurable voting strategy.
Command:
python -m stlar consensus-batch -f <data_file_or_directory> [options]
Examples:
Basic consensus (majority voting):
python -m stlar consensus-batch \
-f data/recording.eeg \
--voting-strategy majority \
--overlap-threshold-ms 10.0
Strict consensus (all 3 methods must agree):
python -m stlar consensus-batch \
-f /data/recordings/ \
-o results/consensus_detections/ \
--voting-strategy strict \
--overlap-threshold-ms 5.0 \
--hilbert-threshold-sd 3.5 \
--ste-threshold 2.5 \
--mni-percentile 98.0 \
-v
Lenient consensus (any method detection):
python -m stlar consensus-batch \
-f data/recording.eeg \
--voting-strategy any \
--overlap-threshold-ms 15.0
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --file` | str | required | Path to .eeg/.egf file or directory |
| `-s, --set-file` | str | auto-detect | .set file or directory |
| `-o, --output` | str | HFOScores/ | Output directory |
| `--voting-strategy` | str | majority | strict (3/3), majority (2/3), or any (1/3) |
| `--overlap-threshold-ms` | float | 25.0 | Time window (ms) for overlapping detections |
| `--epoch-sec` | float | 300 | Hilbert epoch length (seconds) |
| `--hilbert-threshold-sd` | float | 3.5 | Hilbert envelope threshold (SD) |
| `--ste-threshold` | float | 2.5 | STE/RMS threshold |
| `--mni-percentile` | float | 98.0 | MNI threshold percentile |
| `--min-duration-ms` | float | 10.0 | Minimum event duration (ms) |
| `--min-freq` | float | 80 | Minimum frequency (Hz) |
| `--max-freq` | float | 125 (EEG) / 500 (EGF) | Maximum frequency (Hz) |
| `--required-peaks` | int | 4 | Minimum peaks (all detectors) |
| `--required-peak-sd` | float | 5.0 | Peak threshold in SD (all detectors) |
| `--skip-bits2uv` | flag | off | Skip scaling conversion |
| `-v, --verbose` | flag | off | Verbose logging |
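For intuition, a minimal sketch of one way overlap-based voting can work (events as `(start_ms, stop_ms)` tuples; events "agree" when their spans fall within the overlap tolerance). This is illustrative only; see docs/CONSENSUS_DETECTION.md for the actual algorithm:

```python
def consensus_vote(detections, overlap_threshold_ms=25.0, min_votes=2):
    """Illustrative consensus: pool events from all detectors, group events whose
    spans overlap within a tolerance, keep groups with enough distinct voters.

    detections: list of per-method event lists, each event a (start_ms, stop_ms) tuple.
    min_votes: 2 of 3 = majority, 3 of 3 = strict, 1 of 3 = any.
    """
    pooled = sorted(
        (start, stop, method_idx)
        for method_idx, events in enumerate(detections)
        for start, stop in events
    )
    consensus = []
    cur_start = cur_stop = None
    voters = set()
    for start, stop, method_idx in pooled:
        if cur_stop is not None and start <= cur_stop + overlap_threshold_ms:
            cur_stop = max(cur_stop, stop)  # extend the current group
            voters.add(method_idx)
        else:
            if cur_stop is not None and len(voters) >= min_votes:
                consensus.append((cur_start, cur_stop))
            cur_start, cur_stop, voters = start, stop, {method_idx}
    if cur_stop is not None and len(voters) >= min_votes:
        consensus.append((cur_start, cur_stop))
    return consensus

# Example: majority voting across Hilbert, STE, and MNI event lists
hil = [(100.0, 120.0), (500.0, 530.0)]
ste = [(105.0, 125.0)]
mni = [(98.0, 118.0), (900.0, 930.0)]
print(consensus_vote([hil, ste, mni], min_votes=2))  # -> [(98.0, 125.0)]
```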
Uses a pre-trained or custom neural network model for detection.
Command:
python -m stlar dl-batch -f <data_file_or_directory> --model-path <model> [options]
Examples:
python -m stlar dl-batch \
-f data/recording.eeg \
--model-path models/hfo_detector.pt \
--threshold 0.5 \
--batch-size 32
Directory batch with custom threshold:
python -m stlar dl-batch \
-f /data/recordings/ \
-o results/dl_detections/ \
--model-path models/hfo_detector.pt \
--threshold 0.7 \
--batch-size 64 \
-v
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --file` | str | required | Path to .eeg/.egf file or directory |
| `-s, --set-file` | str | auto-detect | .set file or directory |
| `-o, --output` | str | HFOScores/ | Output directory |
| `--model-path` | str | required | Path to trained model (.pt or .onnx) |
| `--threshold` | float | 0.5 | Detection probability threshold (0-1) |
| `--batch-size` | int | 32 | Inference batch size |
| `--dump-probs` | flag | off | Print per-window probability stats + assessment (sanity-check model spread) |
| `--skip-bits2uv` | flag | off | Skip scaling conversion |
| `-v, --verbose` | flag | off | Verbose logging |
Tip: add --dump-probs to quickly see min/max/mean and percentile spread of DL probabilities; very narrow spreads usually mean the model needs more epochs or better labels.
Example 1: Quick single-file screening
# Fast STE detection with verbose output
python -m stlar ste-batch \
-f data/session_1.eeg \
--threshold 2.5 \
-vExample 2: Batch directory with Hilbert (default settings)
# Process entire directory, save to HFOScores/
python -m stlar hilbert-batch \
-f /data/rat_session/ \
-s /data/rat_session/ \
-v
Example 3: High-confidence consensus detection
# Strict consensus voting across directory
python -m stlar consensus-batch \
-f /data/recordings/ \
-o /results/strict_consensus/ \
--voting-strategy strict \
--overlap-threshold-ms 5.0 \
--hilbert-threshold-sd 3.5 \
--ste-threshold 3.0 \
--mni-percentile 99.0 \
-v
Example 4: Deep learning on pre-processed files
# Use trained model on directory of files
python -m stlar dl-batch \
-f /data/preprocessed/ \
-o /results/dl_predictions/ \
--model-path /models/my_trained_detector.pt \
--threshold 0.6 \
--batch-size 128 \
-v
After batch processing completes, you'll see a summary:
============================================================
BATCH PROCESSING SUMMARY
============================================================
Total files found: 5
Successfully processed: 5
Failed: 0
Total HFOs detected: 1247
Average per file: 249.4
============================================================
Output files:
- Scores: `<session>_<METHOD>.txt` (tab-delimited, importable into Excel/analysis software)
- Settings: `<session>_<METHOD>_settings.json` (parameters used for reproducibility)
"No .set file found"
- Use `--skip-bits2uv` to process without scaling calibration
- Or provide `--set-file` with the directory containing .set files
"No .eeg or .egf files found in directory"
- Verify file extensions are lowercase (.eeg, .egf)
- Check directory path is correct
- Use the `-v` flag to see what files are discovered
Sensitivity too high/low
- Too many false positives: Increase the threshold (e.g., `--threshold-sd 4.0` for Hilbert)
- Too many false negatives: Decrease the threshold (e.g., `--threshold-sd 2.0` for Hilbert)
- Try consensus voting with different methods to find balanced detections
Batch compute HFO metrics (event count, rate, durations, etc.) from existing score files generated by any detection method.
Command:
python -m stlar metrics-batch -f <scores_file_or_directory> [options]
Examples:
Single scores file with fallback duration:
python -m stlar metrics-batch \
-f HFOScores/recording_name/recording_name_HIL.txt \
--duration-min 30 \
-v
Directory with data files (auto-detect duration):
python -m stlar metrics-batch \
-f HFOScores/ \
--data /path/to/data/dir/ \
-o results/metrics/ \
-v
With region preset, band filter, gating, and save-filtered TSV:
python -m stlar metrics-batch \
-f HFOScores/recording_HIL.txt \
--preset Hippocampus \
--band ripple \
--behavior-gating \
--speed-max 4.0 \
--save-filtered \
--duration-min 30 \
-v
Directory batch with custom presets file and speed override:
python -m stlar metrics-batch \
-f HFOScores/ \
--preset LEC \
--preset-file my_presets.json \
--band "ripple,fast" \
--behavior-gating \
--speed-min 0.5 \
--speed-max 3.0 \
--save-filtered \
-o results/metrics/
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --scores` | str | required | Path to scores file (.txt) or directory containing scores files |
| `--data` | str | optional | Directory containing data files (.eeg/.egf) for duration inference |
| `--duration-min` | float | optional | Fallback recording duration in minutes (if data files not found) |
| `--preset` | str | optional | Region preset name (LEC, Hippocampus, MEC; extendable via preset file) |
| `--preset-file` | str | optional | JSON file to override/extend presets (dict keyed by region names) |
| `--band` | str | optional | Comma-separated band/label filters (matches label/score column) |
| `--behavior-gating` | flag | off | Apply speed gating if a speed column exists in scores |
| `--speed-min` | float | optional | Override min speed for gating (cm/s) |
| `--speed-max` | float | optional | Override max speed for gating (cm/s) |
| `--save-filtered` | flag | off | Save preset-filtered scores to <output>/filtered_scores/ |
| `-o, --output` | str | scores parent | Output directory for metrics CSV files |
| `-v, --verbose` | flag | off | Verbose progress logging |
Output format:
<session_name>_hfo_metrics.csv (comma-separated key-value pairs):
metric,value
total_events,247
recording_duration_minutes,30.0
event_rate_per_min,8.23
mean_duration_ms,25.3
median_duration_ms,23.1
min_duration_ms,10.0
max_duration_ms,85.5
std_duration_ms,12.7
Metrics computed:
- `total_events`: Number of HFO events detected
- `recording_duration_minutes`: Length of recording in minutes
- `event_rate_per_min`: Events per minute (count / duration)
- `mean_duration_ms`: Average HFO duration in milliseconds
- `median_duration_ms`: Median HFO duration
- `min_duration_ms`: Shortest HFO event
- `max_duration_ms`: Longest HFO event
- `std_duration_ms`: Standard deviation of durations
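Since the metrics file is a simple two-column CSV, it can be read back in a few lines. A sketch (the filename follows the pattern described above):

```python
import csv

# Read a "metric,value" CSV into a dict of floats
with open("results/metrics/session_name_hfo_metrics.csv") as f:
    reader = csv.reader(f)
    next(reader)  # skip the "metric,value" header row
    metrics = {name: float(value) for name, value in reader}

print(metrics["event_rate_per_min"])
```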
Filter existing score files by event duration to remove short noise bursts or long artifacts.
Command:
python -m stlar filter-scores -f <scores_file> [options]
Examples:
Remove short events (< 15 ms) and long artifacts (> 150 ms):
python -m stlar filter-scores \
-f HFOScores/recording_name/recording_name_HIL.txt \
--min-duration-ms 15 \
--max-duration-ms 150 \
-o HFOScores/filtered/ \
-v
Keep only typical ripple-range events (15-120 ms):
python -m stlar filter-scores \
-f scores.txt \
--min-duration-ms 15 \
--max-duration-ms 120
Apply region preset with band filter and behavior gating:
python -m stlar filter-scores \
-f scores.txt \
--preset LEC \
--band ripple,fast \
--behavior-gating \
--speed-max 4.0 \
-v
Band filtering only with custom speed thresholds (no preset):
python -m stlar filter-scores \
-f scores.txt \
--band ripple \
--behavior-gating \
--speed-min 0.5 \
--speed-max 2.5 \
--min-duration-ms 15 \
--max-duration-ms 120
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `-f, --scores` | str | required | Path to scores file to filter |
| `--min-duration-ms` | float | optional | Minimum event duration in milliseconds |
| `--max-duration-ms` | float | optional | Maximum event duration in milliseconds |
| `--preset` | str | optional | Region preset name (LEC, Hippocampus, MEC; extendable via preset file) |
| `--preset-file` | str | optional | JSON file to override/extend presets (dict keyed by region names) |
| `--band` | str | optional | Comma-separated band/label filters (matches label/score column) |
| `--behavior-gating` | flag | off | Apply speed gating if a speed column exists in scores |
| `--speed-min` | float | optional | Override min speed for gating (cm/s) |
| `--speed-max` | float | optional | Override max speed for gating (cm/s) |
| `-o, --output` | str | scores parent | Output directory for filtered scores |
| `-v, --verbose` | flag | off | Verbose progress logging |
Output format:
<scores_stem>_filtered.txt (same tab-separated format as input):
ID# Start Time(ms) Stop Time(ms) Settings File
1 1234.56 1250.12 hilbert_params.json
2 2345.67 2360.23 hilbert_params.json
...
Typical use cases:
- Remove instrumental noise (very short bursts < 10 ms)
- Exclude motion artifacts (very long events > 200 ms)
- Focus on ripple range (15-120 ms) for further analysis
- Region-specific filtering (different thresholds for different brain areas)
Preset file format: Provide a JSON dict keyed by region name. Example:
{
"LEC": {"bands": {"ripple": [80, 250]}, "durations": {"ripple_min_ms": 15, "ripple_max_ms": 120}, "behavior_gating": true, "speed_threshold_min_cm_s": 0.0, "speed_threshold_max_cm_s": 5.0},
"Hippocampus": {"bands": {"ripple": [100, 250]}, "durations": {"ripple_min_ms": 15, "ripple_max_ms": 120}}
}
Defaults match the GUI (LEC, Hippocampus, MEC) and are extended/overridden by any file you pass via `--preset-file`.
The batch-ssm command performs batch spatial spectral analysis on .egf files with animal tracking data. It computes power spectral density (PSD) across spatial positions and optionally exports binned analyses.
# Single file
python -m stlar batch-ssm data/session001.egf --ppm 595
# Directory batch processing
python -m stlar batch-ssm data/ --ppm 595 --chunk-size 60
# With binned exports (4×4 grid)
python -m stlar batch-ssm data/ --ppm 595 --export-binned-jpgs --export-binned-csvs
| Parameter | Type | Default | Description |
|---|---|---|---|
| `input_path` | str | required | Path to .egf file or directory containing .egf files |
| `--ppm` | int | required | Pixels per meter for position calibration |
| `--chunk-size` | int | 30 | Duration of each analysis chunk in seconds |
| `--speed-filter` | float | 0 | Minimum speed threshold (cm/s) for filtering stationary periods |
| `--window` | float | 1.0 | Spectral window duration in seconds |
| `--export-binned-jpgs` | flag | False | Export spatial bin visualizations as JPEG images |
| `--export-binned-csvs` | flag | False | Export binned spectral data as CSV files |
Process single session with default parameters:
python -m stlar batch-ssm recordings/rat01_day1.egf --ppm 595
Batch process directory with 60-second chunks:
python -m stlar batch-ssm recordings/ --ppm 595 --chunk-size 60
Apply speed filtering (exclude stationary periods):
python -m stlar batch-ssm recordings/ --ppm 595 --speed-filter 5.0
Export binned analyses for spatial correlation studies:
python -m stlar batch-ssm recordings/ --ppm 595 \
--export-binned-jpgs \
--export-binned-csvs \
--chunk-size 60
batch-ssm creates a timestamped output directory for each session:
<session_name>_SSMoutput_<YYYYMMDD_HHMMSS>/
├── <session>_sessionAverage.csv     # Session-wide PSD averages
├── <session>_chunk_000_psd.csv      # Per-chunk PSD data
├── <session>_chunk_001_psd.csv
├── ...
└── binned_analysis/                 # (if --export-binned-* used)
    ├── <session>_bin_0_0.csv        # Spatial bin PSDs
    ├── <session>_bin_0_1.csv
    ├── ...
    └── <session>_bin_visualization.jpg  # (if --export-binned-jpgs)
CSV format:
- Columns: Frequency bins (e.g., 0.5 Hz, 1.0 Hz, ..., 250 Hz)
- Rows: PSD values (µV²/Hz) for each chunk or spatial bin
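A sketch of loading one of these PSD CSVs for plotting. The directory and file names here are hypothetical examples following the pattern above, and the column-label parsing assumes headers like "0.5 Hz"; adjust both to your files:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the session-average PSD (columns are frequency bins, per the format above)
psd = pd.read_csv("rat01_day1_SSMoutput_20250101_120000/rat01_day1_sessionAverage.csv")

freqs = [float(col.split()[0]) for col in psd.columns]  # e.g., "0.5 Hz" -> 0.5
plt.plot(freqs, psd.iloc[0])
plt.xlabel("Frequency (Hz)")
plt.ylabel("PSD (µV²/Hz)")
plt.show()
```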
"No .egf files found"
- Verify directory contains .egf files (Tint format)
- Check file permissions and path correctness
"Position data not found in .egf"
- Ensure tracking data is embedded in .egf file
- Verify correct .pxyabw file was integrated during Intan conversion
Binned analysis produces empty bins
- Check if tracking covers full environment (bins may be outside tracked area)
- Adjust the `--speed-filter` threshold if filtering out too much data
- Verify ppm calibration is correct (incorrect scaling affects spatial binning)
The prepare-dl command converts detected HFOs (EOIs) into deep learning training data with region-specific presets, behavioral state annotation, and optional train/validation splitting. Supports both single-session and batch modes.
# Simple preparation with auto-discovered position file
python -m stlar prepare-dl \
--eoi-file detections.txt \
--egf-file recording.egf \
--output training_data
# With behavior gating (speed annotation)
python -m stlar prepare-dl \
--eoi-file detections.txt \
--egf-file recording.egf \
--pos-file recording.pos \
--ppm 595 \
--output training_data
# With train/validation splitting
python -m stlar prepare-dl \
--eoi-file detections.txt \
--egf-file recording.egf \
--output training_data \
--split-train-val \
--val-fraction 0.2
Process multiple sessions at once. Each subdirectory should contain .egf and EOI files (.txt or .csv):
# Batch process directory structure:
# data_batch/
# session_A/
# recording.egf
# detections.txt
# recording.pos (optional, auto-discovered)
# session_B/
# recording.egf
# detections.csv
# session_C/
# recording.egf
# detections.txt
python -m stlar prepare-dl \
--batch-dir data_batch \
--region Hippocampus \
--split-train-val \
-v
Batch mode output:
- For each subdirectory: `session_name/prepared_dl/manifest.csv` (+ train/val splits if `--split-train-val`)
- Summary statistics printed showing processed/failed counts
Advantages of batch mode:
- Process multiple sessions with one command
- Automatically detects EOI and EGF files in each subdirectory
- Creates output directories within each session folder for easy organization
- Useful for multi-animal or multi-session studies
| Parameter | Type | Default | Description |
|---|---|---|---|
| `--eoi-file` | str | - | Path to EOI file (.txt, .csv). Required for single-session mode |
| `--egf-file` | str | - | Path to .egf file. Required for single-session mode |
| `--batch-dir` | str | - | Directory with subdirectories containing .egf and EOI files. Enables batch mode |
| `-o, --output` | str | - | Output directory. Required for single-session mode |
| `--region` | str | LEC | Brain region preset (LEC, Hippocampus, MEC) |
| `--set-file` | str | auto | Optional .set file for bits-to-uV conversion |
| `--pos-file` | str | auto-detect | Optional .pos file for behavior gating (auto-discovered if not specified) |
| `--ppm` | int | - | Pixels per meter for position calibration (e.g., 595) |
| `--prefix` | str | seg | Prefix for segment filenames |
| `--skip-bits2uv` | flag | off | Skip bits-to-uV conversion |
| `--split-train-val` | flag | off | Split manifest into train/val sets |
| `--val-fraction` | float | 0.2 | Fraction of data for validation (0.0-1.0) |
| `--random-seed` | int | 42 | Random seed for reproducible splits |
| `-v, --verbose` | flag | off | Verbose progress logging |
Each brain region has pre-configured frequency bands, duration constraints, and speed thresholds:
LEC (Lateral Entorhinal Cortex)
- Ripples: 80-250 Hz, 15-120 ms
- Fast ripples: 250-500 Hz, 10-80 ms
- Immobile threshold: 0.0-5.0 cm/s
Hippocampus
- Ripples: 100-250 Hz, 15-120 ms
- Fast ripples: 250-500 Hz, 10-80 ms
- Immobile threshold: 0.0-5.0 cm/s
MEC (Medial Entorhinal Cortex)
- Ripples: 80-200 Hz, 15-120 ms
- Fast ripples: 200-500 Hz, 10-80 ms
- Immobile threshold: 0.0-5.0 cm/s
The prepare-dl command creates:
training_data/
├── manifest.csv           # Complete manifest (all events)
├── manifest_train.csv     # Training set (if --split-train-val)
├── manifest_val.csv       # Validation set (if --split-train-val)
├── seg_00000.npy          # Signal segments (16-bit float)
├── seg_00001.npy
├── seg_00002.npy
└── ...
Manifest columns:
- `segment_path`: Path to .npy signal file
- `label`: Event label (None if unlabeled)
- `band_label`: Band classification (ripple, fast_ripple, gamma)
- `duration_ms`: Event duration in milliseconds
- `state`: Behavioral state (rest=immobile, active=moving, unknown=no position data)
- `mean_speed_cm_s`: Mean speed during event (cm/s)
Prepare labeled data for LEC:
python -m stlar prepare-dl \
--eoi-file scoring/session_HIL.txt \
--egf-file data/session.egf \
--set-file data/session.set \
--region LEC \
--output training_data/lec \
-v
Prepare with behavior gating and PPM calibration:
python -m stlar prepare-dl \
--eoi-file scoring/session_HIL.txt \
--egf-file data/session.egf \
--pos-file data/session.pos \
--ppm 595 \
--region Hippocampus \
--output training_data/hippo \
-v
Prepare with train/val split (80/20):
python -m stlar prepare-dl \
--eoi-file scoring/session_HIL.txt \
--egf-file data/session.egf \
--pos-file data/session.pos \
--ppm 595 \
--output training_data \
--split-train-val \
--val-fraction 0.2 \
--random-seed 42 \
-v
Prepare with 70/30 split and custom seed:
python -m stlar prepare-dl \
--eoi-file scoring/session_HIL.txt \
--egf-file data/session.egf \
--output training_data \
--split-train-val \
--val-fraction 0.3 \
--random-seed 123 \
-v
Batch mode: Process 5 animals (10 sessions) in one command:
# Directory structure:
# study_data/
# Animal_A_Session1/
# recording.egf
# detections.txt
# recording.pos
# Animal_A_Session2/
# recording.egf
# detections_ste.txt
# ...
# Animal_E_Session2/
# recording.egf
# detections.csv
python -m stlar prepare-dl \
--batch-dir study_data \
--region Hippocampus \
--ppm 595 \
--split-train-val \
--val-fraction 0.2 \
-v
# Output summary:
# BATCH PREPARED DL TRAINING DATA
# ============================================================
# Processed: 10 sessions
# Failed: 0 sessions
# Total events: 45,320
# Output base: study_data
# Region: Hippocampus
# ============================================================
When a .pos file is provided, events are automatically annotated with behavioral state:
- Rest: Speed within the region preset range (e.g., 0.0-5.0 cm/s) → immobile animal
- Active: Speed outside the range → moving animal
- Unknown: No position data available
This enables training deep learning models on behavioral context:
# Example: Train only on immobile periods
import pandas as pd
manifest = pd.read_csv('training_data/manifest.csv')
train_data = manifest[manifest['state'] == 'rest']
The --split-train-val flag creates two additional manifests:
- manifest_train.csv: Training set (default 80%)
- manifest_val.csv: Validation set (default 20%)
Splitting is:
- Random event-wise split when labels are absent
- Stratified split when labels exist (preserves label distribution)
- Reproducible with the `--random-seed` parameter
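STLAR performs this split internally; as a rough equivalent, a stratified split can be sketched with scikit-learn's `train_test_split` (illustrative only, assuming a manifest with a `label` column and scikit-learn installed):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

manifest = pd.read_csv("training_data/manifest.csv")

# Stratify on labels when they all exist, otherwise fall back to a random split
labels = manifest["label"] if manifest["label"].notna().all() else None
train_df, val_df = train_test_split(
    manifest, test_size=0.2, random_state=42, stratify=labels
)
```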
Example: Using the splits for training:
import numpy as np
import pandas as pd
from pathlib import Path
# Load manifests
train_df = pd.read_csv('training_data/manifest_train.csv')
val_df = pd.read_csv('training_data/manifest_val.csv')
# Load signals
train_signals = [np.load(path) for path in train_df['segment_path']]
val_signals = [np.load(path) for path in val_df['segment_path']]
# Train model...
This section describes the end-to-end workflow for training a custom deep learning model and using it for HFO detection.
The complete DL training pipeline has 4 steps:
- prepare-dl: Convert EOIs to training segments and manifests
- train-dl: Train a 1D CNN classifier on the prepared data
- export-dl: Export the trained model to production formats
- dl-batch: Use the exported model for detection on new recordings
python -m stlar prepare-dl \
--eoi-file detections.txt \
--egf-file recording.egf \
--output training_data \
--split-train-val \
--val-fraction 0.2
Output:
- `training_data/manifest_train.csv` - Training manifest (80% of events)
- `training_data/manifest_val.csv` - Validation manifest (20% of events)
- `training_data/seg_*.npy` - Signal segments (16-bit float arrays)
See: Deep Learning Training Data Preparation (prepare-dl) section above for detailed parameter documentation.
Train a 1D CNN classifier on the prepared training data:
Single-session mode:
python -m stlar train-dl \
--train training_data/manifest_train.csv \
--val training_data/manifest_val.csv \
--epochs 15 \
--batch-size 64 \
--lr 1e-3 \
--weight-decay 1e-4 \
--out-dir models
With CWT preprocessing (for 2D CNN models):
python -m stlar train-dl \
--train training_data/manifest_train.csv \
--val training_data/manifest_val.csv \
--model-type 6 \
--use-cwt \
--fs 4800 \
--epochs 15 \
--out-dir models
Batch mode (train multiple sessions):
# Directory structure:
# study_data/
# Animal_A_Session1/prepared_dl/
# manifest_train.csv
# manifest_val.csv
# Animal_A_Session2/prepared_dl/
# manifest_train.csv
# manifest_val.csv
# ...
python -m stlar train-dl \
--batch-dir study_data \
--epochs 15 \
--batch-size 64 \
-v
# Output: Each session gets a models/ subdirectory with best.pt
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `--train` | str | - | Path to training manifest CSV (single-session mode) |
| `--val` | str | - | Path to validation manifest CSV (single-session mode) |
| `--batch-dir` | str | - | Directory with subdirectories containing manifests (batch mode) |
| `--epochs` | int | 15 | Number of training epochs |
| `--batch-size` | int | 64 | Training batch size |
| `--lr` | float | 1e-3 | Learning rate |
| `--weight-decay` | float | 1e-4 | L2 regularization coefficient |
| `--out-dir` | str | models | Output directory for checkpoints (single-session only) |
| `--num-workers` | int | 2 | DataLoader worker processes |
| `--model-type` | int | 2 | Model architecture: 1=SimpleCNN, 2=ResNet1D (default), 3=InceptionTime, 4=Transformer, 5=2D_CNN, 6=HFO_2D_CNN |
| `--use-cwt` | flag | off | Enable CWT/Scalogram preprocessing for 2D models (types 5, 6) |
| `--fs` | float | 4800 | Sampling frequency in Hz (required when --use-cwt is enabled) |
| `--debug-cwt` | str | - | Save CWT scalogram images to the specified directory for inspection (optional) |
| `--gui` | flag | off | Enable real-time training GUI with live loss curves and diagnostics |
| `--no-plot` | flag | off | Disable static training curve plot (GUI takes precedence) |
| `-v, --verbose` | flag | off | Verbose training progress |
CWT Debug Mode: To verify CWT preprocessing is working correctly, you can save scalogram images for visual inspection:
python -m stlar train-dl \
--train path/to/train.csv \
--val path/to/val.csv \
--use-cwt \
--fs 4800 \
--model-type 6 \
--debug-cwt debug_scalograms/
This will save PNG images of the scalogram transformations with:
- Y-axis showing actual frequencies in Hz (80-500 Hz range)
- White dashed line at 250 Hz marking ripple/fast-ripple boundary
- Files named `scalogram_XXXXXX_HFO.png` or `scalogram_XXXXXX_NonHFO.png`
Output:
- `models/best.pt` - Best model checkpoint (saved by validation loss)
- `models/last.pt` - Final model checkpoint
- Training logs printed to console
Training Process:
- Trains for N epochs on training set
- Evaluates on validation set after each epoch
- Saves best model when validation loss improves
- Uses early stopping (stops if no improvement for 5 epochs)
- Device auto-detection (GPU if available, CPU fallback)
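A minimal sketch of the loop this describes (save the best checkpoint on validation improvement, stop after 5 stagnant epochs). It is illustrative, not the STLAR code; `train_one_epoch` and `evaluate` are placeholder callables you would supply:

```python
import torch

def train_loop(model, train_one_epoch, evaluate, epochs=15, patience=5,
               ckpt_path="best.pt"):
    """Illustrative: checkpoint on best val loss, early-stop after `patience` epochs."""
    device = "cuda" if torch.cuda.is_available() else "cpu"  # device auto-detection
    model.to(device)
    best_val, stale = float("inf"), 0
    for epoch in range(epochs):
        train_one_epoch(model, device)        # one pass over the training set
        val_loss = evaluate(model, device)    # evaluate on the validation set
        if val_loss < best_val:
            best_val, stale = val_loss, 0
            torch.save({"model_state": model.state_dict()}, ckpt_path)
        else:
            stale += 1
            if stale >= patience:             # no improvement for 5 epochs
                break
```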
Convert the trained model to production formats:
Single-session mode:
python -m stlar export-dl \
--ckpt models/best.pt \
--onnx models/model.onnx \
--ts models/model.pt \
--example-len 2000
Export CWT model (if trained with --use-cwt):
python -m stlar export-dl \
--ckpt models/best.pt \
--onnx models/model.onnx \
--ts models/model.pt \
--model-type 6 \
--use-cwt
Batch mode (export multiple trained models):
# Automatically finds best.pt in each session's models/ subdirectory
python -m stlar export-dl \
--batch-dir study_data \
-v
# Output: Each session gets TorchScript and ONNX exports
# E.g., study_data/Animal_A_Session1/models/Animal_A_Session1_model.pt
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `--ckpt` | str | - | Path to best.pt checkpoint (single-session mode) |
| `--batch-dir` | str | - | Directory with subdirectories containing best.pt (batch mode) |
| `--onnx` | str | - | Output path for ONNX model (single-session) or suffix (batch) |
| `--ts` | str | - | Output path for TorchScript model (single-session) or suffix (batch) |
| `--example-len` | int | 2000 | Example segment length for tracing |
| `--model-type` | int | 2 | Model architecture (must match training model type) |
| `--use-cwt` | flag | off | Enable CWT/Scalogram preprocessing (must match training setting) |
| `-v, --verbose` | flag | off | Verbose logging |
Output:
- `models/model.onnx` - ONNX format (cross-platform inference)
- `models/model.pt` - TorchScript format (fast inference, pure C++)
Note: ONNX export requires pip install onnx. If onnx is not installed, only TorchScript will be saved.
Detect HFOs in new recordings using the trained model:
python -m stlar dl-batch \
-f recording.egf \
--model-path models/model.pt \
--threshold 0.5 \
-o results/
Result: Detections saved to results/recording_DL.txt (start_ms, stop_ms format)
This example trains a model on one session and tests it on another:
#!/bin/bash
# 1. Detect HFOs in training session using consensus voting
python -m stlar consensus-batch \
-f data/session_A.egf \
-o scoring/
# Output: scoring/session_A_CONSENSUS.txt
# 2. Prepare training data from detected HFOs
python -m stlar prepare-dl \
--eoi-file scoring/session_A_CONSENSUS.txt \
--egf-file data/session_A.egf \
--set-file data/session_A.set \
--output training_data \
--split-train-val \
--val-fraction 0.2
# Output: training_data/manifest_train.csv, manifest_val.csv, seg_*.npy
# 3. Train the model
python -m stlar train-dl \
--train training_data/manifest_train.csv \
--val training_data/manifest_val.csv \
--epochs 20 \
--batch-size 32 \
--lr 5e-4 \
--out-dir models
# Output: models/best.pt, models/training_curves.png, models/training_metrics.json
# 3a. Train with real-time GUI monitoring
python -m stlar train-dl \
--train training_data/manifest_train.csv \
--val training_data/manifest_val.csv \
--epochs 20 \
--gui
# Opens interactive window with live loss curves and diagnostics
# 4. Export to production formats
python -m stlar export-dl \
--ckpt models/best.pt \
--onnx models/hfo_detector.onnx \
--ts models/hfo_detector.pt
# Output: models/hfo_detector.pt, models/hfo_detector.onnx
# 5. Test on new recording
python -m stlar dl-batch \
-f data/session_B.egf \
--model-path models/hfo_detector.pt \
--threshold 0.5 \
-o results/
# Output: results/session_B_DL.txt
Process multiple animals/sessions in one workflow:
#!/bin/bash
# 1. Prepare training data for multiple sessions (batch mode)
python -m stlar prepare-dl \
--batch-dir study_data \
--region Hippocampus \
--ppm 595 \
--split-train-val \
-v
# Output: Each session gets prepared_dl/ with manifests and segments
# 2. Train models for all sessions (batch mode)
python -m stlar train-dl \
--batch-dir study_data \
--epochs 15 \
--batch-size 64 \
-v
# Output: Each session gets models/best.pt
# 3. Export all trained models (batch mode)
python -m stlar export-dl \
--batch-dir study_data \
-v
# Output: Each session gets .pt and .onnx exports
# Summary output:
# BATCH TRAINING COMPLETE
# ============================================================
# Successful: 10 sessions
# Failed: 0 sessions
# Output base: study_data
# ============================================================
The main hyperparameters to tune are:
| Parameter | Range | Default | Effect |
|---|---|---|---|
| `--lr` (learning rate) | 1e-5 to 1e-2 | 1e-3 | Too high: unstable training. Too low: slow convergence |
| `--batch-size` | 8 to 256 | 64 | Lower: more noise but better generalization. Higher: faster training but less stable |
| `--weight-decay` | 0 to 1e-2 | 1e-4 | Higher: more regularization, reduces overfitting |
| `--epochs` | 5 to 100 | 15 | More epochs can improve performance (with early stopping) |
Tuning advice:
- Start with defaults (lr=1e-3, batch-size=64)
- If validation loss plateaus, try reducing learning rate by 2-5x
- If model overfits (val loss worse than train), increase weight-decay to 1e-3
- If training is unstable, reduce batch-size to 32 or lower
- Monitor validation loss; if no improvement for 5 epochs, training stops automatically
The train-dl command includes both static visualization (saved plots) and optional real-time GUI monitoring to help diagnose issues and optimize hyperparameters.
Enable live training visualization with the --gui flag:
python -m stlar train-dl \
--train data/manifest_train.csv \
--val data/manifest_val.csv \
--epochs 20 \
--gui
The GUI window shows:
- Live loss curves updating every epoch (train vs val)
- Current metrics (epoch, train loss, val loss, gap, improvement)
- Improvement plot showing validation loss delta per epoch
- Generalization gap (val - train loss) over time
- Training stability (rolling standard deviation)
- Diagnostics log with warnings for overfitting, plateaus, instability
- Stop button to halt training early if needed
GUI Features:
- ✅ Real-time updates after each epoch
- ✅ Automatic detection of overfitting, plateaus, and instability
- ✅ Manual stop button to halt training gracefully
- ✅ GUI stays open after training completes for review
- ✅ All 4 diagnostic plots synchronized with console output
- ✅ Works on Windows, macOS, and Linux
- ✅ No configuration needed: just use the `--gui` flag
When to use GUI:
- Interactive training sessions where you want to monitor progress
- Experimenting with hyperparameters and need immediate feedback
- Long training runs (20+ epochs) where you want to check status
- Teaching/demonstrations to show training dynamics
- Debugging overfitting or convergence issues
When to skip GUI:
- Batch processing multiple models (use `--no-plot` instead)
- Running on remote servers without display
- Server/background training jobs
- Automated pipelines and scripts
- Quick test runs
Even without the GUI, train-dl saves training curves automatically:
python -m stlar train-dl \
--train data/manifest_train.csv \
--val data/manifest_val.csv \
--epochs 20 \
--out-dir models
Automatic output:
- `models/training_curves.png` - 4-panel diagnostic plot (loss, improvement, gap, stability)
- `models/training_metrics.json` - Metrics in JSON format for programmatic analysis
- `models/best.pt` - Best model checkpoint
The diagnostic plot shows:
- Loss curves (overfitting indicator)
- Validation improvement rate (convergence speed)
- Generalization gap (model accuracy)
- Training stability (variance)
Automatic Outputs:
After training completes, you'll find in the output directory:
- `training_curves.png` - Comprehensive 4-panel diagnostic plot
- `training_metrics.json` - Complete training history in JSON format
What the visualization shows:
-
Training vs Validation Loss (top-left)
- Blue line: training loss over epochs
- Red line: validation loss over epochs
- Automatically detects and flags overfitting (when val >> train)
-
Loss Improvement per Epoch (top-right)
- Shows how much validation loss decreases each epoch
- Automatically detects plateaus (no improvement for 5+ epochs)
-
Generalization Gap (bottom-left)
- Shows val_loss - train_loss over time
- Red zone indicates overfitting
- Ideal: gap should be small and stable
-
Training Stability (bottom-right)
- Shows rolling standard deviation of losses
- Detects unstable training (high variance)
- Helps identify when to reduce batch size
Example output:
Epoch 10/20 | Train: 0.3245 | Val: 0.3512 | Gap: +0.0267 | Δ: -0.0023
✓ New best! Saved to models/best.pt
============================================================
Training complete!
Best epoch: 10 | Best val loss: 0.3512
============================================================
Generating training visualizations...
✓ Saved training curves to models/training_curves.png
✓ Saved training metrics to models/training_metrics.json
📊 Training Diagnostics:
----------------------------------------
✓ No significant overfitting
⚠️ PLATEAU detected
Val loss not improving
→ Try: Reduce --lr by 2-5x
✓ Training is stable
----------------------------------------
Interpreting the diagnostics:
| Warning | What it means | Action |
|---|---|---|
| Overfitting | Val loss much higher than train loss | Increase `--weight-decay` to 1e-3 |
| Plateau | Val loss stopped improving | Reduce `--lr` by 2-5x (e.g., from 1e-3 to 2e-4) |
| Instability | Large swings in loss values | Reduce `--batch-size` to 32 or lower |
Disabling plots:
If you don't want visualization (e.g., for batch processing on servers without display):
python -m stlar train-dl \
--train data/manifest_train.csv \
--val data/manifest_val.csv \
--no-plot
Accessing training history programmatically:
The training_metrics.json file contains all loss values:
{
"train_loss": [0.6234, 0.4521, 0.3845, ...],
"val_loss": [0.6421, 0.4712, 0.3912, ...],
"best_epoch": 10,
"best_val_loss": 0.3512
}
You can load this to create custom visualizations or analyses:
import json
import matplotlib.pyplot as plt
with open('models/training_metrics.json') as f:
history = json.load(f)
plt.plot(history['train_loss'], label='Train')
plt.plot(history['val_loss'], label='Val')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('custom_plot.png')
Requirements:
The visualization feature requires matplotlib:
pip install matplotlib
If matplotlib is not installed, training will still work but plots will be skipped with a warning.
The 1D CNN uses:
- Input: 1D signal segments (variable length, typically 2000-5000 samples)
- Architecture: 3 convolutional blocks with batch norm and max pooling
- Output: Binary logit (HFO vs non-HFO)
- Loss: Binary cross-entropy with logits
- Optimizer: Adam with weight decay
See hfoGUI/dl_training/model.py for implementation details.
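As a rough illustration of the architecture this describes (not the exact model in model.py), a self-contained PyTorch sketch:

```python
import torch
import torch.nn as nn

class TinyHFOCNN(nn.Module):
    """Illustrative 1D CNN: 3 conv blocks (conv + batch norm + max pool), binary logit."""
    def __init__(self):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in (16, 32, 64):  # 3 convolutional blocks
            blocks += [
                nn.Conv1d(in_ch, out_ch, kernel_size=7, padding=3),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
                nn.MaxPool1d(4),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Linear(64, 1)  # single binary logit (HFO vs non-HFO)

    def forward(self, x):                  # x: (batch, 1, length)
        h = self.features(x).mean(dim=-1)  # global average pool handles variable lengths
        return self.head(h).squeeze(-1)    # (batch,) logits

# Trained with BCE-with-logits and Adam + weight decay, per the description above:
model = TinyHFOCNN()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
logits = model(torch.randn(8, 1, 2000))  # e.g., a batch of 2000-sample segments
```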
| Issue | Solution |
|---|---|
| CUDA out of memory | Reduce --batch-size to 32 or lower |
| Very high training loss | Check data format (should be float32, normalized) |
| Validation loss not decreasing | Try smaller --lr (e.g., 5e-4), increase --weight-decay |
| Training very slow | Increase --batch-size, reduce number of --num-workers |
| Model crashes during export | Ensure checkpoint file is valid .pt file from training |
| ONNX export fails | Install with: pip install onnx onnxruntime |
| Plots not generated | Install matplotlib: pip install matplotlib |
Using trained model in Python:
import torch
from hfoGUI.dl_training.model import build_model
# Load checkpoint
ckpt = torch.load('models/best.pt', weights_only=False)
model = build_model()
model.load_state_dict(ckpt['model_state'])
model.eval()
# Inference
with torch.no_grad():
signal = torch.randn(1, 1, 2000) # batch_size=1, channels=1, length=2000
logit = model(signal)
prob = torch.sigmoid(logit)
Using TorchScript export:
import torch
# Load TorchScript model (no PyTorch training code needed)
model = torch.jit.load('models/model.pt')
signal = torch.randn(1, 1, 2000)
logit = model(signal)
prob = torch.sigmoid(logit)
Using ONNX export:
import onnxruntime as rt
import numpy as np
# Load ONNX model
sess = rt.InferenceSession('models/model.onnx')
signal = np.random.randn(1, 1, 2000).astype(np.float32)
logit = sess.run(None, {'input': signal})[0]
prob = 1 / (1 + np.exp(-logit))
- Location: `hfoGUI/` and `stlar/`
- Entry: `python -m stlar`
- See: docs/CONSENSUS_QUICKSTART.md, docs/CONSENSUS_DETECTION.md
- Location: `spatial_mapper/`
- Entry: `python -m stlar spatial-gui` (GUI) or `python -m stlar batch-ssm` (CLI)
- See: spatial_mapper/README.md
- Location: `hfoGUI/dl_training/`
- Entry: `python -m stlar prepare-dl` (CLI) or the GUI HFO Detection window
- See: hfoGUI/dl_training/README.md
This project unifies:
- PyhfoGUI - Temporal LFP analysis
- Spatial_Spectral_Mapper - Spatial LFP analysis
from hfoGUI.core.Detector import Detector
# Create detector instance
detector = Detector('mydata.eeg', fs=1250) # fs = sampling frequency
# Detect ripples
ripples = detector.detect_ripples(
method='hilbert', # 'hilbert', 'ste', 'mni', 'consensus', 'dl'
freq_min=80, # Hz
freq_max=250, # Hz
threshold_sd=3.5, # standard deviations
min_dur_ms=15, # minimum duration
max_dur_ms=120, # maximum duration
)
# Detect fast ripples
fast_ripples = detector.detect_ripples(
method='hilbert',
freq_min=250,
freq_max=500,
threshold_sd=3.5,
min_dur_ms=10,
max_dur_ms=80,
)
# Result format: list of dicts
# [{'start': 1.23, 'end': 1.35}, {'start': 4.56, 'end': 4.62}, ...]
from hfoGUI.dl_training.data import SegmentDataset
from torch.utils.data import DataLoader
# Load prepared training data
dataset = SegmentDataset('manifest.csv')
# Create a DataLoader for batching
loader = DataLoader(
dataset,
batch_size=64,
shuffle=True,
num_workers=2
)
# Iterate through batches
for batch_x, batch_y in loader:
# batch_x: (batch_size, 1, signal_length)
# batch_y: (batch_size,) with values 0 or 1
pass
from hfoGUI.dl_training.model import build_model
import torch
# Available architectures: 1-6 (see the --model-type table above)
model = build_model(model_type=2)  # ResNet1D (default)
model = model.to('cuda' if torch.cuda.is_available() else 'cpu')
# Inference on a custom signal
signal = torch.randn(1, 1, 2000)  # (batch, channels, length)
with torch.no_grad():
logit = model(signal)
prob = torch.sigmoid(logit).item()- Installation Issues? β See Installation > Troubleshooting
- How to use the GUI? β See GUI Workflow
- Command-line options? β See CLI Reference
- Algorithm details? β See Technical Reference
- Deep Learning training? β See Complete Deep Learning Training Workflow
```bash
# Try this:
python -m stlar gui -v

# If you see "ModuleNotFoundError: No module named 'PyQt5'":
pip install PyQt5
```

- Windows: Use `python3` instead, or add Python to PATH
- macOS/Linux: Try `python3 -m stlar gui`
- Make sure the file path is correct (use absolute paths if unsure)
- File must be `.eeg` or `.egf` format
- Check the file isn't being used by another program

- This is normal for the first detection (model loading takes time)
- For faster results, use GPU: `pip install torch torchvision torchaudio` with CUDA support
- Or use simpler methods like Hilbert, which are very fast

- Install PyTorch: `pip install torch` (CPU) or follow Installation > Deep Learning for GPU
- STLAR will run without torch, but DL features won't work

- Make sure you're in the STLAR directory: `cd STLAR/`
- And that the virtual environment is activated: `conda activate stlar` or `source stlar/bin/activate`
Found a bug? Report it on GitHub Issues
When reporting, include:
- Your OS (Windows/macOS/Linux)
- Python version: `python --version`
- What you were doing when the error occurred
- The full error message (copy-paste from terminal)
---

Detection not finding HFOs?
- Check you're using the correct frequency band for your data
- Lower the threshold (e.g., `--threshold-sd 2.5`)
- Try different detection methods (consensus is the most reliable)
- See CLI Reference for parameter tuning guides
File format errors?
- Ensure files are in `.eeg` or `.egf` format
- Check the file isn't corrupted: try opening it in Tint first
- Verify the file path has no spaces or special characters
Need more details?
- 📖 For scientists & engineers: read `docs/TECHNICAL_REFERENCE.md` for all algorithms, formulas, and implementation details
- 📖 For consensus voting: check `docs/CONSENSUS_DETECTION.md` for the theory
- 📖 Quick reference: `docs/CONSENSUS_SUMMARY.md` has a quick comparison table
- 🔧 Run with the `-v` flag for verbose output
Found a bug? Have a feature request? → Open an issue: github.com/HussainiLab/STLAR/issues
Include:
- What command you ran
- Error message (full traceback)
- Python version: `python --version`
- Operating system
- ✅ Auto-saves all training parameters to `training_params.json` at the start and end of training
- ✅ Intelligent recommendations that adapt based on current parameters (not hardcoded)
- ✅ Tracks training metrics: best epoch, final losses, training time
- ✅ Automatic diagnostics: detects overfitting, plateaus, and instability, and suggests next steps

Example output:
```json
{
  "timestamp": "2026-01-02T10:30:45.123456",
  "device": "cuda",
  "learning_rate": 0.001,
  "weight_decay": 0.001,
  "best_epoch": 12,
  "final_train_loss": 0.2145,
  "final_val_loss": 0.3421,
  "overfitting_detected": true,
  "tuning_recommendations": [
    "Overfitting (gap=0.1276): weight_decay already at 0.001, try 0.01 or add dropout"
  ]
}
```

- ✅ 10-50x faster DL detection with batch processing (256 windows at a time; see the sketch after this list)
- ✅ GPU support - automatic detection of CUDA availability
- ✅ Better memory efficiency - processes long recordings without OOM
- ✅ "None" option in GUI - skip region-specific filtering if you want raw detection
- ✅ "None" option in CLI - `--region None` for no preset filtering (new default)
- ✅ Smart recommendations for the next training iteration based on diagnostics
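As referenced above, here is a hedged sketch of what batched sliding-window inference looks like; the function name, window length, and step are assumptions for illustration, not STLAR's internal API:

```python
import numpy as np
import torch

def detect_probs(signal, model, window=2000, step=1000, batch_size=256):
    """Score overlapping windows of a 1D recording, 256 windows per forward pass."""
    device = 'cuda' if torch.cuda.is_available() else 'cpu'  # automatic GPU detection
    model = model.to(device).eval()
    starts = range(0, len(signal) - window + 1, step)
    windows = np.stack([signal[s:s + window] for s in starts]).astype(np.float32)
    probs = []
    with torch.no_grad():
        for i in range(0, len(windows), batch_size):  # one batch on the device at a time
            batch = torch.from_numpy(windows[i:i + batch_size]).unsqueeze(1).to(device)
            probs.append(torch.sigmoid(model(batch)).squeeze(1).cpu())
    return torch.cat(probs).numpy()
```

Scoring many windows per forward pass amortizes Python and transfer overhead, which is where a large speed-up comes from, while keeping only one batch resident on the GPU bounds memory on long recordings.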
- ✅ Duration calculated automatically when adding EOIs manually
- ✅ Robust ID creation - handles malformed IDs gracefully without crashing
- ✅ Flexible brain region selection - "None" option for users who don't want region filtering
- ✅ ONNX opset upgraded to version 17 (was 14) - now supports the `stft` operator for more models
- ✅ Better error handling - gracefully falls back to TorchScript if ONNX export fails
- ✅ Tab order reorganized: "Automatic Detection" is now the first tab, "Score" is second (a more intuitive workflow)
- ✅ Comprehensive Help (How to Score tab):
  - Interactive table of contents with section links
  - HTML-styled documentation with detailed explanations
  - Detection parameter table with literature justifications
  - Step-by-step workflows for HFO metrics analysis and DL training
  - Hyperparameter guidance with practical recommendations
  - Score column descriptions and usage examples
- ✅ Auto-load `.pos` files: when importing a set file, if a `.pos` file exists in the same directory, it's automatically added as a "Speed" source
- ✅ Source labeling on graphs: the Y-axis shows meaningful labels:
  - For `.egf`/`.eeg` files: filter cutoff range (e.g., "4-12" Hz)
  - For `.pos` files: "Speed"
  - For other files: the source file extension
- ✅ Behavioral State column: added to the Score tab to show "rest" vs "active" state during each HFO
  - Automatically computed from `.pos` speed data when EOIs are imported
  - Updated when a region preset is applied (only for "unknown" states)
  - Shows "unknown" for manual scores without speed data
- ✅ Speed threshold re-computation: when a preset is applied, behavioral states are recalculated for previously "unknown" entries using the new speed thresholds (see the sketch after this list)
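A minimal sketch of the speed-based rule implied by these items; the 2 cm/s rest threshold and the function name are assumptions (STLAR takes its thresholds from the active region preset):

```python
import numpy as np

def behavioral_state(speed, fs_pos, start_s, end_s, rest_max_cms=2.0):
    """Label an event 'rest' or 'active' from mean speed over its time span."""
    i0, i1 = int(start_s * fs_pos), int(np.ceil(end_s * fs_pos))
    window = speed[i0:max(i1, i0 + 1)]
    if window.size == 0:
        return 'unknown'  # no speed data covering the event
    return 'rest' if window.mean() < rest_max_cms else 'active'

speed = np.abs(np.random.default_rng(0).normal(3, 2, 50 * 60))  # fake 50 Hz speed trace
print(behavioral_state(speed, fs_pos=50, start_s=1.20, end_s=1.32))
```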
- ✅ Score label updates on preset apply: when a region preset is applied, all existing Score items are re-labeled based on duration thresholds:
  - Ripple: event duration within `ripple_min_ms` to `ripple_max_ms`
  - Fast Ripple: event duration within `fast_ripple_min_ms` to `fast_ripple_max_ms`
  - None: event duration outside both ranges (ambiguous)
- ✅ No "Sharp Wave Ripple" labels: ambiguous events are labeled "None" for clarity
- ✅ Threshold (SD) refers to Hilbert: clarified in the region preset that "Threshold" controls Hilbert detector sensitivity
- ✅ "Auto" as default scorer: changed from the generic "Scorer 1" to "Auto" (indicates automated detection or processing)
- ✅ Applied consistently: default "Auto" across the `addScore`, `addEOI`, and `loadScores` functions
- ✅ Two distinct export buttons:
  - Export HFO Metrics (CSV): exports quantitative metrics (ripple rates, pathology scores, rest/active breakdown) for Ripple-family events only
  - Export for DL Training: exports labeled segments (30 ms windows), a manifest CSV, and a metrics summary for model training
- ✅ Behavioral state preservation: both export paths preserve the computed `behavioral_state` metadata
- ✅ Co-occurrence detection integration: detects ripple + fast-ripple overlaps and marks them as "co-occurrence" for analysis
- ✅ Fixed `loadScores` crash: `UnboundLocalError` when loading saved scores (variables are now properly initialized)
- ✅ Case-insensitive behavioral state check: re-computation checks work with "unknown", "Unknown", "UNKNOWN", etc.
- ✅ Filter cutoff label extraction: safely handles missing or malformed cutoff values in source labels
- ✅ 25 ms event merging: all detection methods (Hilbert, STE, MNI, Consensus, DL) apply a post-detection 25 ms merge window to reduce over-fragmentation (see the sketch after this list)
- ✅ Peak validation for all detectors: STE, MNI, and Consensus now validate a minimum of 4 peaks at the 5 SD threshold (previously only Hilbert)
- ✅ Updated default parameters: changed from 6 peaks at 2 SD → 4 peaks at 5 SD for more stringent HFO validation
- ✅ Consensus merge window: increased from 10 ms to 25 ms for better event consolidation
- ✅ DL window defaults: the DL CLI now exposes `--window-size`/`--overlap`, defaulting to 0.1 s and 0.5 (≈75 ms internal merge prior to the 25 ms post-merge standard)
- ✅ Behavioral state computation in exports: behavioral state is computed and stored in the manifest for all exported segments
- ✅ Region preset support: the CLI respects region-specific duration and behavior thresholds when exporting
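To make the 25 ms merge rule concrete, here is a minimal sketch over the `{'start', 'end'}` event dicts shown earlier (times in seconds; the function name is illustrative, not STLAR's internal one):

```python
def merge_events(events, merge_ms=25):
    """Fuse events whose gap is within merge_ms, reducing over-fragmentation."""
    merged = []
    for ev in sorted(events, key=lambda e: e['start']):
        if merged and ev['start'] - merged[-1]['end'] <= merge_ms / 1000.0:
            merged[-1]['end'] = max(merged[-1]['end'], ev['end'])  # extend previous event
        else:
            merged.append(dict(ev))
    return merged

print(merge_events([{'start': 1.00, 'end': 1.03},
                    {'start': 1.04, 'end': 1.08},   # 10 ms gap -> merged with the first
                    {'start': 2.00, 'end': 2.05}]))
# -> [{'start': 1.0, 'end': 1.08}, {'start': 2.0, 'end': 2.05}]
```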
All detection parameters standardized to epilepsy literature best practices with universal peak validation:
| Detector | Parameter | Value | Justification |
|---|---|---|---|
| Hilbert | Threshold (SD) | 3.5 | 3.5σ above baseline; literature standard |
| | Required Peaks | 4 | Minimum 4 peaks @ 5σ confirms oscillatory content |
| | Peak Threshold (SD) | 5.0 | 5σ above baseline; stringent validation |
| | Min Duration | 10 ms | Shortest physiological HFO duration |
| | Merge Window | 25 ms | Post-detection merging for robustness |
| STE | Threshold (RMS) | 2.5 | 2.5× baseline RMS energy |
| | Required Peaks | 4 | Peak validation (same as Hilbert) |
| | Peak Threshold (SD) | 5.0 | Validates genuine oscillations |
| MNI | Threshold (Percentile) | 98% | Top 2% of energy distribution |
| | Required Peaks | 4 | Peak validation (same as Hilbert) |
| | Peak Threshold (SD) | 5.0 | Validates genuine oscillations |
| Consensus | Overlap Window | 25 ms | Merge window for voting |
| Region Presets | Ripple Duration | 10-150 ms | Ripple-specific range |
| | Fast Ripple Duration | 10-50 ms | Faster, pathological oscillations |
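A hedged sketch of the universal peak-validation rule from the table: an event is kept only if its band-passed segment contains at least 4 rectified peaks above 5 baseline standard deviations (function and argument names are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def passes_peak_validation(segment, baseline_sd, n_peaks=4, peak_sd=5.0):
    """Keep an event only if >= n_peaks rectified peaks exceed peak_sd * baseline SD."""
    peaks, _ = find_peaks(np.abs(segment), height=peak_sd * baseline_sd)
    return len(peaks) >= n_peaks

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 500)
burst = noise + 8 * np.sin(2 * np.pi * np.arange(500) / 10)  # strong added oscillation
print(passes_peak_validation(noise, baseline_sd=1.0))  # False: noise rarely exceeds 5 SD
print(passes_peak_validation(burst, baseline_sd=1.0))  # True: many peaks above 5 SD
```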
Updated JSON configuration files in the `settings/` directory:

- `hilbert_params.json` - Hilbert detection parameters
- `ste_params.json` - STE detection parameters
- `mni_params.json` - MNI detection parameters
- `consensus_params.json` - Consensus voting parameters
- `profiles.json` - Region presets (LEC, Hippocampus, MEC)

All parameters now include the 25 ms merge window and standardized thresholds.
- ✅ Real-time training monitor with live loss curves and diagnostics plots
- ✅ Automatic diagnostics for overfitting detection, loss plateaus, and training instability
- ✅ Early stopping (stops after 5 epochs without improvement; see the sketch after this list)
- ✅ Static training plots saved automatically (`training_curves.png`, `training_metrics.json`)
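For reference, the 5-epoch patience rule in a minimal, self-contained form (illustrative only; the actual bookkeeping lives in the training code under `hfoGUI/dl_training/`):

```python
def early_stop_epoch(val_losses, patience=5):
    """Return the epoch index at which training stops (no improvement for `patience` epochs)."""
    best, bad = float('inf'), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bad = loss, 0   # new best: reset the patience counter
        else:
            bad += 1
            if bad >= patience:
                return epoch      # 5 consecutive epochs without improvement
    return len(val_losses) - 1

print(early_stop_epoch([0.9, 0.7, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65]))  # -> 7
```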
- Refactored training GUI for better PyQt5 compatibility
- Lazy imports for GPU/CUDA modules (better for CPU-only installations)
- Improved cross-platform compatibility (Windows, macOS, Linux)
GPL-3.0 License - see LICENSE file for details.
STLAR - Advancing spatio-temporal understanding of neural oscillations 🧠
