A high-performance, general-purpose C++ implementation of the Adaptive Chirplet Transform for time-frequency analysis of non‑stationary signals. Suitable for audio, radar/sonar, biomedical, and other domains.
The Adaptive Chirplet Transform (ACT) is a powerful signal processing technique that decomposes signals into chirplets - Gaussian-enveloped sinusoids with time-varying frequency. This implementation provides:
- High Performance: Multi-platform CPU-only backends built on BLAS/Accelerate, plus GPU-accelerated code targeting Apple Metal and CUDA (Intel x86 Linux) via the Apple MLX library.
- Flexible Analysis: Configurable parameter ranges for different types of analysis.
- Example Tests and Applications: Python bindings and example applications focused on EEG analysis.
This implementation uses a two-stage, greedy matching pursuit approach explored in the referenced papers (a code sketch follows the list below):
1. Coarse dictionary search
   - Build a discrete grid of chirplet parameters (tc, fc, logDt, c).
   - Generate unit-energy chirplet templates and compute correlations against the current residual.
   - Select the best-scoring atom as the initialization.
2. Local continuous optimization
   - Refine the selected atom's parameters via BFGS over (tc, fc, logDt, c) to maximize correlation.
   - Estimate the optimal coefficient (least-squares against the unit-energy template).
3. Greedy update and iterate
   - Subtract the reconstructed chirplet from the residual and repeat steps 1-2 up to the chosen transform order K.
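The loop below is a minimal Eigen-based sketch of this procedure, not the library's actual code: it assumes a precomputed dictionary matrix whose columns are unit-energy chirplet templates, and it elides the BFGS refinement step (marked in a comment).

```cpp
#include <Eigen/Dense>
#include <utility>
#include <vector>

// Greedy matching pursuit sketch. `D` is a (length x num_atoms) dictionary whose
// columns are unit-energy chirplet templates; `K` is the transform order.
std::vector<std::pair<int, double>> greedy_pursuit(const Eigen::MatrixXd& D,
                                                   Eigen::VectorXd residual,
                                                   int K) {
    std::vector<std::pair<int, double>> picks;  // (atom index, coefficient)
    for (int k = 0; k < K; ++k) {
        // (1) Coarse search: correlate every dictionary atom with the residual.
        Eigen::VectorXd abs_scores = (D.transpose() * residual).cwiseAbs();
        Eigen::Index best;
        abs_scores.maxCoeff(&best);
        // (2) The real implementation refines (tc, fc, logDt, c) with BFGS here
        //     and regenerates the atom; this sketch keeps the coarse atom.
        //     With a unit-energy atom, the least-squares coefficient is a dot product.
        double coeff = D.col(best).dot(residual);
        // (3) Greedy update: subtract the reconstructed chirplet and iterate.
        residual -= coeff * D.col(best);
        picks.emplace_back(static_cast<int>(best), coeff);
    }
    return picks;
}
```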
- Core algorithm: The baseline implementation performs dictionary-based matching pursuit with unit-energy chirplet generation and BFGS refinement. `search_dictionary` is virtual, enabling backend overrides.
- CPU backends: `ACT_CPU` (Eigen + BLAS baseline) and `ACT_Accelerate` (Apple Accelerate-optimized) provide fast CPU execution. BLAS is used on Linux; Accelerate on macOS.
- MLX backend (GPU, float32): `ACT_MLX` enables a GPU-accelerated coarse search using Apple MLX when compiled with `USE_MLX=ON`. It pre-packs the dictionary to the device and runs `scores = transpose(dict) @ x` and `argmax` on the device. Double precision falls back to the CPU path. MLX can be compiled to run on Apple Metal or CUDA.
- Python bindings: `python/act_bindings` exposes `pyact.mpbfgs` with `ActCPUEngine`, `ActMLXEngine`, and a backward-compatible `ActEngine` (CPU by default). `transform(...)` returns a rich dict. Tests are included.
- MLX build integration: `scripts/setup_mlx.sh` installs the vendored MLX into `actlib/lib/mlx/install/`. The CMake options `USE_MLX`, `MLX_INCLUDE`, `MLX_LIB`, and `MLX_LINK` configure the Python build. `scripts/build_pyact_mlx.sh` builds the wheel with MLX enabled.
- EEG ML utilities: `python/act_mlp` provides ACT-based feature extraction (defaults to `ActMLXEngine`) and a simple MLP training pipeline (work in progress).
- Profiling: `profile_act.cpp` and `profile_act_mt.cpp` measure end-to-end timings using a sample dictionary (dictionary generation, search, transform, SNR) for a single signal and a batch of signals. Use `--help` to see options for selecting backend and precision.
- CLI analyzer: `eeg_act_analyzer` supports interactive exploration of CSV EEG data and ACT parameters.
How to try the faster search today
- Use `ACT_CPU` (CPU) or `ACT_MLX` (float32 MLX when enabled):
  - Instantiate `ACT_CPU` or `ACT_MLX`, call `generate_chirplet_dictionary()`, then run `search_dictionary(...)` or `transform(...)`. `ACT_MLX` offloads the coarse search to MLX (GPU) when the project is built with `USE_MLX=ON`.
In the following example, we show how to use the ACT_MLX class to perform a chirplet transform on a signal.
double fs = 128.0; // Signal Sampling frequency
int length = 128; // Analysis window length
// Dictionary parameter ranges: (min, max, step) for each of tc, fc, logDt, c
ACT::ParameterRanges ranges(
    0, length, 16,     // tc: time center in samples, 0..length, step 16
    2.0, 12.0, 2.0,    // fc: frequency center, 2-12 Hz, step 2 Hz
    -3.0, -1.0, 1.0,   // logDt: log duration, -3..-1, step 1
    -10.0, 10.0, 10.0  // c: chirp rate, -10..10 Hz/s, step 10
);
// Initialize ACT_MLX
ACT_MLX act(fs, length, ranges, true);
// Generate dictionary in memory
int dict_size = act.generate_chirplet_dictionary();
std::cout << "Dictionary generated: " << dict_size << " atoms\n";
// Transform signal
// Signal is a vector<double|float> of length `length`
// Transform order = number of chirplets to find (i.e., how many iterations to run on the residual signal)
int transform_order = 2;
ACT::TransformResult res = act.transform(signal, transform_order);
std::cout << "Chirplets found: " << res.params.rows() << "\n";
for (int i = 0; i < res.params.rows(); ++i) {
std::cout << " #" << (i+1) << ": tc=" << res.params(i,0)
<< ", fc=" << res.params(i,1)
<< ", logDt=" << res.params(i,2)
<< ", c=" << res.params(i,3)
<< ", coeff=" << res.coeffs[i] << "\n";
}

The result is a TransformResult object containing the chirplets found, the residual signal, the error, and the signal reconstructed from the chirplets.
- C++17 compatible compiler
- macOS: Xcode Command Line Tools (Accelerate framework)
- Linux: BLAS/LAPACK (`sudo apt-get install libblas-dev liblapack-dev`)
- Python 3.8+ (for bindings), `pip`
# Clone the repository
git clone <repository-url>
cd <repository-name>
# Build all targets
make all
# Run tests
make test

Performance profiling with default settings:

make profile

# 0) Initialize and build the vendored MLX once
git submodule update --init --recursive
bash scripts/setup_mlx.sh # builds and installs into actlib/lib/mlx/install/
# 1) Build all C++ targets with MLX enabled on macOS
make USE_MLX=1 \
MLX_INCLUDE="$(pwd)/actlib/lib/mlx/install/include" \
MLX_LIB="$(pwd)/actlib/lib/mlx/install/lib" \
MLX_LINK="-lmlx" \
all

Notes:
- On macOS the Makefile links Apple Accelerate and Metal frameworks automatically.
- The MLX GPU coarse search path currently runs in float32; double precision falls back to the CPU path.
Prerequisites (Linux/CUDA):
- NVIDIA Driver and CUDA Toolkit (nvcc)
- cuBLAS and cuDNN runtime/dev libraries on your system library path
- Optional: set `CUDA_HOME` and ensure `nvcc` is in `PATH`
Suggested packages for Ubuntu (on top of typical CUDA setup):
sudo apt-get install libblas-dev liblapack-dev liblapacke-dev
sudo apt-get install libnccl2 libnccl-dev

# 0) Initialize and build the vendored MLX once (CUDA backend)
git submodule update --init --recursive
bash scripts/setup_mlx_cuda.sh # builds and installs into actlib/lib/mlx/install/
# 1) Build all C++ targets with MLX enabled on Linux/CUDA
make USE_MLX=1 \
MLX_INCLUDE="$(pwd)/actlib/lib/mlx/install/include" \
MLX_LIB="$(pwd)/actlib/lib/mlx/install/lib" \
MLX_LINK="-lmlx -lcublasLt -lcublas -lcudnn" \
all
# If the CUDA libraries are not found at runtime, export LD_LIBRARY_PATH, e.g.:
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
# You may also need: MLX_LINK="-lmlx -lcublasLt -lcublas -lcudnn -lcudart"

Python usage (pyact.mpbfgs):

import numpy as np
from pyact.mpbfgs import ActCPUEngine, ActMLXEngine
fs, length = 256.0, 256
ranges = dict(
tc_min=0, tc_max=length-1, tc_step=8,
fc_min=2, fc_max=20, fc_step=2,
logDt_min=-3, logDt_max=-1, logDt_step=0.5,
c_min=-10, c_max=10, c_step=5,
)
# CPU backend (double)
cpu = ActCPUEngine(fs, length, ranges, True, True, "")
# MLX backend (float32); falls back to CPU if MLX not compiled in
mlx = ActMLXEngine(fs, length, ranges, True, True, "")
#random signal for testing
x = np.random.randn(length)
out = mlx.transform(x, order=3)
print("Error:", float(out["error"]))
print("First component params:", out["params"][0])# From repo root (optionally in a venv)
python3 -m pip install -v ./python/act_bindings

- Initialize the MLX submodule (only once per clone):
# If you did not clone with --recurse-submodules
git submodule update --init --recursive
# Alternatively, clone with submodules
# git clone --recurse-submodules <repository-url>
- Build/install MLX into the vendored path:
bash scripts/setup_mlx.sh

- Create a venv (optional but recommended):
python3 -m venv python/.venv
source python/.venv/bin/activate

- Install the extension with MLX enabled. Either:
# A) Use environment variable (easiest)
CMAKE_ARGS="-DUSE_MLX=ON \
-DMLX_INCLUDE=$(pwd)/actlib/lib/mlx/install/include \
-DMLX_LIB=$(pwd)/actlib/lib/mlx/install/lib \
-DMLX_LINK=-lmlx" \
python3 -m pip install -v ./python/act_bindings
# B) Use repeated --config-settings (pip)
python3 -m pip install -v ./python/act_bindings \
--config-settings=cmake.args=-DUSE_MLX=ON \
--config-settings=cmake.args=-DMLX_INCLUDE=$(pwd)/actlib/lib/mlx/install/include \
--config-settings=cmake.args=-DMLX_LIB=$(pwd)/actlib/lib/mlx/install/lib \
--config-settings=cmake.args=-DMLX_LINK=-lmlx
# Or simply run the convenience script (does A for you)
chmod +x scripts/build_pyact_mlx.sh
./scripts/build_pyact_mlx.sh

During configure you should see:
pyact: USE_MLX=ON
pyact: MLX_INCLUDE=...
pyact: Found MLX header at .../mlx/mlx.h
After a successful build/install, run quick smoke tests to verify the installation:
python3 -m pytest -q python/act_bindings/tests

An interactive command-line tool is included to explore EEG CSV data and run ACT over selected windows.
Build the analyzer and run it:
# Build the analyzer (or `make all`)
make eeg-analyzer
# Run the interactive CLI
./bin/eeg_act_analyzer

Example session using the included sample data `data/muse-testdata.csv` (Muse TP9, fs = 256 Hz):
> load_csv data/muse-testdata.csv
> select 1 0 2048 # column_index start_sample num_samples
> show_params # view current tc/fc/logDt/c ranges and size estimate
> params fc 25 49 1 # set frequency range (Hz)
> params logDt -3.0 -0.7 0.3 # set log-duration grid
> params c -15 15 3 # set chirp-rate grid (Hz/s)
> create_dictionary # builds dictionary for current window length
> analyze 5 0.01 # find top 5 chirplets, stop if residual < 0.01
# Sliding-window analysis over samples with overlap
> analyze_samples 3 4096 256 # num_chirps end_sample overlap
> exit
Available commands in the CLI (eeg_act_analyzer.cpp):
- `load_csv <filepath>`: Load a CSV; the first row is treated as headers. Non-numeric cells become NaN.
- `select <column_idx> <start_sample> <num_samples>`: Choose a segment. The DC offset is removed and NaNs are filtered.
- `params <tc|fc|logDt|c> <min> <max> <step>`: Adjust dictionary parameter ranges.
- `show_params`: Print the current parameter ranges and an estimated dictionary memory footprint.
- `create_dictionary`: Construct the dictionary for the selected segment length.
- `analyze <num_chirplets> <residual_threshold>`: Run ACT and print chirplet parameters and residuals.
- `analyze_samples <num_chirps> <end_sample> <overlap>`: Slide a window of dictionary length across the signal.
- `help` / `exit`.
Notes:
- Sampling frequency defaults to 256 Hz (Muse). Adjust the code if your data differs.
- `tc` is in samples; reported time is converted to seconds as `tc / fs`.
- Duration is reported as `1000 * exp(logDt)` in milliseconds (see the conversion snippet after these notes).
- The analyzer uses `linenoise` for history; a local history file `.eeg_analyzer_history.txt` is created.
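For reference, the two conversions mentioned above amount to the following; this is a stand-alone sketch with arbitrary example values, not analyzer code:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double fs = 256.0;       // Muse default sampling rate (Hz)
    const double tc = 512.0;       // example time center, in samples
    const double logDt = -1.3863;  // example log-duration

    std::printf("time     = %.3f s\n", tc / fs);                    // tc / fs
    std::printf("duration = %.1f ms\n", 1000.0 * std::exp(logDt));  // 1000 * exp(logDt)
    return 0;
}
```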
ACT (Base)
├── ACT_CPU (Eigen + BLAS baseline)
├── ACT_Accelerate (Apple Accelerate-optimized CPU)
└── ACT_MLX (MLX-accelerated coarse search for float32; CPU fallback otherwise)
- Dictionary Search: Fast discrete parameter matching
- BFGS Optimization: Continuous parameter refinement
- tc (Time Center): When the chirplet occurs
- fc (Frequency Center): Base frequency of oscillation
- logDt (Duration): Logarithmic duration parameter
- c (Chirp Rate): Frequency modulation rate (Hz/s)
- `logDt` is the natural log of the Gaussian time-scale `Dt` (in seconds) used in the chirplet's envelope: `g[n] = exp(-0.5 * ((t - tc) / Dt)^2) * cos(2π(c (t - tc)^2 + fc (t - tc)))`.
- To choose dictionary bounds from desired durations (seconds): `logDt_min = ln(Dt_min_s)`, `logDt_max = ln(Dt_max_s)` (see the helper sketch after this list).
- Examples (seconds → `logDt`):
  - 50 ms = 0.050 s → `logDt ≈ ln(0.050) = -2.9957`
  - 100 ms = 0.100 s → `logDt ≈ -2.3026`
  - 250 ms = 0.250 s → `logDt ≈ -1.3863`
  - 500 ms = 0.500 s → `logDt ≈ -0.6931`
- If you think in samples at sampling rate `fs` (Hz): `Dt_s = Dt_samples / fs`, so `logDt = ln(Dt_samples / fs)`.
  - Example: `Dt_samples = 64`, `fs = 256 Hz` → `logDt = ln(64/256) = ln(0.25) ≈ -1.3863`.
- Relation to Gaussian FWHM: for `exp(-0.5*(t/Dt)^2)`, `FWHM ≈ 2.355·Dt`. If you specify FWHM in seconds, use `logDt = ln(FWHM / 2.355)`.
- Typical EEG windows (fs ≈ 256 Hz, 1 s windows): short bursts of 50–500 ms ⇒ `logDt ∈ [ln(0.05), ln(0.5)] ≈ [-3.00, -0.69]`. The examples in this README (`-3 .. -1`) cover ≈ 50–370 ms.
- Step size guidance: a `logDt_step` of `0.25–0.5` is a good starting point (it gives geometric spacing in `Dt`).
- Output mapping: results often report `duration_ms = 1000 * exp(logDt)`. This is the inverse of the above.
- Implementation note: internally, `logDt` is clamped to `[-10, 2]` for stability; choose bounds within this interval.
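The conversions above can be wrapped in small helpers when setting up dictionary bounds. The function names below are illustrative only and are not part of the library API:

```cpp
#include <cmath>
#include <cstdio>

// Gaussian time-scale Dt (seconds) -> logDt dictionary bound.
double logDt_from_seconds(double Dt_s) { return std::log(Dt_s); }

// Gaussian FWHM (seconds) -> logDt, using FWHM ≈ 2.355 * Dt.
double logDt_from_fwhm(double fwhm_s) { return std::log(fwhm_s / 2.355); }

int main() {
    // 50 ms to 500 ms bursts (the typical EEG range discussed above).
    std::printf("logDt_min = %.2f\n", logDt_from_seconds(0.050));         // ≈ -3.00
    std::printf("logDt_max = %.2f\n", logDt_from_seconds(0.500));         // ≈ -0.69
    // A 64-sample Dt at fs = 256 Hz.
    std::printf("logDt     = %.4f\n", logDt_from_seconds(64.0 / 256.0));  // ≈ -1.3863
    return 0;
}
```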
The original Python reference implementation generated chirplets without unit-energy normalization.
def g(self, tc=0, fc=1, logDt=0, c=0):
    """
    tc: in SAMPLES; fc: Hz; logDt: log duration; c: Hz/s
    FS: sampling rate; length: number of samples
    """
    tc /= self.FS
    Dt = np.exp(logDt)
    t = np.arange(self.length)/self.FS
    gaussian_window = np.exp(-0.5 * ((t - tc)/(Dt))**2)
    complex_exp = np.exp(2j*np.pi * (c*(t-tc)**2 + fc*(t-tc)))
    final_chirplet = gaussian_window * complex_exp
    if not self.complex:
        final_chirplet = np.real(final_chirplet)
    if self.float32:
        final_chirplet = final_chirplet.astype(np.float32)
    return final_chirplet

In C++ (ACT.cpp), we made two critical changes to match principled ACT behavior and fix duration bias:
- Unit-energy normalization: every generated chirplet `g` is L2-normalized to have unit energy. Without normalization, longer-duration atoms systematically win during dictionary search, forcing `logDt` to the upper bound and degrading recovery. Normalization removes this bias and aligns the objective with correlation rather than raw energy.
- Coefficient estimation scaling: with unit-energy atoms, the optimal coefficient is simply the dot product between the signal and the chirplet. We removed an incorrect division by the sampling rate (FS) that had been applied previously. With normalization, this yields correct amplitudes and SNR. (A code sketch of both changes follows below.)
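The sketch below illustrates both changes with plain Eigen code; it is a simplified stand-in for the real `ACT.cpp` implementation, not a copy of it. The real chirplet is L2-normalized after generation, and the coefficient is then just the dot product with the signal, with no division by FS.

```cpp
#include <Eigen/Dense>
#include <cmath>

// Generate a real, unit-energy chirplet (simplified stand-in for ACT.cpp).
Eigen::VectorXd unit_chirplet(double tc_samples, double fc, double logDt, double c,
                              double fs, int length) {
    const double pi = 3.14159265358979323846;
    const double tc = tc_samples / fs;   // time center in seconds
    const double Dt = std::exp(logDt);   // Gaussian time-scale in seconds
    Eigen::VectorXd g(length);
    for (int n = 0; n < length; ++n) {
        const double dt = n / fs - tc;
        const double window = std::exp(-0.5 * (dt / Dt) * (dt / Dt));
        const double phase  = 2.0 * pi * (c * dt * dt + fc * dt);
        g[n] = window * std::cos(phase); // real output: cosine part of the complex exponential
    }
    const double norm = g.norm();
    if (norm > 0.0) g /= norm;           // unit-energy (L2) normalization
    return g;
}

// With a unit-energy atom, the optimal least-squares coefficient is a dot product
// (no division by the sampling rate).
double estimate_coefficient(const Eigen::VectorXd& signal, const Eigen::VectorXd& atom) {
    return signal.dot(atom);
}
```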
Additional notes:
- Real output uses cosine consistently when `complex_mode=false` (the real part of the complex exponential), matching the Python `np.real(...)` behavior.
- The same unit-energy normalization is implemented in the SIMD code paths (`ACT_SIMD.cpp`) using Apple Accelerate (vDSP) on macOS and NEON helpers on ARM for efficiency.
- These changes were validated by strict synthetic tests (noiseless and 0 dB noisy), demonstrating accurate parameter recovery and SNR improvement and eliminating the previous `logDt` upper-bound bias. A minimal example of such a check follows these notes.
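A minimal version of such a synthetic check could look like the following. It assumes the interface documented above (`ACT_CPU` constructor, `generate_chirplet_dictionary()`, `transform()`) and simply compares the recovered parameters against the ground truth used to synthesize the signal; treat it as a sketch rather than one of the project's actual tests.

```cpp
#include "ACT_CPU.h"
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const double fs = 256.0;
    const int length = 256;

    // Ground-truth chirplet: tc in samples, fc in Hz, logDt, c in Hz/s.
    const double tc = 128.0, fc = 10.0, logDt = -1.5, c = 4.0;
    const double Dt = std::exp(logDt), pi = 3.14159265358979323846;

    // Synthesize a noiseless chirplet as the test signal.
    std::vector<double> signal(length);
    for (int n = 0; n < length; ++n) {
        const double dt = n / fs - tc / fs;
        signal[n] = std::exp(-0.5 * (dt / Dt) * (dt / Dt))
                  * std::cos(2.0 * pi * (c * dt * dt + fc * dt));
    }

    // Dictionary ranges that bracket the ground truth.
    ACT_CPU::ParameterRanges ranges(0, length - 1, 8.0,
                                    2.0, 20.0, 1.0,
                                    -3.0, -1.0, 0.5,
                                    -10.0, 10.0, 2.0);
    ACT_CPU act(fs, length, ranges, /*verbose=*/false);
    act.generate_chirplet_dictionary();

    auto res = act.transform(signal, 1);  // recover a single chirplet
    std::cout << "recovered: tc=" << res.params(0, 0)
              << " fc=" << res.params(0, 1)
              << " logDt=" << res.params(0, 2)
              << " c=" << res.params(0, 3)
              << " coeff=" << res.coeffs[0] << "\n";
    return 0;
}
```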
#include "ACT_Accelerate.h"
// Create parameter ranges
ACT_CPU::ParameterRanges ranges(0, 2047, 8.0, // time: 0-2047, step 8
25.0, 50.0, 2.0, // freq: 25-50Hz, step 2Hz
-3.0, -1.0, 0.5, // duration: log scale
-10.0, 10.0, 5.0);// chirp rate: ±10 Hz/s
// Initialize ACT_Accelerate (uses Accelerate on macOS; BLAS on Linux)
ACT_Accelerate act(256.0, 2048, ranges, /*verbose=*/true);
act.generate_chirplet_dictionary();
// Analyze signal
auto result = act.transform(signal, 5); // Find top 5 chirplets

#include "ACT_MLX.h"
ACT_CPU::ParameterRanges ranges(/* ... */);
ACT_MLX act(256.0, signal_length, ranges, /*verbose=*/true);
act.generate_chirplet_dictionary();
auto result = act.transform(eeg_signal, 10);

- Eigen to guarantee contiguous memory layout for vectors and matrix operations
- AlgLib for BFGS optimization
- BLAS/LAPACK for CPU backends
- Apple Accelerate for macOS
- Apple MLX for GPU acceleration (float32)
- linenoise for example EEG analyzer
- CMake for building
- `numpy`, `pybind11`, `scikit-build-core`
- `pytest` (tests), `pytest-timeout` (optional)
ACT_cpp/
├── actlib/
│ ├── include/ # Public headers (ACT_CPU.h, ACT_Accelerate.h, ACT_MLX.h, ...)
│ ├── src/ # Core sources (ACT_CPU.cpp, ACT_Accelerate.cpp, ACT_MLX.cpp, ...)
│ ├── test/ # C++ tests & profiling (test_*.cpp, profile_act*.cpp, ...)
│ └── lib/ # Vendored/third-party libs used by C++
│ ├── alglib/ # ALGLIB numerical library
│ └── mlx/ # MLX submodule (C++)
│ ├── build/ # Local build directory (generated)
│ └── install/ # Headers/libs/metallib (generated)
├── apps/
│ └── eeg_act_analyzer/
│ └── src/ # Interactive EEG CLI analyzer
├── python/
│ ├── act_bindings/ # pybind11 extension (pyact.mpbfgs)
│ ├── act_mlp/ # EEG feature extraction + MLP utilities
│ └── rbf/ # Experimental RBF trainer
├── data/ # Sample data (CSV, JSON, ndjson)
├── docs/ # Project documentation
├── research/ # Papers and references
├── scripts/ # setup_mlx.sh, build_pyact_mlx.sh, etc.
├── web_apps/ # Demos (e.g., p5js)
├── Makefile # Native build system
└── README.md # This file
Adapted from the Python code at: https://github.com/amanb2000/Adaptive_Chirplet_Transform
Based on the seminal paper:
Mann, S., & Haykin, S. (1992). The chirplet transform: A generalization of Gabor's logon. Vision Interface, 92, 205-212.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass: `make test` and `python3 -m pytest -q python/act_bindings/tests`
- Submit a pull request
- MLX header not found (`mlx/mlx.h`): Ensure you ran `scripts/setup_mlx.sh`. Pass MLX paths to CMake via `-DMLX_INCLUDE`, `-DMLX_LIB`, and `-DMLX_LINK=-lmlx`. The Python binding prints `pyact: MLX_INCLUDE=...` during configure.
- Pip `--config-settings` flags ignored: Each `-D` must be a separate `--config-settings=cmake.args=...` entry, or use the `CMAKE_ARGS` environment variable.
- `lipo: can't figure out the architecture type of: .../pyenv/shims/cmake`: Noisy but harmless if the build proceeds. Prefer a Homebrew CMake ahead of pyenv shims in `PATH` if desired.
- No MLX speedup observed: The MLX path accelerates the coarse search for float32 (`ACT_MLX_f`). Double precision falls back to the CPU path. In Python, `ActMLXEngine` uses float32 internally; ensure you built the wheel with `USE_MLX=ON`.
- Improve the greedy matching pursuit algorithm
Same as parent project - see main LICENSE file.