Open
Description
At the GRIS, we want to use audio descriptors to generate trajectories. Based on the "describe" example, I can analyze the audio signal in real time using JUCE and the FluCoMa library (tip of main).
However, I want to detect events (onset detection) with sample-accurate precision, and I'm not sure which class to use, nor how to use it (fluid::algorithm::OnsetDetectionFunctions or fluid::algorithm::OnsetSegmentation).
The results I get only tell me whether or not an event was detected in the analyzed window. Is it possible to determine precisely at which audio sample within that window the event occurs?
Thanks!