Hi there,
I am interested in using these models for simple classification of audio segments into inhalation and exhalation phases. My goal is to run online studies in which I monitor breathing via a microphone and segment cognitive performance according to different breathing cycles, or perhaps even by differences in inhalation vs. exhalation length.
As an outsider, I still find it rather difficult to get an intuitive sense of how to apply your pre-trained models to produce this kind of audio segmentation, although from what I can tell, your models are clearly up to the task!
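To make the goal concrete, here is a rough sketch of the post-processing I have in mind. Everything here is hypothetical: I'm assuming the model can emit a per-frame inhale/exhale label, and `labels_to_segments` is just a placeholder helper of my own, not part of your API.

```python
def labels_to_segments(labels, frame_sec):
    """Collapse a per-frame label sequence into (label, start_s, end_s) segments.

    labels: list of per-frame labels, e.g. "in" / "out" (assumed model output)
    frame_sec: duration of one frame in seconds
    """
    segments = []
    start = 0
    for i in range(1, len(labels) + 1):
        # Close a segment whenever the label changes or the sequence ends.
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start * frame_sec, i * frame_sec))
            start = i
    return segments

# Example: six 0.5 s frames of hypothetical classifier output
frames = ["in", "in", "in", "out", "out", "in"]
print(labels_to_segments(frames, frame_sec=0.5))
# → [('in', 0.0, 1.5), ('out', 1.5, 2.5), ('in', 2.5, 3.0)]
```

From segments like these I could then compute inhalation/exhalation durations per breathing cycle and align them with task events.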
I would dearly appreciate any help or advice. Great work!