Description
I'm noticing an odd issue when attempting to benchmark the performance of Porcupine models against audio files with different characteristics (background noise, SNR, etc.). Specifically, there is significant variation in the model's true positive rate simply from changing the temporal spacing of the wake word in the test data. For example, when using the "Alexa" dataset and the pre-trained "alexa_linux.ppn" from the latest version of Porcupine, I see the true positive rate of the model behave as shown below:
Happy to provide additional details and even the test files that were created, if that would be useful.
I've also noticed similar variation with respect to wake-word temporal separation when using custom Porcupine models and manually recorded test clips, so the issue does not appear to be limited to the "alexa_linux.ppn" model.
Expected behaviour
The model should perform similarly regardless of the temporal separation of wake words in an input audio stream.
Actual behaviour
The model shows variations of up to 10 percentage points in the true positive rate depending on the temporal separation of wake words.
Steps to reproduce the behaviour
- Use the "Alexa" dataset from here
- Using the functions in mixer.py as a foundation, create test clips of varying lengths by mixing with background noise from the DEMAND dataset (specifically, the "DLIVING" recording). The SNR was fixed at 10 dB, and the same segment of the noise audio file was used for every test clip. Each test clip was converted to 16-bit, 16 kHz, single-channel WAV format (a mixing sketch along these lines follows this list).
- Initialize Porcupine and run the test clips sequentially through the model using the default frame size (512) and default sensitivity level (0.5). Capture all of the true positive predictions and divide by the total number of test clips to calculate the true positive rate (an evaluation sketch also follows this list).
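For reference, a minimal sketch of the mixing step is below. It does not use the actual helpers from mixer.py; the `mix_at_snr` function and the file names are placeholders for illustration, and it assumes numpy and soundfile are installed.

```python
import numpy as np
import soundfile as sf

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech/noise power ratio matches snr_db, then mix."""
    # Assumes the noise segment is at least as long as the speech clip.
    noise = noise[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # P_speech / (scale^2 * P_noise) = 10^(SNR/10)  =>  solve for scale
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    mixed = speech + scale * noise
    # Guard against clipping before the 16-bit conversion.
    return np.clip(mixed, -1.0, 1.0)

# Hypothetical file names for illustration; clips are already 16 kHz mono.
speech, sr = sf.read("alexa_clip.wav")
noise, _ = sf.read("DLIVING_segment.wav")   # same DEMAND noise segment for every clip

mixed = mix_at_snr(speech, noise, snr_db=10.0)
# Write as the 16-bit, 16 kHz, single-channel WAV format that Porcupine expects.
sf.write("test_clip_000.wav", mixed, sr, subtype="PCM_16")
```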
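And here is a rough sketch of the evaluation loop, assuming the pvporcupine Python package (recent versions require a Picovoice AccessKey); the directory and file names are placeholders. Each test clip contains one occurrence of the wake word, so the true positive rate is simply detections divided by total clips.

```python
import os
import struct
import wave

import pvporcupine

# AccessKey is required by recent Porcupine versions; older 1.x releases omit it.
porcupine = pvporcupine.create(
    access_key="${ACCESS_KEY}",
    keyword_paths=["alexa_linux.ppn"],
    sensitivities=[0.5],
)

def detected(path):
    """Return True if the wake word is detected anywhere in the clip."""
    with wave.open(path, "rb") as wav:
        assert wav.getframerate() == porcupine.sample_rate  # 16 kHz
        assert wav.getnchannels() == 1                      # mono
        pcm = wav.readframes(wav.getnframes())
        samples = struct.unpack_from("<%dh" % wav.getnframes(), pcm)
    frame_length = porcupine.frame_length  # 512 samples per frame by default
    for start in range(0, len(samples) - frame_length + 1, frame_length):
        if porcupine.process(samples[start:start + frame_length]) >= 0:
            return True
    return False

clips = sorted(p for p in os.listdir("test_clips") if p.endswith(".wav"))
hits = sum(detected(os.path.join("test_clips", p)) for p in clips)
print("true positive rate: %.3f" % (hits / len(clips)))

porcupine.delete()
```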