This library provides a simple pipeline that integrates various models of the auditory periphery developed in academia over the years. The pipeline can combine models of the different stages of auditory perception to extract features for machine learning algorithms.
Detailed documentation for the auditory library can be found here.
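The core idea of such a pipeline is a cascade: each auditory stage transforms the output of the previous one, and the final stage emits features for a downstream model. The sketch below is a hypothetical illustration of that pattern, not the library's actual API; the stage functions (`outer_middle_ear`, `half_wave_rectify`, `frame_energy`) are toy stand-ins invented for this example.

```python
import numpy as np

def compose(*stages):
    """Chain stages so the output of one feeds the next."""
    def pipeline(signal):
        for stage in stages:
            signal = stage(signal)
        return signal
    return pipeline

# Toy stand-ins for periphery stages (NOT the library's models):
def outer_middle_ear(signal):
    # Simple pre-emphasis as a placeholder for outer/middle-ear filtering.
    return np.append(signal[0], signal[1:] - 0.97 * signal[:-1])

def half_wave_rectify(signal):
    # Crude stand-in for hair-cell transduction.
    return np.maximum(signal, 0.0)

def frame_energy(signal, frame=160):
    # Feature extraction: per-frame energy for a downstream classifier.
    n = len(signal) // frame
    return np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                     for i in range(n)])

pipeline = compose(outer_middle_ear, half_wave_rectify, frame_energy)
# One second of a 440 Hz tone at 16 kHz → one feature per 10 ms frame.
features = pipeline(np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0))
print(features.shape)
```

Swapping in a different model for any stage only requires that it consume and produce a signal array, which is what makes this kind of cascade convenient for comparing peripheral models under a fixed feature-extraction back end.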
If you use the code in your research, please cite the following paper:
@inproceedings{eidos2020,
  title = {{Eidos: An Open-Source Auditory Periphery Modeling Toolkit and
           Evaluation of Cross-Lingual Phonemic Contrasts}},
  author = {Alexander Gutkin},
  booktitle = {Proc. of 1st Joint Spoken Language Technologies for Under-Resourced
              Languages (SLTU) and Collaboration and Computing for Under-Resourced
              Languages (CCURL) Workshop (SLTU-CCURL 2020)},
  pages = {9--20},
  year = {2020},
  month = may,
  address = {Marseille, France},
  publisher = {European Language Resources Association (ELRA)},
  url = {https://aclanthology.org/2020.sltu-1.2/},
}
Please also cite the reference publications describing the particular algorithms from this collection that you use. These can be found in the respective model directories under third_party/audition/models.
This is not an officially supported Google product.