Use ffmpeg to send input to opensmile to get features? #35
Comments
There is a SMILEapi which can be used for this.
Can you please elaborate? I am having some trouble understanding where and what to change.
Documentation on SMILEapi is unfortunately rather sparse. Basically, it boils down to:
- SMILEapi is a C API, for maximum compatibility with other languages.
- openSMILE includes a Python wrapper, which is recommended if you are working in Python.

You might also want to take a look at the implementation of https://github.com/audeering/opensmile-python, which under the hood uses SMILEapi via the Python wrapper.
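For example, a minimal sketch of the opensmile-python route, assuming an existing WAV file (the eGeMAPSv02 functionals set is used here purely for illustration):

```python
import opensmile

# opensmile-python wraps SMILEapi under the hood and ships the standard
# feature sets, so no config file has to be written by hand.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

# process_file returns a pandas DataFrame indexed by segment start/end time.
features = smile.process_file("audio.wav")
print(features)
```

For audio that is already decoded into memory (e.g. by FFmpeg), smile.process_signal(signal, sampling_rate) can be used instead of process_file.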
Is there any way to get the data per frameTime in real time for prosody, MFCC and eGeMAPS in openSMILE? Also, what is the way to use FFmpeg with the API? I see that I have to pass the data (an audio file) generated by FFmpeg, or can I stream data via FFmpeg and pass it instead?
When using SMILEapi in combination with the cExternalAudioSource component, you can stream audio in real time from FFmpeg to openSMILE. You'll need to set up the audio recording with FFmpeg, and then pass each individual buffer of audio received from FFmpeg to openSMILE via the SMILEapi function smile_extaudiosource_write_data.
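A rough sketch of such a loop, assuming FFmpeg is asked to decode to 16 kHz, mono, signed 16-bit PCM on stdout; push_buffer_to_opensmile is a hypothetical placeholder for the actual SMILEapi call (smile_extaudiosource_write_data at the C level; see SMILEapi.py for the corresponding Python wrapper method):

```python
import subprocess

SAMPLE_RATE = 16000
CHUNK_BYTES = 3200  # 0.1 s of 16-bit mono audio at 16 kHz


def push_buffer_to_opensmile(chunk: bytes) -> None:
    # Hypothetical placeholder: forward the raw PCM buffer to the running
    # openSMILE instance, e.g. via the SMILEapi Python wrapper, which calls
    # smile_extaudiosource_write_data on the cExternalAudioSource component.
    pass


# Decode any FFmpeg-readable input (file, device, UDP stream, ...) to raw
# signed 16-bit little-endian PCM on stdout.
ffmpeg = subprocess.Popen(
    [
        "ffmpeg", "-i", "input.mp4",
        "-vn",                  # drop the video stream
        "-ac", "1",             # mono
        "-ar", str(SAMPLE_RATE),
        "-f", "s16le",          # raw PCM, no container
        "pipe:1",
    ],
    stdout=subprocess.PIPE,
)

# Read fixed-size buffers and hand each one to openSMILE as it arrives.
while True:
    chunk = ffmpeg.stdout.read(CHUNK_BYTES)
    if not chunk:
        break
    push_buffer_to_opensmile(chunk)
```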
What will be the way to use FFmpeg with the Python API? [waveIn:cFFmpegSource]
To get started with SMILEapi, see the API definition and comments in https://github.com/audeering/opensmile/blob/master/progsrc/smileapi/python/opensmile/SMILEapi.py. See also the help on components in the openSMILE documentation.
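For illustration, when audio is pushed in through SMILEapi, the [waveIn:...] section of a config is typically replaced by a cExternalAudioSource instance. A sketch of what that might look like (the option names below are assumptions modeled on the other waveform source components; check the component help for the exact names):

```
[componentInstances:cComponentManager]
instance[externalAudioSource].type = cExternalAudioSource

// The application pushes raw audio buffers into this component,
// e.g. via smile_extaudiosource_write_data from SMILEapi.
// Option names below are assumptions; verify against the component help.
[externalAudioSource:cExternalAudioSource]
writer.dmLevel = wave
sampleRate = 16000
channels = 1
nBits = 16
```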
We have an FFmpeg command ready to decode the audio coming from the UDP port, but how do we integrate the command into the openSMILE Python API?
Can anyone help me with the above query?
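As a sketch of the FFmpeg side, an invocation along the following lines decodes a UDP stream to raw 16-bit PCM on stdout, which can then be read in chunks and handed to openSMILE as in the loop above (the port, sample rate, and channel count are placeholders):

```sh
ffmpeg -i udp://0.0.0.0:5000 -vn -ac 1 -ar 16000 -f s16le pipe:1
```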
I want to use FFmpeg to send input to openSMILE and generate eGeMAPS, prosody, or MFCC features.
I am able to modify the config files to get live input, but now I want to take the input from a video source, extract the audio via FFmpeg, and send it to openSMILE.
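As a sketch of the simplest, file-based variant of this pipeline, assuming the eGeMAPSv02 config shipped with openSMILE 3.x (the exact config path and command-line option names may differ between versions):

```sh
# Extract a 16 kHz mono WAV track from the video with FFmpeg
ffmpeg -i input.mp4 -vn -ac 1 -ar 16000 audio.wav

# Run openSMILE feature extraction on the extracted audio
SMILExtract -C config/egemaps/v02/eGeMAPSv02.conf -I audio.wav -csvoutput egemaps.csv
```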