FFT system module and new audiovisual demo #38
base: main
Conversation
I was implementing some of my review changes when I noticed the following in Debug: there is a noticeable delay between the two output channels in the audiovisual demo. This is less apparent in Release mode, but still there. If I change the number of outputs to 1 using the audio service configuration, I don't hear any delay. Is this an output thread / sync bug?

Note that I made sure it's not related to the recent audio / threading changes -> the issue persists between the old and new versions of NAP. When I don't connect the FFTNode inside the FFTAudioNodeComponentInstance, the lag is gone:

//mFFTNode->mInput.connect(*mInput->getOutputForChannel(mResource->mChannel));

This (from what I can tell) shows that the processing step on the audio thread introduces a noticeable lag on the channel selected for FFT analysis. When I introduce a second FFTAudioNode for channel 1, the delay is gone, because the latency introduced by the FFT node is then the same on both channels. I profiled the latency, which is not related to the mutex - the copy of the buffer, in Debug, is enough to hear the delay.
void FFTBuffer::supply(const std::vector<float>& samples)
{
    // Copy samples
    {
        std::lock_guard<std::mutex> lock(mSampleBufferMutex);
        mSampleBufferA = samples;
    }

    // TODO: Use notification to transform data on different thread!
    mDirty = true;
}
@stijnvanbeek maybe you can pitch in on what to do here?
@lshoek: I implemented many of my suggested (review) changes and optimized the threaded copy of the buffer by introducing an intermediate buffer that the original samples are moved into before being processed: b14bd70. I recommend going over my changes for validation.
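The intermediate-buffer optimization mentioned above could look roughly like the sketch below. This is not the actual code from commit b14bd70, just an illustration of the idea with hypothetical names: the producer moves incoming samples into a staging buffer under a short lock, and the consumer swaps the staging buffer out before doing any expensive processing, so neither side holds the lock during the FFT itself.

```cpp
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical sketch of the intermediate-buffer idea, not the NAP implementation.
class StagedBuffer
{
public:
    // Producer side: move the samples in, no deep copy.
    void supply(std::vector<float>&& samples)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        mStaging = std::move(samples);
        mDirty = true;
    }

    // Consumer side: swap the staging buffer into a local vector so the
    // lock is held only for the swap, not for the transform/FFT work.
    bool consume(std::vector<float>& out)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        if (!mDirty)
            return false;
        out.swap(mStaging);
        mDirty = false;
        return true;
    }

private:
    std::mutex mMutex;
    std::vector<float> mStaging;
    bool mDirty = false;
};
```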
if (resource == nullptr)
    continue;

if (resource->mID == "RenderStars")
I thought we had something in place for rendering cube-maps on initialization; why this verbose render step on frame 1 here? Also: getObjects() should not be used - it is considered bad practice because it can return recently replaced (not active) targets, which in this case is extra bad because we only do this on frame one.
namespace nap
{
    bool RenderAudioRoadComponentInstance::init(utility::ErrorState& errorState)
Isn't it better to simply create a new render audio road component, instead of deriving from the renderable mesh component, since you override the draw method and require a specific mesh type?
Very nice project and module! :) I would consider (optionally) performing the FFT on the audio thread (in the FFTNode::process method) and also introducing the possibility of a reverse FFT, to allow for frequency-domain processing of the audio. It would be cool to be able to insert an FFTProcess polymorphically that performs any frequency-domain processing. Maybe we can cooperate on this. :)

About the FFTBuffer::supply() method and the perceived delay: it is good to keep in mind that both the mutex lock and the deep copy of the buffer in the supply() method are bad practice on audio threads. The solution could be to get rid of the mutex and the buffer copying and use the moodycamel single-producer single-consumer queue to queue floats from the audio thread and dequeue them in the FFTBuffer::transform() function. This talk on realtime audio processing in C++ is nice to watch:
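The principle behind the suggested SPSC queue can be sketched without the moodycamel dependency. The minimal ring buffer below is an illustration only (the moodycamel ReaderWriterQueue is the library actually being suggested): the audio thread pushes floats without locking or allocating, and the analysis thread pops them whenever it runs.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal single-producer single-consumer ring buffer; a sketch of the
// lock-free handoff idea, not the moodycamel implementation.
class SpscRing
{
public:
    explicit SpscRing(std::size_t capacity) : mData(capacity + 1) {}

    // Audio thread: returns false when full instead of blocking.
    bool push(float v)
    {
        std::size_t head = mHead.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % mData.size();
        if (next == mTail.load(std::memory_order_acquire))
            return false; // full: drop rather than stall the audio thread
        mData[head] = v;
        mHead.store(next, std::memory_order_release);
        return true;
    }

    // Analysis thread: returns false when empty.
    bool pop(float& v)
    {
        std::size_t tail = mTail.load(std::memory_order_relaxed);
        if (tail == mHead.load(std::memory_order_acquire))
            return false;
        v = mData[tail];
        mTail.store((tail + 1) % mData.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> mData;
    std::atomic<std::size_t> mHead{0};
    std::atomic<std::size_t> mTail{0};
};
```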
There are no clicks and pops, only a (very) noticeable delay.
This is rather far-fetched. The vector (on the stack) re-allocates memory only when the new size is greater than the old capacity. Considering the stream is relatively stable, this only occurs at the beginning. Without profiling, I am dismissing this statement and going with what the standard tells us.
Removing the lock has no impact on the problem, making this a non-issue, irrelevant to the latency issue described above. Also: you could (should) use std::mutex::try_lock instead of a regular lock, to avoid stalling the audio thread. This is preferred because the FFT analysis can run at a lower frequency: it's generally used for visual elements and feedback, and is therefore better suited to the main thread or, optionally, a different one, where rendering takes more time than processing (update).
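The try_lock variant could look like the sketch below. Names are illustrative, not the actual NAP API: if the analysis thread happens to hold the lock, the audio thread skips the handoff for this block instead of blocking, and simply tries again on the next callback.

```cpp
#include <mutex>
#include <vector>

// Sketch of the std::mutex::try_lock suggestion; hypothetical names.
struct TryLockBuffer
{
    std::mutex mMutex;
    std::vector<float> mSamples;
    bool mDirty = false;

    // Audio thread: returns true if the samples were handed over,
    // false if the analysis thread was busy and the block was skipped.
    bool supply(const std::vector<float>& samples)
    {
        if (!mMutex.try_lock())
            return false; // analysis busy: skip rather than stall
        mSamples = samples;
        mDirty = true;
        mMutex.unlock();
        return true;
    }
};
```

Skipping a block is acceptable here precisely because, as noted above, the analysis feeds visuals that update at a much lower rate than the audio callback.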
Well, considering the above I'm not sure this will solve the issue - my suspicion is that something else is not quite right. I find it hard to believe that a single buffer copy introduces this delay.
Just letting you know I've had another look at this just now, revamping a dependent project with a Motu M2 audio interface. I'm addressing issues as I go and will proceed this week.
@lshoek: Moving this PR from private to public so @stijnvanbeek can review these changes as well.
From the original PR:
An `audiovisual` demo that demonstrates various aspects of the new FFT possibilities (and compute & `naprenderadvanced`)