[IDEA] Phase/Polarity adjust #39
Thank you for the references! In what context do the phase/polarity errors occur? The spectrogram-based recognizers (everything except the correlation recognizer) are much too inaccurate time-wise to account for phase; they are accurate to about 0.04 seconds with the default settings. I would think the correlation recognizer handles phase properly, especially if you use it during the fine_align step. Could you send audio files and describe the code methods so I can reproduce the issue?
Well, as the resource articles above explain, phase/polarity issues occur when a live audio performance is multitracked from two or more source points. Check out this photo: As you can easily understand, after syncing all the audio tracks (especially the cameras' tracks), it is quite likely that phase cancellations are generated. Here's an interesting explanatory video about phase/polarity: Note that some commercial A/V sync software - PluralEyes, for example - automatically performs audio drift correction (a kind of phase/polarity fix?) when needed. Again, dunno if it falls within the scope of the project, but it would certainly be very useful to have. Last but not least, I hope some of the mentioned phase-correction software authors can provide expertise on phase-fixing techniques/details. Thanks in advance.
Have you observed this problem with specific audio files or specific recognizers? Again, correlation should accurately account for phase; the other recognizers are far too inaccurate time-wise, due to the spectrograms, for phase issues to be an addressable concern. I don't think addressing polarity in audalign would yield meaningful results; it would be better addressed in a DAW afterward. Are you suggesting that phase alignment be applied after every alignment?
With multi-mic recordings, it is rather common to flip the polarity of some channels. A common example is top/bottom snare-drum mic'ing, where phase cancellation (comb filtering) is rather obvious. In other cases it can be more subtle, e.g. when using a figure-8 mic. One can aid detection of which polarities to invert by correlating channels. Harrison Mixbus, for example, has a built-in tool for this: https://youtu.be/f_f8G5tnkfk?t=272 Phase rotation or sub-sample alignment is of no real concern for alignment. I don't know audalign, so I cannot judge whether such a feature would be better addressed there or in a DAW.
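The correlation check described above can be sketched in a few lines: cross-correlate a channel against a reference track, and if the correlation peak with the largest magnitude is negative, the channel is likely polarity-inverted. This is a minimal illustration of the idea, not audalign's or Mixbus's actual implementation; the function name and the threshold-free decision rule are my own.

```python
import numpy as np

def suggest_polarity_flip(reference, channel):
    """Suggest whether `channel` should be polarity-inverted.

    Cross-correlates the channel with a reference track; if the
    correlation sample with the largest magnitude is negative, the
    channel is probably flipped relative to the reference.
    """
    corr = np.correlate(channel, reference, mode="full")
    peak = corr[np.argmax(np.abs(corr))]
    return bool(peak < 0)

# Example with white noise, whose autocorrelation peaks sharply at zero lag
rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
flipped = -ref
print(suggest_polarity_flip(ref, flipped))  # True
print(suggest_polarity_flip(ref, ref))      # False
```

A real tool would of course compare against a chosen reference mic (e.g. the overhead) and only suggest a flip above some confidence margin, rather than deciding on the raw peak sign alone.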
I have recorded dozens of live music shows, and I have been able to hear phase issues with my own ears on large stages and in classical music performances. Of course, phase and/or polarity correction/optimization must be performed AFTER alignment - that's why I was wondering if it could fit within the project's scope - but it should certainly be a user-selectable additional option and not done by default. @x42 Thanks for your interesting contributions, which allowed me to discover Music Alignment Tool CHest!
Thanks again for the resources! I'll definitely look into a post-processing phase alignment function
Bump. Is "AI" your friend? This interesting deep-learning project by @karisigurd4 aims to solve this problem by automatically correcting phase discrepancies: Hope to test it soon!
Looks neat! I'll see if I can incorporate it
Well, you may be able to incorporate the inferencer but not the trainer (I believe), so it could be a not-that-good move. For non-AI software like audalign I would go with the "classic" approach... Anyway, I've added some more resources in HyMPS project \ AUDIO \ Treatments \ Phasing if you need them. EDIT
Thanks for the mention! My university dissertation focused on comparing DSP versus AI-based auto phase alignment techniques. I explored a variety of methods, including:
If you're interested in AI approaches, I highly recommend checking out this repository: https://github.com/abargum/diff-apf - their work is a great resource. My own repo still needs a bit of cleanup (I haven't touched it since leaving university), but it contains all the tools and information necessary for building a DNN that takes phase features as input and outputs the all-pass filter parameters: https://github.com/harveyf2801/DNNAutoAlign. Please check out any of my 'AutoAlign' repos for examples of alignment techniques. Feel free to reach out if you have any questions or need further guidance!
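For readers who haven't met all-pass filters: they leave the magnitude spectrum untouched and only rotate phase, which is why they are a natural output for an auto phase-alignment model to predict. Below is a minimal sketch of a first-order all-pass section, H(z) = (a + z⁻¹)/(1 + a·z⁻¹), implemented directly from its difference equation; the function name and the coefficient value are illustrative, not taken from either repository.

```python
import numpy as np

def first_order_allpass(x, a):
    """First-order all-pass filter H(z) = (a + z^-1) / (1 + a*z^-1).

    The magnitude response is exactly 1 at every frequency; only the
    phase is rotated, by an amount that depends on frequency and a.
    Difference equation: y[n] = a*x[n] + x[n-1] - a*y[n-1].
    """
    y = np.zeros_like(x, dtype=float)
    x_prev = 0.0
    y_prev = 0.0
    for n in range(len(x)):
        y[n] = a * x[n] + x_prev - a * y_prev
        x_prev = x[n]
        y_prev = y[n]
    return y

# A 200 Hz tone keeps its amplitude but is shifted in phase
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t)
y = first_order_allpass(x, 0.5)
```

Cascading several such sections (or higher-order ones) and tuning `a` per channel gives frequency-dependent phase alignment without touching level balance, which is essentially the parameter space the DNN approaches above search over.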
Hi there,
dunno if it falls within the scope of the project, but often, after aligning, some phase/polarity "errors" can degrade the recording.
Here's a couple of interesting resources about those issues:
Dunno if these tools may help...
Last but not least, here's a very interesting research about phase recovery by @magronp:
Phase recovery with Bregman divergences for audio source separation
Hope that inspires!