Allow to switch audio device module #650
Conversation
///
/// This method must be called before the peer connection is initialized. Changing the module type after
/// initialization is not supported and will result in an error.
static func set(audioDeviceModuleType: AudioDeviceModuleType) throws {
It effectively means you need to call this before accessing AudioManager.shared, so somewhere in App.init() etc.
Yes, at the moment it only works when called early. It needs to happen even before AudioManager.shared is accessed, since that initializes the peerConnection.
I mean, e.g. in our example app, even if you put it before the first explicit use of .shared, e.g. here:

// here
AudioManager.shared.onDeviceUpdate = { [weak self] _ in

it's not enough, as SwiftUI may create .shared for you (e.g. in the form of computed props).
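
A minimal sketch of the ordering being discussed, assuming the setter is exposed on AudioManager and that an enum case like .platformDefault exists (both are assumptions for illustration); the point is only that the call happens in App.init(), before anything, including a SwiftUI computed property, can touch AudioManager.shared:

import LiveKit
import SwiftUI

@main
struct ExampleApp: App {
    init() {
        // Choose the ADM type before AudioManager.shared is ever accessed,
        // since accessing it initializes the peer connection factory and
        // fixes the module type. `.platformDefault` is a hypothetical case
        // name used only for this sketch.
        try? AudioManager.set(audioDeviceModuleType: .platformDefault)
    }

    var body: some Scene {
        WindowGroup {
            ContentView() // placeholder for the app's root view
        }
    }
}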
RTC.audioDeviceModule.outputDevices.map { AudioDevice(ioDevice: $0) }
#else
[]
Out of curiosity, is this a no-op because of RTC limitations on iOS?
Yes, this is a limitation at the moment.
I think we could simulate it by manipulating the AVAudioSession output port, but I'm not sure.
We could add a comment about it, just for future generations.
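
A sketch of such a comment, assuming the surrounding condition is #if os(macOS) (the excerpt above doesn't show it):

#if os(macOS)
// macOS: the ADM exposes enumerable output devices.
RTC.audioDeviceModule.outputDevices.map { AudioDevice(ioDevice: $0) }
#else
// iOS: WebRTC's ADM doesn't expose output-device enumeration; routing is
// driven by AVAudioSession, so an empty list is returned intentionally.
[]
#endif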
var bypassVoiceProcessing: Bool = false
}

static let pcFactoryState = StateSync(PeerConnectionFactoryState())
Nit: you could probably replace StateSync with an actor here if the mutations are local.
Would that require access to be async?
Yes
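
A rough sketch of the actor alternative, illustrating the trade-off raised above: actor isolation replaces StateSync's locking, but every read and write becomes an await. All names besides bypassVoiceProcessing are hypothetical:

struct PeerConnectionFactoryState {
    var bypassVoiceProcessing: Bool = false
}

actor PCFactoryStore {
    private var state = PeerConnectionFactoryState()

    // Mutations are serialized by actor isolation instead of a lock.
    func setBypassVoiceProcessing(_ value: Bool) {
        state.bypassVoiceProcessing = value
    }

    // Reads are isolated too, so callers must await them.
    func currentState() -> PeerConnectionFactoryState { state }
}

// Usage, now necessarily async:
// await store.setBypassVoiceProcessing(true)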
LGTM
Allows using the legacy (WebRTC default) AudioDeviceModule instead of the AVAudioEngine-based AudioDeviceModule.
Currently, when using the legacy ADM, AVAudioSession category switching is not handled automatically, so manually switching to .playAndRecord is necessary when using the mic.
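
A sketch of the manual category switch this implies when the legacy ADM is selected; the mode and options below are illustrative choices, not values mandated by the SDK:

import AVFoundation

// With the legacy ADM, the SDK no longer switches the AVAudioSession
// category automatically, so configure it before enabling the microphone.
func prepareSessionForMicrophone() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)
}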