Allow switching audio device module #650

Merged: 22 commits, Apr 3, 2025
2 changes: 1 addition & 1 deletion LiveKitClient.podspec

@@ -14,7 +14,7 @@ Pod::Spec.new do |spec|

 spec.source_files = "Sources/**/*"

-spec.dependency("LiveKitWebRTC", "= 125.6422.26")
+spec.dependency("LiveKitWebRTC", "= 125.6422.28")
 spec.dependency("SwiftProtobuf")
 spec.dependency("Logging", "= 1.5.4")
 spec.dependency("DequeModule", "= 1.1.4")
2 changes: 1 addition & 1 deletion Package.swift

@@ -18,7 +18,7 @@ let package = Package(
     ],
     dependencies: [
         // LK-Prefixed Dynamic WebRTC XCFramework
-        .package(url: "https://github.com/livekit/webrtc-xcframework.git", exact: "125.6422.26"),
+        .package(url: "https://github.com/livekit/webrtc-xcframework.git", exact: "125.6422.28"),
         .package(url: "https://github.com/apple/swift-protobuf.git", from: "1.26.0"),
         .package(url: "https://github.com/apple/swift-log.git", from: "1.5.4"),
         .package(url: "https://github.com/apple/swift-collections.git", from: "1.1.0"),
2 changes: 1 addition & 1 deletion Package@swift-5.9.swift

@@ -20,7 +20,7 @@ let package = Package(
     ],
     dependencies: [
         // LK-Prefixed Dynamic WebRTC XCFramework
-        .package(url: "https://github.com/livekit/webrtc-xcframework.git", exact: "125.6422.26"),
+        .package(url: "https://github.com/livekit/webrtc-xcframework.git", exact: "125.6422.28"),
         .package(url: "https://github.com/apple/swift-protobuf.git", from: "1.26.0"),
         .package(url: "https://github.com/apple/swift-log.git", from: "1.6.2"), // 1.6.x requires Swift >=5.8
         .package(url: "https://github.com/apple/swift-collections.git", from: "1.1.0"),
2 changes: 1 addition & 1 deletion Package@swift-6.0.swift

@@ -20,7 +20,7 @@ let package = Package(
     ],
     dependencies: [
         // LK-Prefixed Dynamic WebRTC XCFramework
-        .package(url: "https://github.com/livekit/webrtc-xcframework.git", exact: "125.6422.26"),
+        .package(url: "https://github.com/livekit/webrtc-xcframework.git", exact: "125.6422.28"),
         .package(url: "https://github.com/apple/swift-protobuf.git", from: "1.26.0"),
         .package(url: "https://github.com/apple/swift-log.git", from: "1.6.2"), // 1.6.x requires Swift >=5.8
         .package(url: "https://github.com/apple/swift-collections.git", from: "1.1.0"),
55 changes: 55 additions & 0 deletions Sources/LiveKit/Audio/Manager/AudioManager+ModuleType.swift

@@ -0,0 +1,55 @@
/*
* Copyright 2025 LiveKit
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

#if swift(>=5.9)
internal import LiveKitWebRTC
#else
@_implementationOnly import LiveKitWebRTC
#endif

public enum AudioDeviceModuleType {
/// Use the AVAudioEngine-based AudioDeviceModule internally; usable on all platforms.
case audioEngine
/// Use WebRTC's default AudioDeviceModule internally, which uses AudioUnit on iOS and HAL APIs on macOS.
case platformDefault
}

extension AudioDeviceModuleType {
func toRTCType() -> RTCAudioDeviceModuleType {
switch self {
case .audioEngine: return RTCAudioDeviceModuleType.audioEngine
case .platformDefault: return RTCAudioDeviceModuleType.platformDefault
}
}
}

public extension AudioManager {
/// Sets the `AudioDeviceModuleType` that handles all audio input / output.
///
/// This method must be called before the peer connection is initialized. Changing the module type after
/// initialization is not supported and will result in an error.
///
/// Note: When using `.platformDefault`, AVAudioSession will not be managed automatically.
/// Be sure to set the session category before accessing the mic:
/// `try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [])`
static func set(audioDeviceModuleType: AudioDeviceModuleType) throws {

[Review thread]
Contributor: It effectively means you need to call this one before accessing AudioManager.shared, so somewhere in App.init() etc.

hiroshihorie (Member, Author), Apr 2, 2025: Yes, it only works when called early at the moment. It must come even before AudioManager.shared, since that access initializes the peerConnection.

Contributor: I mean that, e.g. in our example app, even if you put it before .shared, e.g. here:

        // here
        AudioManager.shared.onDeviceUpdate = { [weak self] _ in

it's not enough, as SwiftUI may create .shared for you (e.g. in the form of computed props).

hiroshihorie (Member, Author): Yes, it needs to be earlier at the moment. For our example app, it works here.
[Screenshot: 2025-04-03 14:36:10]

// Throw if pc factory is already initialized.
guard !RTC.pcFactoryState.isInitialized else {
throw LiveKitError(.invalidState, message: "Cannot set this property after the peer connection has been initialized")
}
RTC.pcFactoryState.mutate { $0.admType = audioDeviceModuleType }
}
}
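
Based on the thread above, a minimal usage sketch (the app type and its contents are hypothetical, not part of this PR): the call has to run before anything, including SwiftUI computed properties, touches AudioManager.shared, since that access initializes the peer connection factory.

import LiveKit
import SwiftUI

@main
struct ExampleApp: App {
    init() {
        // Must run before AudioManager.shared (and thus the peer connection
        // factory) is touched anywhere, per the review thread above.
        do {
            try AudioManager.set(audioDeviceModuleType: .platformDefault)
        } catch {
            // Thrown if the peer connection factory was already initialized.
            print("Failed to switch ADM type: \(error)")
        }
    }

    var body: some Scene {
        WindowGroup { Text("Example") }
    }
}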
55 changes: 48 additions & 7 deletions Sources/LiveKit/Audio/Manager/AudioManager.swift

@@ -106,10 +106,10 @@
public var localTracksCount: Int = 0
public var remoteTracksCount: Int = 0
public var isSpeakerOutputPreferred: Bool = true
public var customConfigureFunc: ConfigureAudioSessionFunc?

[CI annotation on line 109, repeated across all Build & Test jobs: 'ConfigureAudioSessionFunc' is deprecated]
public var sessionConfiguration: AudioSessionConfiguration?

public var trackState: TrackState {

[CI annotation on line 112, repeated across all Build & Test jobs: 'TrackState' is deprecated]
switch (localTracksCount > 0, remoteTracksCount > 0) {
case (true, false): return .localOnly
case (false, true): return .remoteOnly
@@ -163,21 +163,49 @@
public let defaultInputDevice = AudioDevice(ioDevice: LKRTCIODevice.defaultDevice(with: .input))

 public var outputDevices: [AudioDevice] {
+    #if os(macOS)
     RTC.audioDeviceModule.outputDevices.map { AudioDevice(ioDevice: $0) }
+    #else
+    []

[Review thread]
Contributor: Out of curiosity, is this a no-op because of RTC limitations on iOS?

hiroshihorie (Member, Author): Yes, this is a limitation at the moment. I think we could simulate this by manipulating the AVAudioSession output port, but I'm not sure.

Contributor: We could add a comment about it, just for future generations.

+    #endif
 }

 public var inputDevices: [AudioDevice] {
+    #if os(macOS)
     RTC.audioDeviceModule.inputDevices.map { AudioDevice(ioDevice: $0) }
+    #else
+    []
+    #endif
 }
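
As a follow-up to the thread above, a hedged sketch (hypothetical, not part of this PR) of how input enumeration might be approximated on iOS via AVAudioSession; whether output routing can be simulated via the session's output port remains an open question per the author.

#if os(iOS)
import AVFoundation

// Hypothetical helper, not in this PR: list input ports via AVAudioSession
// instead of the ADM, which reports no devices on iOS.
func availableInputPortNames() -> [String] {
    let session = AVAudioSession.sharedInstance()
    return (session.availableInputs ?? []).map(\.portName)
}
#endif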

 public var outputDevice: AudioDevice {
-    get { AudioDevice(ioDevice: RTC.audioDeviceModule.outputDevice) }
-    set { RTC.audioDeviceModule.outputDevice = newValue._ioDevice }
+    get {
+        #if os(macOS)
+        AudioDevice(ioDevice: RTC.audioDeviceModule.outputDevice)
+        #else
+        AudioDevice(ioDevice: LKRTCIODevice.defaultDevice(with: .output))
+        #endif
+    }
+    set {
+        #if os(macOS)
+        RTC.audioDeviceModule.outputDevice = newValue._ioDevice
+        #endif
+    }
 }

 public var inputDevice: AudioDevice {
-    get { AudioDevice(ioDevice: RTC.audioDeviceModule.inputDevice) }
-    set { RTC.audioDeviceModule.inputDevice = newValue._ioDevice }
+    get {
+        #if os(macOS)
+        AudioDevice(ioDevice: RTC.audioDeviceModule.inputDevice)
+        #else
+        AudioDevice(ioDevice: LKRTCIODevice.defaultDevice(with: .input))
+        #endif
+    }
+    set {
+        #if os(macOS)
+        RTC.audioDeviceModule.inputDevice = newValue._ioDevice
+        #endif
+    }
 }

public var onDeviceUpdate: OnDevicesDidUpdate? {
@@ -222,8 +250,21 @@
/// It is valid to toggle this at runtime; AudioEngine does not require a restart.
/// Defaults to `false`.
 public var isVoiceProcessingBypassed: Bool {
-    get { RTC.audioDeviceModule.isVoiceProcessingBypassed }
-    set { RTC.audioDeviceModule.isVoiceProcessingBypassed = newValue }
+    get {
+        if RTC.pcFactoryState.admType == .platformDefault {
+            return RTC.pcFactoryState.bypassVoiceProcessing
+        }
+
+        return RTC.audioDeviceModule.isVoiceProcessingBypassed
+    }
+    set {
+        guard !(RTC.pcFactoryState.read { $0.isInitialized && $0.admType == .platformDefault }) else {
+            log("Cannot set this property after the peer connection has been initialized when using non-AVAudioEngine audio device module", .error)
+            return
+        }
+
+        RTC.audioDeviceModule.isVoiceProcessingBypassed = newValue
+    }
 }

/// Bypass the Auto Gain Control of internal AVAudioEngine.
@@ -261,7 +302,7 @@
/// Audio buffers will flow into ``LocalAudioTrack/add(audioRenderer:)`` and ``capturePostProcessingDelegate``.
public func startLocalRecording() throws {
// Always unmute APM if muted by last session.
-    RTC.audioProcessingModule.isMuted = false
+    RTC.audioProcessingModule.isMuted = false // TODO: Possibly not required anymore with new libs
// Start recording on the ADM.
let result = RTC.audioDeviceModule.initAndStartRecording()
try checkAdmResult(code: result)
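
Per the doc comment above, toggling isVoiceProcessingBypassed at runtime is valid when the AVAudioEngine ADM is in use; a minimal hedged usage sketch (call sites are illustrative):

// Assumes the default .audioEngine ADM; with .platformDefault the value is
// captured at factory initialization and cannot change afterwards, per the
// guard in the setter above.
AudioManager.shared.isVoiceProcessingBypassed = true  // e.g. for music content
AudioManager.shared.isVoiceProcessingBypassed = false // restore echo cancellation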
17 changes: 16 additions & 1 deletion Sources/LiveKit/Core/RTC.swift

@@ -50,6 +50,14 @@ private class VideoEncoderFactorySimulcast: LKRTCVideoEncoderFactorySimulcast {
}

 actor RTC {
+    struct PeerConnectionFactoryState {
+        var isInitialized: Bool = false
+        var admType: AudioDeviceModuleType = .audioEngine
+        var bypassVoiceProcessing: Bool = false
+    }
+
+    static let pcFactoryState = StateSync(PeerConnectionFactoryState())

[Review thread]
Contributor: Nit: you could probably get rid of StateSync in favor of the actor here, if the mutations are local.

hiroshihorie (Member, Author): Will it require this to be async?

Contributor: Yes.

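For context on the nit above, a hedged sketch (hypothetical, not in this PR) of the actor-based alternative; every access would then need await, which is the trade-off the thread is pointing at:

// Hypothetical actor-based alternative to StateSync.
actor PeerConnectionFactoryStateActor {
    private(set) var isInitialized = false
    var admType: AudioDeviceModuleType = .audioEngine

    // Mutations stay isolated to the actor...
    func markInitialized() -> AudioDeviceModuleType {
        isInitialized = true
        return admType
    }
}

// ...but every read/write then requires await, even from synchronous code paths:
// let admType = await pcFactoryStateActor.markInitialized()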

static let h264BaselineLevel5CodecInfo: LKRTCVideoCodecInfo = {
// this should never happen
guard let profileLevelId = LKRTCH264ProfileLevelId(profile: .constrainedBaseline, level: .level5) else {
@@ -81,13 +89,20 @@ actor RTC {
static let audioSenderCapabilities = peerConnectionFactory.rtpSenderCapabilities(forKind: kRTCMediaStreamTrackKindAudio)

 static let peerConnectionFactory: LKRTCPeerConnectionFactory = {
+    // Update pc init lock
+    let (admType, bypassVoiceProcessing) = pcFactoryState.mutate {
+        $0.isInitialized = true
+        return ($0.admType, $0.bypassVoiceProcessing)
+    }
+
     logger.log("Initializing SSL...", type: Room.self)

     RTCInitializeSSL()

     logger.log("Initializing PeerConnectionFactory...", type: Room.self)

-    return LKRTCPeerConnectionFactory(bypassVoiceProcessing: false,
+    return LKRTCPeerConnectionFactory(audioDeviceModuleType: admType.toRTCType(),
+                                      bypassVoiceProcessing: bypassVoiceProcessing,
                                       encoderFactory: encoderFactory,
                                       decoderFactory: decoderFactory,
                                       audioProcessingModule: audioProcessingModule)
148 changes: 148 additions & 0 deletions Tests/LiveKitTests/Audio/AudioManagerTests.swift

@@ -0,0 +1,148 @@
/*
* Copyright 2025 LiveKit
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

@preconcurrency import AVFoundation
@testable import LiveKit
import LiveKitWebRTC
import XCTest

class AudioManagerTests: LKTestCase {
// Test legacy audio device module's startLocalRecording().
func testStartLocalRecordingLegacyADM() async throws {
// Use legacy ADM
try AudioManager.set(audioDeviceModuleType: .platformDefault)

// Ensure audio session category is `.playAndRecord`.
#if os(iOS) || os(tvOS) || os(visionOS)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [])
#endif

let recorder = try TestAudioRecorder()

let audioTrack = LocalAudioTrack.createTrack()
audioTrack.add(audioRenderer: recorder)

// Start recording
try AudioManager.shared.startLocalRecording()

// Record for 5 seconds...
try? await Task.sleep(nanoseconds: 5 * 1_000_000_000)

recorder.close()
AudioManager.shared.stopRecording()

// Play the recorded file...
let player = try AVAudioPlayer(contentsOf: recorder.filePath)
XCTAssertTrue(player.play(), "Failed to start audio playback")
while player.isPlaying {
try? await Task.sleep(nanoseconds: 1 * 100_000_000) // 100ms
}
}

// Confirm different behavior of Voice-Processing-Mute between macOS and other platforms.
func testConfirmGlobalVpMuteStateOniOS() async throws {
// Ensure audio session category is `.playAndRecord`.
#if !os(macOS)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [])
#endif

let e1 = AVAudioEngine()
try e1.inputNode.setVoiceProcessingEnabled(true)

let e2 = AVAudioEngine()
try e2.inputNode.setVoiceProcessingEnabled(true)

// e1, e2 both un-muted
XCTAssert(!e1.inputNode.isVoiceProcessingInputMuted)
XCTAssert(!e2.inputNode.isVoiceProcessingInputMuted)

// Mute e1, but e2 should be unaffected.
e1.inputNode.isVoiceProcessingInputMuted = true
XCTAssert(e1.inputNode.isVoiceProcessingInputMuted)

#if os(macOS)
// On macOS, e2 isn't affected by e1's muted state.
XCTAssert(!e2.inputNode.isVoiceProcessingInputMuted)
#else
// On other platforms, e2 is affected by e1's muted state.
XCTAssert(e2.inputNode.isVoiceProcessingInputMuted)
#endif
}

// The Voice-Processing-Input-Muted state appears to be a global state within the app.
// We make sure that after the Room gets cleaned up, this state is back to un-muted,
// since a lingering muted state would interfere with audio recording later in the app.
//
// Previous RTC libs would fail this test, since RTC was always invoking AudioDeviceModule::SetMicrophoneMuted(true)
func testVoiceProcessingInputMuted() async throws {
// Set VP muted state.
func setVoiceProcessingInputMuted(_ muted: Bool) throws {
let e = AVAudioEngine()
// VP always needs to be enabled to read / write the vp muted state
try e.inputNode.setVoiceProcessingEnabled(true)
e.inputNode.isVoiceProcessingInputMuted = muted
XCTAssert(e.inputNode.isVoiceProcessingInputMuted == muted)
print("Set vp muted to \(muted), and verified it is \(e.inputNode.isVoiceProcessingInputMuted)")
}

// Confirm if is VP muted.
func isVoiceProcessingInputMuted() throws -> Bool {
let e = AVAudioEngine()
// VP always needs to be enabled to read / write the vp muted state
try e.inputNode.setVoiceProcessingEnabled(true)
return e.inputNode.isVoiceProcessingInputMuted
}

// Ensure audio session category is `.playAndRecord`.
#if os(iOS) || os(tvOS) || os(visionOS)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [])
#endif

do {
// Should *not* be VP-muted at this point.
let isVpMuted = try isVoiceProcessingInputMuted()
print("isVpMuted: \(isVpMuted)")
XCTAssert(!isVpMuted)
}

let adm = AudioManager.shared

// Start recording, mic indicator should turn on.
print("Starting local recording...")
try adm.startLocalRecording()

// Wait for 3 seconds...
try? await Task.sleep(nanoseconds: 3 * 1_000_000_000)

// Set mute, mic indicator should turn off.
adm.isMicrophoneMuted = true

// Wait for 3 seconds...
try? await Task.sleep(nanoseconds: 3 * 1_000_000_000)

try adm.stopLocalRecording()

// Wait for 1 second...
try? await Task.sleep(nanoseconds: 1 * 1_000_000_000)

do {
// Should *not* be VP-muted at this point.
let isVpMuted = try isVoiceProcessingInputMuted()
print("isVpMuted: \(isVpMuted)")
XCTAssert(!isVpMuted)
}
}
}