Add synthetic recording test framework and improve frame converter reliability #1471
This PR introduces a comprehensive synthetic testing framework for the recording pipeline, allowing CI testing without physical hardware. It also includes several reliability and thread-safety improvements to the frame converter infrastructure.
Greptile Summary
This PR introduces a comprehensive synthetic testing framework for the recording pipeline and significantly improves frame converter reliability through thread-safety enhancements and performance optimizations.
Key Changes:
- `test_sources` module provides synthetic video and audio generation with multiple test patterns (SMPTE bars, color gradients, frame counters, waveforms), a validation framework to verify recording quality, A/V sync checking, and fragment integrity verification
- `SwscaleConverter` now uses `thread_local` contexts to avoid data races when used from multiple threads, `VideoToolboxConverter` protects session access with a `Mutex`, and all converters implement `convert_into()` for buffer reuse (see the sketch after this list)
- `VideoFramePool` reduces allocation overhead and `AsyncConverterPool` integrates pooling to reuse output buffers, a significant performance improvement for high-throughput scenarios
- The `HighThroughput` preset forces the software encoder when needed, proper color metadata (BT.709) is now set on encoded streams, and the H264 encoder uses frame pooling for conversion buffer reuse
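A minimal sketch of the per-thread context plus `convert_into()` pattern described above, assuming illustrative `ScaleContext`/`Frame` stand-ins rather than the crate's actual types:

```rust
use std::cell::RefCell;

// Illustrative stand-ins; the real converter wraps FFmpeg's SwsContext and
// the frame types from crates/frame-converter.
struct ScaleContext {
    src: (u32, u32),
    dst: (u32, u32),
}

struct Frame {
    width: u32,
    height: u32,
    data: Vec<u8>,
}

thread_local! {
    // Each thread lazily owns its own context, so concurrent conversions
    // never share mutable scaler state and no locking is required.
    static SCALE_CTX: RefCell<Option<ScaleContext>> = RefCell::new(None);
}

/// Convert `input` into the caller-supplied `output` buffer so a pooled
/// frame can be reused instead of allocating a new one per conversion.
fn convert_into(input: &Frame, output: &mut Frame) {
    SCALE_CTX.with(|cell| {
        let mut ctx = cell.borrow_mut();
        let stale = ctx.as_ref().map_or(true, |c| {
            c.src != (input.width, input.height) || c.dst != (output.width, output.height)
        });
        if stale {
            // Rebuild the per-thread context only when frame geometry changes.
            *ctx = Some(ScaleContext {
                src: (input.width, input.height),
                dst: (output.width, output.height),
            });
        }
        // Placeholder for the actual pixel-format/scale conversion: write
        // into `output.data` in place rather than returning a new frame.
        output.data.clear();
        output.data.extend_from_slice(&input.data);
    });
}
```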
Impact:
This enables automated testing of the recording pipeline in CI environments and resolves thread-safety issues that could cause data races in multi-threaded converter usage. The frame pooling significantly reduces memory allocation overhead in high-throughput scenarios.
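To illustrate how pooling cuts per-frame allocations, here is a hedged sketch of a reusable frame pool; the names and structure are assumptions and do not necessarily mirror the PR's `VideoFramePool` or `AsyncConverterPool`:

```rust
use std::sync::{Arc, Mutex};

struct PooledFrame {
    data: Vec<u8>,
}

#[derive(Clone)]
struct FramePool {
    free: Arc<Mutex<Vec<PooledFrame>>>,
    frame_size: usize,
}

impl FramePool {
    fn new(capacity: usize, frame_size: usize) -> Self {
        let frames: Vec<PooledFrame> = (0..capacity)
            .map(|_| PooledFrame { data: vec![0u8; frame_size] })
            .collect();
        Self { free: Arc::new(Mutex::new(frames)), frame_size }
    }

    /// Take a reusable frame, falling back to a fresh allocation if the
    /// pool is exhausted so callers never block waiting for a buffer.
    fn acquire(&self) -> PooledFrame {
        self.free
            .lock()
            .unwrap()
            .pop()
            .unwrap_or_else(|| PooledFrame { data: vec![0u8; self.frame_size] })
    }

    /// Hand a frame back so the next conversion reuses its allocation.
    fn release(&self, frame: PooledFrame) {
        self.free.lock().unwrap().push(frame);
    }
}
```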
Confidence Score: 4/5
The converter changes use `thread_local` and `Mutex` appropriately. Frame pooling is a solid optimization. The encoder selection logic with hardware capability estimation is thoughtful. One minor concern: the `SendableContext` wrapper uses unsafe code with `UnsafeCell`, but this is justified because `ThreadLocal` ensures each thread gets its own context. The validation framework is thorough. Overall, high-quality work that significantly enhances testability and reliability.
Pay closest attention to `crates/frame-converter/src/swscale.rs` (unsafe code with `UnsafeCell`), `crates/enc-ffmpeg/src/video/h264.rs` (encoder selection heuristics), and the validation logic in `crates/recording/src/test_sources/validation.rs`.
Important Files Changed
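For the unsafe code flagged above in `crates/frame-converter/src/swscale.rs`, a minimal sketch of why a per-thread `UnsafeCell` wrapper can be sound; `RawSwsContext` and the method names are illustrative stand-ins, not the crate's actual code:

```rust
use std::cell::UnsafeCell;

// Stand-in for a raw scaler context that is not Send/Sync on its own.
struct RawSwsContext {
    _state: u64,
}

struct SendableContext(UnsafeCell<RawSwsContext>);

// SAFETY: instances are only ever placed in per-thread storage (e.g. the
// thread_local crate's ThreadLocal), so no two threads can touch the same
// inner context even though Send is promised here.
unsafe impl Send for SendableContext {}

impl SendableContext {
    fn new(ctx: RawSwsContext) -> Self {
        Self(UnsafeCell::new(ctx))
    }

    /// Mutable access without a lock; sound only because each context is
    /// confined to exactly one thread by the surrounding storage.
    fn with_mut<R>(&self, f: impl FnOnce(&mut RawSwsContext) -> R) -> R {
        // SAFETY: per-thread confinement (see the Send impl) guarantees
        // exclusive access on the current thread.
        unsafe { f(&mut *self.0.get()) }
    }
}
```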
Sequence Diagram
```mermaid
sequenceDiagram
    participant Test as Test Runner
    participant Video as TestPatternVideoSource
    participant Audio as SyntheticAudioSource
    participant Pipeline as OutputPipeline
    participant Pool as AsyncConverterPool
    participant Converter as FrameConverter
    participant Encoder as H264Encoder
    participant Validator as RecordingValidator

    Test->>Video: setup(config, video_tx)
    Video->>Video: spawn video generator task
    Test->>Audio: setup(config, audio_tx)
    Audio->>Audio: spawn audio generator task
    Test->>Pipeline: start recording

    loop Frame Generation
        Video->>Video: generate_video_frame(pattern, frame_number)
        Video->>Pipeline: send(FFmpegVideoFrame)
        Audio->>Audio: generate_audio_samples(generator)
        Audio->>Pipeline: send(AudioFrame)
    end

    Pipeline->>Pool: submit(frame, sequence)
    Pool->>Pool: get frame from pool (if enabled)
    Pool->>Converter: convert_into(input, pooled_output)

    alt Thread-Safe Conversion
        Converter->>Converter: SwscaleConverter: get_or_create thread_local context
        Converter->>Converter: VideoToolboxConverter: lock session, convert
        Converter->>Converter: D3D11Converter: use dedicated device
    end

    Pool->>Pipeline: return ConvertedFrame
    Pipeline->>Encoder: queue_frame(converted)

    alt Hardware Selection
        Encoder->>Encoder: estimate_hw_encoder_max_fps()
        Encoder->>Encoder: requires_software_encoder()
        alt High Throughput Needed
            Encoder->>Encoder: use software encoder (libx264)
        else Hardware Can Keep Up
            Encoder->>Encoder: use hardware encoder (videotoolbox/nvenc)
        end
    end

    Encoder->>Encoder: reuse converted_frame_pool
    Encoder->>Encoder: encode with color metadata
    Encoder->>Pipeline: write encoded packet

    Test->>Pipeline: stop recording
    Pipeline->>Video: stop()
    Pipeline->>Audio: stop()
    Pipeline->>Encoder: flush()

    Test->>Validator: validate(output_path, test_config)
    Validator->>Validator: probe video/audio streams
    Validator->>Validator: check frame count, duration
    Validator->>Validator: verify A/V sync offset
    Validator->>Validator: validate fragment integrity
    Validator->>Test: return ValidationResult
```
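As a usage illustration of the diagram's `generate_video_frame(pattern, frame_number)` step, a self-contained sketch of one possible gradient pattern that varies per frame; this is an assumption about how such a pattern could look, not the PR's actual `TestPatternVideoSource` implementation:

```rust
/// Produce an RGBA frame with a horizontal gradient that scrolls with the
/// frame number, so every frame is distinguishable when the encoded output
/// is later validated for frame count and A/V sync.
fn generate_gradient_frame(width: u32, height: u32, frame_number: u32) -> Vec<u8> {
    let mut rgba = Vec::with_capacity((width * height * 4) as usize);
    for y in 0..height {
        for x in 0..width {
            // Red channel scrolls horizontally per frame; green encodes the row.
            let r = (((x + frame_number) % width) as u64 * 255 / width.max(1) as u64) as u8;
            let g = (y as u64 * 255 / height.max(1) as u64) as u8;
            rgba.extend_from_slice(&[r, g, 128, 255]);
        }
    }
    rgba
}
```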