Conversation

@richiemcilroy richiemcilroy commented Dec 30, 2025

Summary

  • Refactored frame processing and shared buffer handling for better performance
  • Added frame timing metadata to frame pipeline
  • Optimized ImageData object reuse in socket utility
  • Added stride correction worker for image data processing
  • Improved WebGPU frame state reset and buffer borrowing
  • Refactored export UI and frame rendering logic

Originally merged to the wrong branch in #1472.

Greptile Summary

  • Implements a comprehensive video playback performance overhaul, replacing single-frame rendering with a queue-based system that buffers up to 5 frames, synchronizes them against the playback clock, and drops frames that fall more than 40 ms behind
  • Moves stride correction (row-padding removal) from Rust to TypeScript workers and reduces memory pressure through zero-copy buffer access patterns, cached ImageData objects, and Arc-based reference counting in the FFmpeg decoder
  • Adds frame timing metadata (frame_number, target_time_ns) throughout the pipeline, from Rust rendering to the frontend frame workers, to enable precise frame synchronization and playback timing control
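The queue-plus-late-drop behaviour summarized above can be sketched in a few lines. This is a minimal illustration only, not the PR's actual frame-worker code; the QueuedFrame shape, the selectFrame name, and the 40 ms constant are assumptions for the example:

```typescript
// Sketch: pick the newest queued frame that is due, drop frames that
// are more than LATE_THRESHOLD_MS behind the clock, keep future frames.
interface QueuedFrame {
  frameNumber: number;
  targetTimeMs: number;
}

const LATE_THRESHOLD_MS = 40;

function selectFrame(queue: QueuedFrame[], nowMs: number): QueuedFrame | null {
  let selected: QueuedFrame | null = null;
  const kept: QueuedFrame[] = [];
  for (const frame of queue) {
    const diffMs = frame.targetTimeMs - nowMs;
    if (diffMs < -LATE_THRESHOLD_MS) continue; // too late: drop the frame
    if (diffMs <= 0) {
      selected = frame; // due: later due frames replace earlier ones
      continue;
    }
    kept.push(frame); // not yet due: stay queued
  }
  queue.length = 0;
  queue.push(...kept);
  return selected;
}
```

The real worker additionally handles seeks and fallback frames; this sketch shows only the selection-and-drop policy.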

Important Files Changed

  • apps/desktop/src/utils/frame-worker.ts: Completely refactored frame rendering from single-frame to queue-based system with timing synchronization; critical for smooth video playback
  • apps/desktop/src-tauri/src/editor_window.rs: Replaced watch channels with MPSC channels and moved stride correction to frontend; affects core frame pipeline architecture
  • apps/desktop/src/utils/socket.ts: Major buffer size increase (4×8MB to 6×16MB) and ImageData caching optimization; potential memory impact needs review
  • apps/desktop/src-tauri/src/frame_ws.rs: Added MPSC WebSocket server and frame metadata; contains potential mutex deadlock in new implementation
  • apps/desktop/src/utils/stride-correction-worker.ts: New worker for stride correction moved from Rust; architectural shift requiring careful performance validation

Confidence score: 3/5

  • This PR introduces significant architectural changes to video frame processing that could impact performance and stability if not thoroughly tested
  • Score reflects complex timing synchronization logic, memory management changes (buffer size increases, zero-copy patterns), and architectural shifts (Rust to TypeScript processing) that require careful validation
  • Pay close attention to frame_ws.rs for the potential mutex deadlock issue and socket.ts for the substantial buffer size increase that could affect memory usage

Sequence Diagram

sequenceDiagram
    participant User
    participant Editor as Editor Component
    participant EditorInstance as EditorInstance
    participant Renderer as Renderer
    participant FrameWS as Frame WebSocket
    participant Worker as Frame Worker
    participant Decoder as Video Decoder
    
    User->>Editor: "Load project for editing"
    Editor->>EditorInstance: "create editor instance"
    EditorInstance->>Decoder: "spawn video decoders"
    EditorInstance->>Renderer: "initialize renderer"
    EditorInstance->>FrameWS: "create frame websocket"
    FrameWS->>Worker: "init shared buffer & canvas"
    
    User->>Editor: "seek to frame position"
    Editor->>EditorInstance: "emit render frame event"
    EditorInstance->>Decoder: "get frames for time"
    Decoder-->>EditorInstance: "return decoded frames"
    EditorInstance->>Renderer: "render frame with uniforms"
    Renderer->>Renderer: "composite layers (display, camera, cursor, effects)"
    Renderer-->>EditorInstance: "rendered frame data"
    EditorInstance->>FrameWS: "send frame via websocket"
    FrameWS->>Worker: "process frame with metadata"
    Worker->>Worker: "decode and queue frame"
    Worker-->>Editor: "frame rendered notification"
    
    User->>Editor: "start playback"
    Editor->>EditorInstance: "start playback with fps"
    EditorInstance->>EditorInstance: "spawn playback loop"
    loop Playback Loop
        EditorInstance->>Decoder: "get frames for current time"
        EditorInstance->>Renderer: "render frame"
        EditorInstance->>FrameWS: "send frame"
        FrameWS->>Worker: "process frame"
        Worker-->>Editor: "display frame"
        EditorInstance->>Editor: "update playhead position"
    end
    
    User->>Editor: "export video"
    Editor->>EditorInstance: "render video to channel"
    loop Export Loop
        EditorInstance->>Decoder: "get segment frames"
        EditorInstance->>Renderer: "render with export settings"
        Renderer-->>EditorInstance: "rendered frame"
        EditorInstance->>Editor: "export progress update"
    end

Context used:

  • Context from dashboard - CLAUDE.md (source)
  • Context from dashboard - AGENTS.md (source)

@richiemcilroy richiemcilroy merged commit 472847c into main Dec 30, 2025
16 of 17 checks passed

@greptile-apps greptile-apps bot left a comment


Additional Comments (1)

  1. apps/desktop/src-tauri/src/frame_ws.rs, line 58 (link)

    style: Inconsistent logging - both println! and tracing::info! are used for the same message

    Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

14 files reviewed, 5 comments

Edit Code Review Agent Settings | Greptile

Comment on lines +46 to +53
const result = correctionBuffer.slice(0, expectedLength);
const response: StrideCorrectionResponse = {
  type: "corrected",
  buffer: result.buffer,
  width,
  height,
};
self.postMessage(response, { transfer: [result.buffer] });

style: The .slice() operation creates a full copy of the corrected data, negating the performance benefits of buffer reuse. Consider using correctionBuffer.subarray(0, expectedLength).slice() or transfer the original buffer directly if the worker doesn't need it anymore.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Prompt To Fix With AI
This is a comment left during a code review.
Path: apps/desktop/src/utils/stride-correction-worker.ts
Line: 46:53

Comment:
**style:** The `.slice()` operation creates a full copy of the corrected data, negating the performance benefits of buffer reuse. Consider using `correctionBuffer.subarray(0, expectedLength).slice()` or transfer the original buffer directly if the worker doesn't need it anymore.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

How can I resolve this? If you propose a fix, please make it concise.
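As a hedged illustration of the tradeoff this comment describes (buffer names and sizes here are made up for the example): `.slice()` always allocates a fresh backing ArrayBuffer, which is safe to transfer but copies every corrected byte per frame, whereas `.subarray()` is a zero-copy view whose buffer cannot be transferred without also detaching the reusable correction buffer:

```typescript
// A reusable worker-side correction buffer, as in the snippet above.
const correctionBuffer = new Uint8Array(16).fill(7);
const expectedLength = 8;

// .slice() allocates a new backing ArrayBuffer: safe to transfer,
// but a full copy of the corrected bytes on every frame.
const copy = correctionBuffer.slice(0, expectedLength);

// .subarray() shares the original ArrayBuffer: no copy, but transferring
// view.buffer would detach correctionBuffer too, so the worker would have
// to allocate a replacement buffer after posting each frame.
const view = correctionBuffer.subarray(0, expectedLength);

console.log(copy.buffer === correctionBuffer.buffer); // false: new allocation
console.log(view.buffer === correctionBuffer.buffer); // true: shared storage
```

Which option wins depends on whether per-frame allocation of a replacement buffer is cheaper than per-frame copying for the frame sizes involved.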

Comment on lines +178 to +199
let frame_opt = {
    let mut rx = frame_rx.lock().await;
    tokio::select! {
        biased;
        msg = socket.recv() => {
            match msg {
                Some(Ok(Message::Close(_))) | None => {
                    tracing::info!("WebSocket closed");
                    break;
                }
                Some(Ok(_)) => {
                    continue;
                }
                Some(Err(e)) => {
                    tracing::error!("WebSocket error: {:?}", e);
                    break;
                }
            }
        },
        frame = rx.recv() => frame,
    }
};

logic: Potential deadlock: holding the mutex lock across the entire tokio::select! block could block other tasks trying to access the frame receiver. Is this intentional to ensure atomic processing of each frame, or could the lock scope be reduced?

Prompt To Fix With AI
This is a comment left during a code review.
Path: apps/desktop/src-tauri/src/frame_ws.rs
Line: 178:199

Comment:
**logic:** Potential deadlock: holding the mutex lock across the entire `tokio::select!` block could block other tasks trying to access the frame receiver. Is this intentional to ensure atomic processing of each frame, or could the lock scope be reduced?

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +436 to +444
for (let i = 0; i < frameQueue.length; i++) {
  const frame = frameQueue[i];
  const frameNum = frame.timing.frameNumber;

  const isSeek =
    playbackStartTime !== null &&
    lastRenderedFrameNumber >= 0 &&
    (frameNum < lastRenderedFrameNumber ||
      frameNum > lastRenderedFrameNumber + 30);

style: The seek detection logic uses a hardcoded threshold of 30 frames which may not work correctly for all frame rates. Should the seek detection threshold be based on frame rate or time duration instead of a fixed frame count?

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Prompt To Fix With AI
This is a comment left during a code review.
Path: apps/desktop/src/utils/frame-worker.ts
Line: 436:444

Comment:
**style:** The seek detection logic uses a hardcoded threshold of 30 frames which may not work correctly for all frame rates. Should the seek detection threshold be based on frame rate or time duration instead of a fixed frame count?

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

How can I resolve this? If you propose a fix, please make it concise.
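One way to act on this suggestion, sketched under assumed names (isSeekJump, SEEK_WINDOW_MS) rather than taken from the PR, is to size the forward window from the current frame rate and a fixed time budget instead of a hardcoded 30 frames:

```typescript
// Treat a jump as a seek if it goes backward, or forward by more than
// ~500 ms worth of frames at the current frame rate. The 500 ms window
// is an illustrative choice, not a value from the PR.
const SEEK_WINDOW_MS = 500;

function isSeekJump(
  lastRenderedFrameNumber: number,
  frameNum: number,
  fps: number,
): boolean {
  if (lastRenderedFrameNumber < 0) return false; // no prior frame yet
  const maxForwardFrames = Math.ceil((SEEK_WINDOW_MS / 1000) * fps);
  return (
    frameNum < lastRenderedFrameNumber ||
    frameNum > lastRenderedFrameNumber + maxForwardFrames
  );
}
```

At 60 fps this allows a 30-frame forward gap before flagging a seek, while at 30 fps it allows 15, matching the same wall-clock tolerance at both rates.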

fallbackIndex = i;
}

if (diffMs < -LATE_THRESHOLD_MS && !isSeek) {

logic: Frame dropping threshold of 40ms may be too aggressive for lower frame rates where frames naturally arrive >40ms apart. Should LATE_THRESHOLD_MS be adjusted based on the target frame rate to avoid dropping frames that aren't actually late?

Prompt To Fix With AI
This is a comment left during a code review.
Path: apps/desktop/src/utils/frame-worker.ts
Line: 466:466

Comment:
**logic:** Frame dropping threshold of 40ms may be too aggressive for lower frame rates where frames naturally arrive >40ms apart. Should LATE_THRESHOLD_MS be adjusted based on the target frame rate to avoid dropping frames that aren't actually late?

How can I resolve this? If you propose a fix, please make it concise.
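A frame-rate-aware threshold along the lines the comment suggests could look like the sketch below; the 1.5-interval multiplier and the function name are illustrative assumptions, with the PR's 40 ms kept as a floor:

```typescript
// Allow at least ~1.5 frame intervals of lateness before dropping,
// so low-fps content isn't penalized for its naturally larger gaps.
function lateThresholdMs(fps: number): number {
  const frameIntervalMs = 1000 / fps;
  return Math.max(40, frameIntervalMs * 1.5);
}
```

At 60 fps this keeps the existing 40 ms threshold (1.5 × 16.7 ms ≈ 25 ms, below the floor), while at 25 fps it relaxes to 60 ms so frames arriving a full interval apart are not dropped as "late".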
