
Conversation


@BuffMcBigHuge BuffMcBigHuge commented Dec 3, 2025

Spout Send & Receive

This PR adds bidirectional Spout support to Scope, enabling real-time video texture sharing with Spout-compatible applications like TouchDesigner, OBS, and other Windows-based media tools.

Overview

Spout is a Windows-only protocol for sharing OpenGL textures between applications with minimal latency. This implementation adds both Spout input (receiving frames from external apps) and Spout output (sending processed frames to external apps).

Features

Spout Input (Receiver)

  • Receive video frames from Spout senders
  • Configurable sender name matching (empty string connects to any active sender)
  • Automatic buffer resizing when sender resolution changes
  • Frame rate limiting (30 FPS target) to match pipeline processing rate (Needs review)
  • Integrated as a new input mode alongside "Video" and "Camera"

Spout Output (Sender)

  • Send processed frames to Spout receivers
  • Customizable sender name (default: "Scope")
  • Independent of WebRTC output queue
  • Automatic format conversion (RGB/RGBA, float/uint8)
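The format conversion bullet can be illustrated with a small numpy helper that normalizes either accepted layout to RGBA uint8. This is a sketch of the idea only; the function name and exact conversion rules are assumptions, not the PR's code:

```python
import numpy as np

def to_rgba_uint8(frame: np.ndarray) -> np.ndarray:
    """Normalize a frame to RGBA uint8 (illustrative sketch).

    Accepts RGB or RGBA input, either float in [0, 1] or uint8 in [0, 255].
    """
    if frame.dtype != np.uint8:
        # Assume float in [0, 1]; scale and clip into byte range
        frame = np.clip(frame * 255.0, 0, 255).astype(np.uint8)
    if frame.shape[-1] == 3:
        # Append an opaque alpha channel to make the frame RGBA
        alpha = np.full(frame.shape[:-1] + (1,), 255, dtype=np.uint8)
        frame = np.concatenate([frame, alpha], axis=-1)
    return frame
```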

Technical Details

  • Platform: Windows only (SpoutGL requires Windows)
  • Dependencies: SpoutGL and pyopengl (optional, gracefully handled if missing)
  • Frame Format: RGB/RGBA uint8 arrays [0, 255] or float [0, 1]
  • Threading: Spout input runs in a dedicated daemon thread
  • Error Handling: Comprehensive error handling with fallback behavior on non-Windows platforms
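The optional-dependency handling can follow the usual guarded-import pattern. A minimal sketch, assuming SpoutGL's Python bindings expose `SpoutReceiver.setReceiverName` as in the library's examples; the helper name is hypothetical and this is not the PR's actual guard:

```python
import platform

# SpoutGL is Windows-only; treat it as an optional dependency
try:
    import SpoutGL
    SPOUT_AVAILABLE = platform.system() == "Windows"
except ImportError:
    SpoutGL = None
    SPOUT_AVAILABLE = False

def create_spout_receiver(sender_name: str = ""):
    """Return a configured SpoutGL receiver, or None when Spout is unavailable."""
    if not SPOUT_AVAILABLE:
        return None
    receiver = SpoutGL.SpoutReceiver()
    # An empty name connects to any active sender
    receiver.setReceiverName(sender_name)
    return receiver
```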

Usage

Receiving from Spout (Input)

  1. Select "Spout" as the input mode
  2. Optionally specify a sender name (leave empty to connect to any active sender)
  3. Start streaming; frames will be received from the Spout sender

Sending to Spout (Output)

  1. Open Settings panel
  2. Enable "Spout Output" toggle
  3. Optionally customize the sender name (default: "Scope")
  4. Processed frames will be available to Spout receivers
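For the sending side, a minimal loop with the SpoutGL Python bindings might look like the following. This is a sketch, not the PR's implementation: the function name is invented, and the `sendImage` signature (raw bytes, width, height, GL format, flip flag, host FBO) follows SpoutGL's published examples. It is guarded so it degrades gracefully off Windows:

```python
import platform

def send_frames(frames, sender_name: str = "Scope") -> int:
    """Send RGBA uint8 frames to Spout receivers; return the count sent.

    Returns 0 immediately on non-Windows platforms or when SpoutGL/pyopengl
    are missing (hypothetical sketch, not the PR's code).
    """
    if platform.system() != "Windows":
        return 0
    try:
        import SpoutGL
        from OpenGL import GL
    except ImportError:
        return 0
    sender = SpoutGL.SpoutSender()
    sender.setSenderName(sender_name)
    sent = 0
    for frame in frames:  # each frame: numpy array, shape (H, W, 4), dtype uint8
        height, width = frame.shape[:2]
        sender.sendImage(frame.tobytes(), width, height, GL.GL_RGBA, False, 0)
        sent += 1
    sender.releaseSender()
    return sent
```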
Screen.Recording.2025-12-02.200831_10mb.mp4

@BuffMcBigHuge BuffMcBigHuge marked this pull request as draft December 3, 2025 01:29
Signed-off-by: BuffMcBigHuge <marco@bymar.co>
… API, framerates.

Signed-off-by: BuffMcBigHuge <marco@bymar.co>
```python
try:
    # Convert torch tensor to numpy if needed
    if isinstance(frame, torch.Tensor):
        frame = frame.detach().cpu().numpy()
```

@yondonfu yondonfu Dec 4, 2025


My best guess on what is causing the perf hit/slowdown is this movement of each frame to host CPU mem. SpoutGL might ultimately convert the numpy array into a texture in GPU mem that is shared with the receiver (I haven't looked into what it does under the hood yet), but if we are doing a CPU mem transfer here I suspect that would cause non-negligible overhead, as that is typically slow, especially when we have many frames per second.

Ideally, we'd be able to keep the tensor (frame) in GPU mem (CUDA) the entire time. With SpoutGL, perhaps what we need is to convert that frame into an OpenGL texture while keeping it in GPU mem the entire time. I searched around and while it doesn't seem trivial, it seems doable. A couple of links I encountered that seem relevant:

https://gist.github.com/victor-shepardson/5b3d3087dc2b4817b9bffdb8e87a57c4

https://dev-discuss.pytorch.org/t/opengl-interoperability/2696

https://documen.tician.de/pycuda/gl.html

But...

Should first confirm that this is actually the cause by profiling how much time is getting eaten up by the steps taken here, e.g. the transfer to CPU mem + the send call that takes the numpy array.
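Confirming where the time goes can be as simple as wrapping each step in a timer; a throwaway helper along these lines (names are illustrative, not from the PR):

```python
import time

def timed(label: str, fn, *args, **kwargs):
    """Run fn, print its wall time, and return its result (quick profiling aid)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {(time.perf_counter() - start) * 1000.0:.2f} ms")
    return result

# Hypothetical use in the Spout send path:
#   np_frame = timed("gpu->cpu", lambda: frame.detach().cpu().numpy())
#   timed("sendImage", sender.sendImage, np_frame.tobytes(), w, h, GL.GL_RGBA, False, 0)
# Note: tensor.cpu() blocks until pending CUDA work on that tensor finishes,
# so the first timing includes the synchronization cost, which is what we want
# to measure here.
```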

On second thought, it does seem a bit weird that the memory transfer on its own causes such a noticeable perf hit (as shown in the video attached in my other comment), because the WebRTC flow also involves a GPU -> CPU mem transfer in order for the frames to be encoded into a video stream. And the WebRTC video is much smoother than the Spout output in TD right now. So, the memory transfer might still be slowing things down, but there might be a bigger issue somewhere else that is the primary cause of the Spout output in TD being choppier.

Collaborator Author


Great find - slipped over this!

Contributor


Ah, so the mem transfer is still incurring overhead, but I believe the real cause of the perf hit is described in this comment.


yondonfu commented Dec 4, 2025

2025-12-03.20-48-35.mp4

Noting that there is def a perf hit right now when sending via Spout and receiving in TD - see the above video. And see my other comment on my guess as to where the hit may be happening.

Signed-off-by: BuffMcBigHuge <marco@bymar.co>
@BuffMcBigHuge BuffMcBigHuge marked this pull request as ready for review December 8, 2025 23:06

@yondonfu yondonfu left a comment


This is looking good!

I mentioned this in a comment, but generally WDYT about using the Spout Sender/Receiver naming convention everywhere instead of Spout Input/Output? The former seems to be the convention in other apps, so we could follow it in the UI as well as in variable naming in the code.

Signed-off-by: BuffMcBigHuge <marco@bymar.co>
…d default output name, added Windows detection in UI.

Signed-off-by: BuffMcBigHuge <marco@bymar.co>

@yondonfu yondonfu left a comment


There are still some visual changes that didn't make it in, but LGTM for now and going to fixup on main.

@yondonfu yondonfu merged commit df51968 into main Dec 12, 2025
5 checks passed
@yondonfu yondonfu deleted the marco/feat/spout branch December 12, 2025 15:31