
Conversation

@chenosaurus

  • Replace the slow CPU conversion step with a GPU shader


Copilot AI left a comment

Pull Request Overview

This PR introduces GPU-accelerated YUV to RGB video frame conversion for LiveKit's Unity SDK. The change replaces the previous CPU-based RGBA conversion with an optimized shader-based approach that processes YUV420 planar data directly on the GPU.

Key changes:

  • Added YUV to RGB conversion shader using BT.709 limited range color space conversion
  • Refactored VideoStream to support both GPU and CPU conversion paths with fallback mechanism
  • Changed output texture from Texture2D to RenderTexture to support the GPU rendering pipeline (a rough sketch of this flow follows below)
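For illustration, a minimal sketch of how the GPU path with a CPU fallback might be wired up on the C# side. The class, field, and shader names below are assumptions for this sketch, not the actual VideoStream.cs implementation:

using UnityEngine;

// Hypothetical helper mirroring the described approach: upload the Y/U/V planes as
// R8 textures and let the YuvToRgb shader convert them into a RenderTexture, with a
// CPU fallback when the shader or material is unavailable.
public class YuvToRgbConverter
{
    private readonly Material _yuvMaterial;           // built from the YuvToRgb shader
    private readonly Texture2D _texY, _texU, _texV;   // single-channel planes from the decoder
    public RenderTexture Output { get; }

    public YuvToRgbConverter(int width, int height)
    {
        var shader = Shader.Find("Hidden/LiveKit/YuvToRgb"); // shader name assumed
        _yuvMaterial = shader != null ? new Material(shader) : null;
        _texY = new Texture2D(width, height, TextureFormat.R8, false);
        _texU = new Texture2D(width / 2, height / 2, TextureFormat.R8, false);
        _texV = new Texture2D(width / 2, height / 2, TextureFormat.R8, false);
        Output = new RenderTexture(width, height, 0, RenderTextureFormat.ARGB32);
    }

    public void Convert(byte[] yPlane, byte[] uPlane, byte[] vPlane)
    {
        if (_yuvMaterial == null)
        {
            // CPU fallback: convert YUV420 to RGBA on the CPU as before, then blit
            // the resulting Texture2D into Output.
            return;
        }
        // GPU path: upload the three planes and let the shader do the BT.709 conversion.
        _texY.LoadRawTextureData(yPlane); _texY.Apply(false);
        _texU.LoadRawTextureData(uPlane); _texU.Apply(false);
        _texV.LoadRawTextureData(vPlane); _texV.Apply(false);
        _yuvMaterial.SetTexture("_TexY", _texY);
        _yuvMaterial.SetTexture("_TexU", _texU);
        _yuvMaterial.SetTexture("_TexV", _texV);
        Graphics.Blit(null, Output, _yuvMaterial); // conversion runs entirely on the GPU
    }
}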

Reviewed Changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

Summary per file:
Runtime/Shaders/YuvToRgb.shader: New HLSL shader implementing BT.709 YUV to RGB color space conversion
Runtime/Shaders/YuvToRgb.shader.meta: Unity metadata for shader asset
Runtime/Shaders.meta: Unity metadata for Shaders folder
Runtime/Scripts/VideoStream.cs: Refactored to use RenderTexture with GPU shader conversion and CPU fallback
LICENSE.txt.meta: Unity metadata for license file

@chenosaurus chenosaurus requested a review from Copilot November 7, 2025 23:25

Copilot AI left a comment

Pull Request Overview

Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.


HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

Copilot AI Nov 7, 2025

The shader uses HLSLPROGRAM/ENDHLSL but includes "UnityCG.cginc", which is a CG/HLSL include file. While this may work in Unity, the include should typically be "UnityShaderVariables.cginc" or a pure HLSL include when using HLSLPROGRAM. Consider using CGPROGRAM/ENDCG instead of HLSLPROGRAM/ENDHLSL for better compatibility with UnityCG.cginc, or switch to proper HLSL includes if using HLSLPROGRAM.

Suggested change:
- #include "UnityCG.cginc"
+ #include "UnityShaderVariables.cginc"

@ladvoc ladvoc (Contributor) left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

On-device testing was successful, just a few minor comments.

{
    public delegate void FrameReceiveDelegate(VideoFrame frame);
-   public delegate void TextureReceiveDelegate(Texture2D tex2d);
+   public delegate void TextureReceiveDelegate(Texture tex);
Contributor

I think this constitutes a minor breaking API change—are we able to use Texture2D here? If not, we can note this in the release notes.
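For illustration, a hedged sketch of the caller-side impact; the RawImage wiring and handler name below are assumptions, not SDK code:

using UnityEngine;
using UnityEngine.UI;

public class VideoSurface : MonoBehaviour
{
    [SerializeField] private RawImage _target;

    // Matches the new TextureReceiveDelegate(Texture tex) signature. The texture is
    // now a RenderTexture rather than a Texture2D, so callers that cast to Texture2D
    // or read its pixels directly need to adapt; assigning the Texture to a material
    // or RawImage keeps working.
    public void OnTextureReceived(Texture tex)
    {
        _target.texture = tex;
    }
}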

    private VideoStreamInfo _info;
    private bool _disposed = false;
    private bool _dirty = false;
+   private bool _useGpuShader = true;
Contributor

Do you think we should expose an API to control this setting?

@chenosaurus (Author)

maybe I should just remove this, not sure if we will encounter any use cases where the system doesn't have some sort of GPU.
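For reference, if the flag were kept and exposed, a minimal sketch inside VideoStream might look like this (the property name UseGpuConversion is hypothetical):

    // Lets integrators force the CPU path, e.g. for headless or GPU-less environments.
    public bool UseGpuConversion
    {
        get => _useGpuShader;
        set => _useGpuShader = value;
    }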

    return o;
}

float3 yuvToRgb709Limited(float y, float u, float v)
Contributor

I believe you should be able to use half here, which will save bandwidth and boost performance further.

Assuming you are using the legacy rendering pipeline:

inline half3 YUV709Limited_to_RGB(half y, half u, half v)
{
    half c = y - half(16.0/255.0);
    half d = u - half(128.0/255.0);
    half e = v - half(128.0/255.0);

    half Y = half(1.16438356) * c;
    half3 rgb;
    rgb.r = Y + half(1.79274107) * e;
    rgb.g = Y - half(0.21324861) * d - half(0.53290933) * e;
    rgb.b = Y + half(2.11240179) * d;
    return saturate(rgb);
}

half4 frag(v2f i) : SV_Target
{
    half y = tex2D(_TexY, i.uv).r;
    half u = tex2D(_TexU, i.uv).r; // U,V textures are W/2 x H/2
    half v = tex2D(_TexV, i.uv).r;
    return half4(YUV709Limited_to_RGB(y, u, v), 1.0h);
}
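For reference, those constants are the standard BT.709 limited-range expansion with inputs normalized to [0, 1]:

R = 1.16438 * (Y - 16/255) + 1.79274 * (V - 128/255)
G = 1.16438 * (Y - 16/255) - 0.21325 * (U - 128/255) - 0.53291 * (V - 128/255)
B = 1.16438 * (Y - 16/255) + 2.11240 * (U - 128/255)

Here 1.16438 = 255/219 rescales the limited [16, 235] luma range, and the chroma coefficients follow from Kr = 0.2126, Kb = 0.0722 scaled by 255/224.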

chenosaurus and others added 3 commits November 19, 2025 09:18
