
Post-processing, screen-reading shaders create black rectangles in VR #77060

Open
Ashtreighlia opened this issue May 14, 2023 · 3 comments

@Ashtreighlia

Ashtreighlia commented May 14, 2023

Godot version

4.0.2-stable

System information

Windows 11, Intel Core i7-12650H, Nvidia RTX3070 - Driver 531.41 - Vulkan

Issue description

I'm experiencing issues when applying a custom post-processing shader in VR. Specifically, I'm trying to use a nearest-neighbour downscaler for a pixelated look, but when it is applied to the XR camera, a black rectangle appears in the top-left corner of the "screen"; its size depends on the previous size of the game window. I'm not sure whether this is expected behaviour or whether I'm missing something.

Expected Behaviour (rendered via standard 3D camera)

The nearest-neighbor downscaler should be applied to the screen without any additional artifacts or distortions.
3D_processed

Actual Behaviour (rendered via XR camera)

A black rectangle appears in the top left corner of the screen.
VR_processed

Godot forum

I've already created a thread on the Godot forum, but haven't received any pointers there, which leads me to believe this is worth a GitHub issue ^^

Steps to reproduce

  • Apply a custom post-processing shader (code provided below) to an XR camera in a VR environment using a CanvasLayer and a ColorRect.
  • Observe the black rectangle that appears in the top left corner of the screen when launched in VR.

Scene setup

Screenshot 2023-05-07 201436

Settings for the ColorRect

Screenshot 2023-05-07 201459

Screen-reading shader code

uniform sampler2D SCREEN_TEXTURE : hint_screen_texture, repeat_disable, filter_nearest;

void fragment() {
    vec2 screen_size = 1.0 / SCREEN_PIXEL_SIZE;
    // Convert UV coordinates (0-1) to pixel coordinates (e.g. 0-1920, 0-1080),
    // snap them down to the nearest multiple of 4, then convert back to UV coordinates.
    vec2 uv = vec2(floor((SCREEN_UV.x * screen_size.x) / 4.0) / screen_size.x,
            floor((SCREEN_UV.y * screen_size.y) / 4.0) / screen_size.y) * 4.0;
    // Average the 16 texels of each 4x4 px block (1/16 = 0.0625 weight per tap).
    vec3 col = texture(SCREEN_TEXTURE, uv).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x, 0.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(0.0, SCREEN_PIXEL_SIZE.y)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x, SCREEN_PIXEL_SIZE.y)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 2.0, 0.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 2.0, SCREEN_PIXEL_SIZE.y)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 2.0, SCREEN_PIXEL_SIZE.y * 2.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(0.0, SCREEN_PIXEL_SIZE.y * 2.0)).xyz * 0.0625; // fixed: y offset used SCREEN_PIXEL_SIZE.x
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x, SCREEN_PIXEL_SIZE.y * 2.0)).xyz * 0.0625; // fixed: same x/y typo
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 3.0, 0.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 3.0, SCREEN_PIXEL_SIZE.y)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 3.0, SCREEN_PIXEL_SIZE.y * 2.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(0.0, SCREEN_PIXEL_SIZE.y * 3.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x, SCREEN_PIXEL_SIZE.y * 3.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 2.0, SCREEN_PIXEL_SIZE.y * 3.0)).xyz * 0.0625;
    col += texture(SCREEN_TEXTURE, uv + vec2(SCREEN_PIXEL_SIZE.x * 3.0, SCREEN_PIXEL_SIZE.y * 3.0)).xyz * 0.0625;
    COLOR.xyz = col;
}
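For reference, the 16 taps above can be written more compactly as a nested loop. A functionally equivalent sketch of the fragment function (same uniform declaration assumed):

```glsl
void fragment() {
    vec2 screen_size = 1.0 / SCREEN_PIXEL_SIZE;
    // Snap UV to the top-left texel of its 4x4 pixel block, as above.
    vec2 uv = floor(SCREEN_UV * screen_size / 4.0) * 4.0 / screen_size;
    // Average the 16 texels of the block.
    vec3 col = vec3(0.0);
    for (int x = 0; x < 4; x++) {
        for (int y = 0; y < 4; y++) {
            col += texture(SCREEN_TEXTURE, uv + vec2(float(x), float(y)) * SCREEN_PIXEL_SIZE).xyz;
        }
    }
    COLOR.xyz = col / 16.0;
}
```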

If any additional info is needed, I'm happy to supply more details.
Many thanks in advance! :3

Minimal reproduction project

N/A

@lyuma
Contributor

lyuma commented May 15, 2023

Screen reading shaders are not currently supported in stereo/multiview in Godot 4.0.

See my PR #62130 for a possible implementation. The patch has some conflicts that would need to be resolved to work on the current build of Godot. Let me know if you'd like to try this approach but are blocked on the conflicts, and I can spend some time updating the PR this week.

My approach has not yet been reviewed or approved for Godot 4.x. I wrote the PR this way because it was the simplest way to solve the problem today, but there are open questions about long-term maintainability, and it would be up to the rendering and XR teams to decide how best to approach this.

Oh, also... note that the way you're doing it, with a canvas and a ColorRect (a Control node), won't work in the headset anyway. You can't use Control nodes in VR because they are rendered after the 3D scene is drawn. Instead, you should use a Sprite3D or a quad mesh positioned in 3D, possibly relative to the XRCamera3D so it stays in front of you. So this bug report isn't exactly accurate.

Would it be possible to replicate the issue in the headset and verify it happens there? (Show that the problem happens in the headset mirror / Display VR View)

@Ashtreighlia
Author

Thanks for the quick reply!

I've looked a bit into your pull request, but I haven't read enough of Godot's source code to make much sense of it ^^ I'll try it out in the future, but I haven't compiled Godot from source yet and don't have enough time to get into that right now.

Regarding the CanvasLayer/ColorRect issue, I can't quite follow you there: what I'm trying to do is post-processing "after the fact", which would require the entire scene to be rasterised beforehand (if I'm not mistaken). I've looked a bit into Sprite3D in conjunction with a SubViewport (which I guess is what you're suggesting), but wouldn't that eliminate the stereoscopic effect, or does that "2D sprite" get rendered for each eye individually?

What do you mean by "replicate the issue in the headset" and "headset mirror / Display VR View"?

@BastiaanOlij
Contributor

BastiaanOlij commented May 18, 2023

You can work around the limitation by using a spatial shader on a full-screen quad. You can override the vertex shader to ensure the quad always covers the full screen.

  • Create a MeshInstance3D as a child of the camera and move it slightly in front of the camera.
  • Create a QuadMesh and size it to 2 x 2 meters.
  • Create a ShaderMaterial.

Use the following template for your shader material:

shader_type spatial;
render_mode depth_test_disabled, skip_vertex_transform, unshaded, cull_disabled;

void vertex() {
    POSITION = vec4(VERTEX, 1.0);
}

void fragment() {
    // Implement your fragment shader as before
}
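The steps above could also be wired up in script. A GDScript sketch (node and resource paths are illustrative, not from this thread):

```gdscript
# Sketch of the full-screen quad setup described above.
# Assumes this script sits on a node with an XRCamera3D child named "XRCamera3D",
# and that the shader lives at the hypothetical path below.
func _ready() -> void:
    var quad := MeshInstance3D.new()
    var quad_mesh := QuadMesh.new()
    quad_mesh.size = Vector2(2.0, 2.0)  # 2 x 2 meters
    var mat := ShaderMaterial.new()
    mat.shader = load("res://post_process.gdshader")  # hypothetical path
    quad_mesh.material = mat
    quad.mesh = quad_mesh
    quad.position = Vector3(0.0, 0.0, -0.5)  # slightly in front of the camera
    $XRCamera3D.add_child(quad)
```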

Note also that you can't rely on the screen size: rendering in VR happens at a higher, scaled resolution, which is controlled by SteamVR. In 4.1 a new property is introduced that gives you more control over this resolution: #73558
You could (ab)use it to lower the resolution considerably and create the pixelated result, with a bonus performance boost.
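A sketch of that idea in GDScript, assuming Godot 4.1+ and the `render_target_size_multiplier` property on `XRInterface` introduced by the PR linked above:

```gdscript
# Sketch (assumes Godot 4.1+): lower the XR render target resolution for a
# pixelated look via render_target_size_multiplier.
func _ready() -> void:
    var xr_interface: XRInterface = XRServer.find_interface("OpenXR")
    if xr_interface:
        # Set before initializing the interface so it takes effect.
        xr_interface.render_target_size_multiplier = 0.25  # quarter resolution per axis
        if xr_interface.initialize():
            get_viewport().use_xr = true
```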
