
Scene shader support for screenspace textures #62130

Open · wants to merge 1 commit into master
Conversation

@lyuma (Contributor) commented Jun 17, 2022

Allows DEPTH_TEXTURE and SCREEN_TEXTURE to work for multiview from user-written shaders.

Introduces a new type texture2DScreen which automates passing the ViewIndex. There is currently no way for users to pass in these textures, but this type is used for the two internal textures. In the future, there may be reasons to have multiview viewport textures, which would then use this type if passed in.
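To make the intent concrete, here is a hypothetical sketch (not code from this PR) of how user-facing shader code would look: unchanged, since texture2DScreen and the ViewIndex plumbing stay internal to the compiler.

```glsl
shader_type spatial;

// Hypothetical usage sketch: existing screen-texture code keeps working
// unchanged under multiview (XR) rendering; the view index is filled in
// automatically by the shader compiler.
void fragment() {
    vec4 screen = texture(SCREEN_TEXTURE, SCREEN_UV);
    float depth = texture(DEPTH_TEXTURE, SCREEN_UV).r;
    ALBEDO = screen.rgb;
}
```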

@BastiaanOlij (Contributor)

I like this approach, though I'm not super fond of making the end user differentiate between a normal texture and a screen texture when writing custom shader code. I'm not sure how, but it should be possible to make this more internal. The only textures this applies to are the ones we add as inputs; we shouldn't have any user-defined uniforms for which we need to do this. If a stereo result is an input, it can be handled as a normal texture array, and our view index can simply be used.

@lyuma (Contributor, author) commented Jun 24, 2022

Yeah, I understand your concerns. The type isn't something the user can declare as a uniform, and we could avoid exposing it to the user entirely (you wouldn't be passing SCREEN_TEXTURE or DEPTH_TEXTURE to functions anyway).

The way I see it, we need to pick one of three options: pretend SCREEN_TEXTURE is a sampler2D, pretend it is a sampler2DArray, or pretend it is a custom type, as in this PR. (Note that we will need to special-case these two variables in the shader compiler to call the correct versions of the functions.)

If we pretend it's a sampler2D, then for the array version we need to write our own implementation of textureProj, since it doesn't exist for arrays. It also means you will be unable to sample the other views, but generally speaking you don't want that anyway.

If we pretend they are sampler2DArray, this changes the API so the user is forced to pass VIEW_INDEX. I fear this approach because users might pass 0 without thinking, which would cause lots of broken water shaders in stereo. If we instead provide helper functions that take vec2 arguments for sampler2DArray, that could lead to bugs where VIEW_INDEX is passed in as the slice index to an ordinary sampler2DArray.
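To illustrate the pitfall (hypothetical shader code, assuming SCREEN_TEXTURE were exposed as a plain sampler2DArray):

```glsl
// If SCREEN_TEXTURE were a plain sampler2DArray, users would have to pass the
// array layer themselves. Hardcoding 0 samples the left eye's capture in both
// eyes, producing the broken-water-shader-in-stereo failure described above.
vec4 broken = texture(SCREEN_TEXTURE, vec3(SCREEN_UV, 0.0));
vec4 ok     = texture(SCREEN_TEXTURE, vec3(SCREEN_UV, float(VIEW_INDEX)));
```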

Finally, there is the approach I arrived at in this PR, which uses some macros to pretend it is a custom type; but due to GLSL rules about calling the sampler constructor, it became a bit hacky and I had to pass them as separate arguments.

As for "We shouldn't have any user defined uniforms for which we need to do this", that is currently true, but there might be a time in the future when viewports could be captured to an XR viewport (it will sometimes be a texture2D and sometimes a texture2DArray); for example, planar mirrors are sometimes implemented this way. Given that this is not the direction Godot is taking, and that we don't currently have this functionality, it could always be implemented later if necessary.

In summary, if the custom type is not preferred, I think we can pretend it is a sampler2D, automatically fill in VIEW_INDEX, and implement textureProj(texture2DArray, sampler, uv), which would call texture(vec3(uv.xy / uv.w, VIEW_INDEX)).
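A sketch of what that textureProj replacement might look like (illustrative only, not the PR's actual code; the ViewIndex spelling is assumed):

```glsl
// Sketch: a textureProj-style helper for the multiview case, taking the
// texture object and sampler separately and filling in the current view's
// array layer automatically.
vec4 textureProj(texture2DArray t, sampler s, vec4 uv) {
    // Perspective divide, then select the layer for the current eye.
    return texture(sampler2DArray(t, s), vec3(uv.xy / uv.w, float(ViewIndex)));
}
```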

@lyuma (Contributor, author) commented Jun 24, 2022

I also wanted to mention that even if we avoid using a fake custom type, it will still require the texture and sampler to be passed as separate arguments, and the macro and extra overloads may still be needed. So pretending it is a sampler2D will likely not make the PR cleaner.

The issue is that in the Godot shader code, texture(SCREEN_TEXTURE, uv) might hypothetically be expanded to

texture(sampler2DArray(screen_texture, material_samplers[SAMPLER_LINEAR_WITH_MIPMAPS_CLAMP]), uv)

and we made an overload (which worked in the copy shader), such as:

vec4 texture(sampler2DArray samp, vec2 uv) {
    return texture(samp, vec3(uv, ViewIndex));
}

This would have worked fine if we used conventional samplers, or even images in a compute shader.

However, in our system this will actually not compile, because we use separate texture objects, and as illustrated briefly in GL_KHR_vulkan_glsl:

Constructors can then be used to combine a sampler and a texture at the
point of making a texture lookup call:

    texture(sampler2D(t, s), ...);

GLSLang is extremely strict on this point, and it must be only converted to sampler2D at the point of the builtin texture function call.

So that sadly means we need to pass t and s in as two arguments, and that's where we either need a sort of faux custom type + macro like I did, or the shader compiler needs to special-case SCREEN_TEXTURE and pass in the texture and sampler separately. Hope this explains some of the nuance in the approach.
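A rough sketch of the macro-based approach (illustrative only; the PR's actual macro and variable names may differ). The key point is that the shader compiler rewrites the lookup to pass the texture object and sampler as two separate arguments, and the sampler2DArray constructor appears only inside the builtin texture() call, the one place glslang allows it:

```glsl
// Hypothetical helper macro: combine texture + sampler at the lookup site
// and fill in the current view's array layer automatically.
#define godot_texture_screen(tex, smp, uv) \
    texture(sampler2DArray(tex, smp), vec3(uv, float(ViewIndex)))

// A user's texture(SCREEN_TEXTURE, screen_uv) might then be compiled to:
//   vec4 c = godot_texture_screen(screen_texture,
//       material_samplers[SAMPLER_LINEAR_WITH_MIPMAPS_CLAMP], screen_uv);
```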

@reduz (Member) commented Aug 7, 2022

We discussed this with @clayjohn and believe there is a better approach to it, which is to simply add a hint that the texture is a screen texture, which is much simpler than adding a new type.
