
Nodes: Add GTAONode. #28844

Merged: Mugen87 merged 23 commits into mrdoob:dev on Jul 11, 2024

Conversation

Mugen87 (Collaborator) commented Jul 10, 2024

Related issue: -

Description

This PR adds GTAONode so we can use SSAO in post processing.
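
For context, here is a hedged usage sketch of how the node might be wired into a PostProcessing chain. The ao() helper name and its signature are only assumed from the API discussion later in this thread, and the scene, camera and postProcessing setup from the WebGPU node examples is taken for granted:

// assumed setup: scene, camera and a PostProcessing instance already exist
const scenePass = pass( scene, camera );
scenePass.setMRT( mrt( { output: output, normal: transformedNormalWorld } ) );

const scenePassColor = scenePass.getTextureNode();
const scenePassNormal = scenePass.getTextureNode( 'normal' );
const scenePassDepth = scenePass.getTextureNode( 'depth' );

// assumed signature; see the scenePass.ao() vs. ao( scenePass ) discussion below
const aoPass = ao( scenePassDepth, scenePassNormal, camera );

postProcessing.outputNode = aoPass.mul( scenePassColor );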

github-actions bot commented Jul 10, 2024

📦 Bundle size

Full ESM build, minified and gzipped.

Filesize dev: 683.5 kB (169.2 kB) | Filesize PR: 683.5 kB (169.2 kB) | Diff: +0 B

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies.

Filesize dev: 460.7 kB (111.1 kB) | Filesize PR: 460.7 kB (111.1 kB) | Diff: +0 B

sunag (Collaborator) commented Jul 10, 2024

I found a way for us to advance for now: 566e560

Mugen87 (Collaborator, Author) commented Jul 10, 2024

It seems the basic output of getViewPosition() is not correct. If you visualize the result via return vec4( viewPosition, 1.0 ); and compare it to the original version, they look different.

GTAONode:

[screenshot]

GTAOPass:

[screenshot]

I'm not sure yet about the root cause but it could be related to differences in the projection matrix and its inverse or depth values and their range.

} ) );
postProcessing.outputColorTransform = false;
Mugen87 (Collaborator, Author):

Adding this just temporarily to better visually debug the shader.

sunag (Collaborator) Jul 10, 2024:

Maybe you could try getting positionView from MRT too? Like:

scenePass.setMRT( mrt( {
	output: output,
	normal: transformedNormalWorld,
	view: positionView
} ) );

const scenePassView = scenePass.getTextureNode( 'view' );

We could find ways to optimize them too...

sunag (Collaborator) Jul 10, 2024:

If you use cameraProjectionMatrixInverse inside a QuadMesh.render() (PostProcessing), it will use the current camera (the OrthographicCamera) and not the camera used in pass().
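
A minimal sketch of that workaround, assuming a camera variable holding the scene camera (the uniform name and import path are illustrative and may differ per three.js revision):

import { PerspectiveCamera } from 'three';
import { uniform } from 'three/nodes'; // 'three/tsl' in newer revisions

const camera = new PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 100 );

// cameraProjectionMatrixInverse resolves to whatever camera renders the fullscreen quad
// (an OrthographicCamera during PostProcessing), so bind the scene camera's matrix explicitly
const sceneCameraProjectionMatrixInverse = uniform( camera.projectionMatrixInverse );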

Mugen87 (Collaborator, Author) Jul 10, 2024:

Indeed, using positionView and the correct camera fixes the image!

I want to understand why the inline computation of positionView fails. I think this is related to the different clip space of WebGPU but I'm still not sure.

sunag (Collaborator) commented Jul 11, 2024

@Mugen87 I forked your example to create a simplified version using SSAO. I'm having some positive results, although I think it may take days to finish.

It seemed to me a question of whether const aoPass = scenePass.ao(); is better, or ao( scenePass ).

[screenshot]

sunag (Collaborator) commented Jul 11, 2024

About getViewPosition: it's related to NDC. The Z-range in WebGPU is [ 0, 1 ] while in WebGL it's [ -1, 1 ], so depth.mul( 2.0 ).sub( 1.0 ) makes sense for WebGL NDC but not for WebGPU. I will add some TSL function for this.

const getViewPosition = tslFn( ( [ screenPosition, depth ] ) => {

	screenPosition = vec2( screenPosition.x, screenPosition.y.oneMinus() ).mul( 2.0 ).sub( 1.0 );

	//const clipSpacePosition = vec4( vec3( screenPosition, depth.mul( 2.0 ).sub( 1.0 ) ), 1.0 ); // webgl
	const clipSpacePosition = vec4( vec3( screenPosition, depth ), 1.0 ); // webgpu
	const viewSpacePosition = vec4( this.cameraProjectionMatrixInverse.mul( clipSpacePosition ) );

	return viewSpacePosition.xyz.div( viewSpacePosition.w );

} );

Mugen87 (Collaborator, Author) commented Jul 11, 2024

> I forked your example to create a simplified version using SSAO. I'm having some positive results, although I think it may take days to finish.

I was hoping we could add just a single SSAO effect based on GTAO so we have fewer things to maintain compared to EffectComposer, where we have GTAO, SAO and SSAO.

However, I'm also okay if we start with a simple SSAO and then enhance the implementation to GTAO. We also need a port of PoissonDenoiseShader, otherwise the AO from a raw SSAO looks too noisy. A simple blur could help at the beginning as well.

Sidenote: I would favor using getViewPosition() in this case instead of a separate output attachment since we should find a good balance between the number of MRT outputs and inline computations. If we save too much data in textures, effects become too bandwidth-heavy, which slows them down, especially on mobile devices (see #28749 (comment)).

Mugen87 (Collaborator, Author) commented Jul 11, 2024

> I will add some TSL function for this.

That would be great. Other effects will benefit from such a helper.

Mugen87 (Collaborator, Author) commented Jul 11, 2024

So close now... but I think there is still an issue in getSceneUvAndDepth(). It does the reverse operation of getViewPosition(): instead of computing the view position from uv and depth, it computes the uv and depth for a given view position. I guess it requires a similar update to getViewPosition().
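
For reference, a minimal sketch of what a WebGPU-aware getSceneUvAndDepth() might look like. The body is only an assumption based on the GTAOPass GLSL and the getViewPosition() fix above, and it is subject to the same quad-camera caveat for cameraProjectionMatrix:

const getSceneUvAndDepth = tslFn( ( [ sampleViewPos ] ) => {

	// project the view-space position back into clip space
	const sampleClipPos = cameraProjectionMatrix.mul( vec4( sampleViewPos, 1.0 ) );

	// perspective divide; xy maps from NDC [ -1, 1 ] to uv [ 0, 1 ] with a flipped y
	const sampleNdc = sampleClipPos.xyz.div( sampleClipPos.w );
	const sampleUv = vec2( sampleNdc.x.mul( 0.5 ).add( 0.5 ), sampleNdc.y.mul( 0.5 ).add( 0.5 ).oneMinus() );

	// unlike WebGL, the depth stays in [ 0, 1 ], so no extra remap of the sampled scene depth is needed
	return vec3( sampleUv, sampleDepth( sampleUv ) );

} );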

sunag (Collaborator) commented Jul 11, 2024

> I would favor using getViewPosition() in this case instead of a separate output attachment since we should find a good balance between the number of MRT

I agree, we could reduce it to 8 bit and use directionToColor like the webgpu_mrt example too.

But what's intriguing me about this PR is that applying AO to the beauty pass doesn't seem correct to me; that wouldn't handle emissive and transparent objects in the best way. My main incentive was to study a way to improve this part.
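
A minimal sketch of that packing idea, assuming the directionToColor()/colorToDirection() helpers used by the webgpu_mrt example (an 8-bit normal attachment would additionally need the matching texture type on the render target):

scenePass.setMRT( mrt( {
	output: output,
	normal: directionToColor( transformedNormalWorld ) // pack the [ -1, 1 ] direction into a [ 0, 1 ] color
} ) );

// unpack again where the AO pass samples the normal
const scenePassNormal = colorToDirection( scenePass.getTextureNode( 'normal' ) );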

Mugen87 (Collaborator, Author) commented Jul 11, 2024

This is somewhat related: #27475

Mugen87 (Collaborator, Author) commented Jul 11, 2024

I'm afraid I'm doing something wrong with loop(). When I transform the following short main() from GLSL to TSL, I get a different color output.
GLSL

void main() {
	float depth = getDepth(vUv.xy);
	if (depth >= 1.0) {
		discard;
		return;
	}

	float ao = 0.0;
	for (int i = 0; i < 3; ++i) {
		
		float angle = float(i) / float(3) * PI;
		ao += cos(angle);
	}

	ao = clamp(ao / float(3), 0., 1.);

	gl_FragColor = vec4(vec3(ao), 1.0);
}

TSL

const ao = tslFn( () => {

	const depth = sampleDepth( uvNode );

	depth.greaterThanEqual( 1.0 ).discard();

	let ao = float( 0 ).toVar();

	loop( { start: int( 0 ), end: int( 3 ), type: 'int', condition: '<' }, ( { i } ) => {

		const angle = float( i ).div( float( 3 ) ).mul( PI );
		ao.addAssign( cos( angle ) );

	} );

	ao = clamp( ao.div( 3 ), 0, 1 );

	return vec4( vec3( ao ), 1.0 );

} );

Both loops should compute an angle which is the basis for computing AO samples. However, the angle values differ between the two programs, which explains some visual differences in the final shaders.

Mugen87 (Collaborator, Author) commented Jul 11, 2024

I have temporarily added 724117d so it's possible to compare webgpu_postprocessing_ao and webgl_postprocessing_gtao.

The house should have the same grayscale but the TSL-based version is darker.

@@ -0,0 +1,268 @@ (new file; Code scanning / CodeQL notices)

import { textureSize } from '../accessors/TextureSizeNode.js';

Note: Unused import textureSize.

import { addNodeElement, nodeObject, tslFn, mat3, vec2, vec3, vec4, float, int, If } from '../shadernode/ShaderNode.js';

Note: Unused imports If, mat3.

import { PI, cos, sin, pow, clamp, abs, max, mix, sqrt, acos, dot, normalize, cross } from '../math/MathNode.js';

Note: Unused imports abs, acos, cross, dot, max, mix, normalize, pow, sin, sqrt.

import { div, mul, add, sub } from '../math/OperatorNode.js';

Note: Unused imports add, div, mul, sub.

const sampleNoise = ( uv ) => this.noiseNode.uv( uv );

Note: Unused variable sampleNoise.

const getSceneUvAndDepth = tslFn( ( [ sampleViewPos ] ) => {

Note: Unused variable getSceneUvAndDepth.

const getViewPosition = tslFn( ( [ screenPosition, depth ] ) => {

Note: Unused variable getViewPosition.

sunag (Collaborator) commented Jul 11, 2024

@Mugen87 Try ao.assign( clamp( ao.div( 3 ), 0, 1 ) ); instead of ao = clamp( ao.div( 3 ), 0, 1 );

Mugen87 (Collaborator, Author) commented Jul 11, 2024

Wow, that made the difference! Thanks for fixing!

TBH, it is a bit unexpected that the previous code didn't work 😇. Can you explain why it isn't working?

Mugen87 (Collaborator, Author) commented Jul 11, 2024

Finally 🎉
[screenshot]

@Mugen87 Mugen87 marked this pull request as ready for review July 11, 2024 16:45

// y

const sampleSceneUvDepthY = getSceneUvAndDepth( viewPosition.sub( sampleViewOffset ) );
Mugen87 (Collaborator, Author) Jul 11, 2024:

I don't know why, but I had to give the variables sampleSceneUvDepthY, sampleSceneViewPositionY and viewDeltaY unique names, otherwise the AO was incorrect (it took a while to figure that out^^).

@Mugen87 Mugen87 added this to the r167 milestone Jul 11, 2024
Mugen87 (Collaborator, Author) commented Jul 11, 2024

I'll implement the AO blend in a different PR and the denoise effect in another one.

@Mugen87 Mugen87 merged commit 3230889 into mrdoob:dev Jul 11, 2024
10 of 11 checks passed
sunag (Collaborator) commented Jul 11, 2024

Amazing! <3

Mugen87 (Collaborator, Author) commented Jul 11, 2024

Couldn't make it work without your help! 🙌
