Nodes: Add GTAONode #28844
Conversation
I found a way to use it in advance for now: 566e560
} ) );

postProcessing.outputColorTransform = false;
Adding this just temporarily to better visually debug the shader.
Maybe you could try to get positionView from MRT too? Like:

scenePass.setMRT( mrt( {
	output: output,
	normal: transformedNormalWorld,
	view: positionView
} ) );

const scenePassView = scenePass.getTextureNode( 'view' );
We could find ways to optimize them too...
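For illustration, a rough sketch of how the AO pass could consume that extra MRT output instead of reconstructing positions from depth; the viewPositionNode hook and the aoPass name below are hypothetical and not something this PR necessarily exposes:

// Sketch only: feed the view-space positions sampled above into the AO node.
// 'viewPositionNode' is a hypothetical hook named here just for illustration.
aoPass.viewPositionNode = scenePassView;

// Inside the shader this would replace the per-sample
// getViewPosition( uv, depth ) reconstruction with a single texture fetch.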
If you use cameraProjectionMatrixInverse inside a QuadMesh.render() (PostProcessing), it will use the current camera (an OrthographicCamera) and not the camera used in pass().
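A minimal sketch of the workaround implied here, assuming the node binds the scene camera's matrix as an explicit uniform instead of relying on the global cameraProjectionMatrixInverse node (names follow the snippet further below; details may differ):

// Sketch: capture the scene camera's inverse projection matrix explicitly,
// so the quad pass does not pick up the orthographic camera used by PostProcessing.
this.cameraProjectionMatrixInverse = uniform( camera.projectionMatrixInverse );

// getViewPosition() can then read this.cameraProjectionMatrixInverse
// instead of the global cameraProjectionMatrixInverse node.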
Indeed, using positionView and the correct camera fixes the image!

I want to understand why the inline computation of positionView fails. I think this is related to the different clip space of WebGPU, but I'm still not sure.
@Mugen87 I forked your example to create a simplified version using

It seemed to me that
About getViewPosition:

const getViewPosition = tslFn( ( [ screenPosition, depth ] ) => {

	screenPosition = vec2( screenPosition.x, screenPosition.y.oneMinus() ).mul( 2.0 ).sub( 1.0 );

	// const clipSpacePosition = vec4( vec3( screenPosition, depth.mul( 2.0 ).sub( 1.0 ) ), 1.0 ); // webgl
	const clipSpacePosition = vec4( vec3( screenPosition, depth ), 1.0 ); // webgpu

	const viewSpacePosition = vec4( this.cameraProjectionMatrixInverse.mul( clipSpacePosition ) );

	return viewSpacePosition.xyz.div( viewSpacePosition.w );

} );
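For context: WebGPU's NDC depth range is [0, 1] while WebGL's is [-1, 1], which is why the depth remap only appears in the commented-out WebGL path. A small sketch of a backend-aware variant, where isWebGPU is an assumed flag rather than an existing API:

// Sketch only: pick the clip-space depth convention per backend at build time.
// 'isWebGPU' is an assumed boolean, not an actual three.js API.
const ndcDepth = isWebGPU ? depth : depth.mul( 2.0 ).sub( 1.0 );
const clipSpacePosition = vec4( vec3( screenPosition, ndcDepth ), 1.0 );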
I was hoping we could add just a single SSAO effect based on GTAO so we have fewer things to maintain compared to

However, I'm also okay if we start with a simple SSAO and then enhance the implementation to GTAO. We also need a port of

Sidenote: I would favor using
That would be great. Other effects will benefit from such a helper.
So close now... but I think there is still an issue in
I agree, we could reduce to 8 bit and use directionToColor like the webgpu_mrt example too. But what intrigues me about this PR is that applying AO to the beauty pass doesn't seem correct to me; that wouldn't deal with emissive and transparent objects in the best way. My main incentive was to study a way to improve this part.
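For illustration only, a sketch of the kind of composition hinted at here, where AO modulates the lit contribution rather than the full beauty pass; the 'lit' and 'emissive' MRT outputs and the aoPass node are assumptions, not part of this PR:

// Sketch: apply AO only to the non-emissive lighting, then add emissive back untouched.
const litPass = scenePass.getTextureNode( 'lit' );           // hypothetical MRT output
const emissivePass = scenePass.getTextureNode( 'emissive' ); // hypothetical MRT output

postProcessing.outputNode = litPass.mul( aoPass ).add( emissivePass );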
This is somewhat related: #27475
I'm afraid I'm doing something wrong with

TSL:

const ao = tslFn( () => {

	const depth = sampleDepth( uvNode );
	depth.greaterThanEqual( 1.0 ).discard();

	let ao = float( 0 ).toVar();

	loop( { start: int( 0 ), end: int( 3 ), type: 'int', condition: '<' }, ( { i } ) => {

		const angle = float( i ).div( float( 3 ) ).mul( PI );
		ao.addAssign( cos( angle ) );

	} );

	ao = clamp( ao.div( 3 ), 0, 1 );

	return vec4( vec3( ao ), 1.0 );

} );

The loops should compute an angle which is the basis for computing AO samples. However, the angle values differ between both programs, which explains some visual differences in the final shaders.
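As a quick sanity check (plain JavaScript, added here only to pin down the values both loops should produce):

// Expected angles for i = 0, 1, 2: 0, PI / 3 and 2 * PI / 3.
// Their cosines sum to 1.0 + 0.5 - 0.5 = 1.0, so ao ends up as clamp( 1 / 3, 0, 1 ) ≈ 0.333.
let ao = 0;

for ( let i = 0; i < 3; i ++ ) {

	const angle = ( i / 3 ) * Math.PI;
	ao += Math.cos( angle );

}

ao = Math.min( Math.max( ao / 3, 0 ), 1 ); // 0.3333...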
I have temporarily added 724117d so it's possible to compare

The house should have the same grayscale but the TSL-based version is darker.
@@ -0,0 +1,268 @@

import TempNode from '../core/TempNode.js';
import { texture } from '../accessors/TextureNode.js';
import { textureSize } from '../accessors/TextureSizeNode.js';
import { uv } from '../accessors/UVNode.js';
import { addNodeElement, nodeObject, tslFn, mat3, vec2, vec3, vec4, float, int, If } from '../shadernode/ShaderNode.js';
import { DataTexture } from '../../textures/DataTexture.js';
import { Vector2 } from '../../math/Vector2.js';
import { Vector3 } from '../../math/Vector3.js';
import { PI, cos, sin, pow, clamp, abs, max, mix, sqrt, acos, dot, normalize, cross } from '../math/MathNode.js';
import { div, mul, add, sub } from '../math/OperatorNode.js';

Check notice — Code scanning / CodeQL: "Unused variable, import, function or class" on the textureSize, ShaderNode, MathNode and OperatorNode import lines.

// const sampleTexture = ( uv ) => textureNode.uv( uv );
const sampleDepth = ( uv ) => this.depthNode.uv( uv ).x;
const sampleNoise = ( uv ) => this.noiseNode.uv( uv );

const getSceneUvAndDepth = tslFn( ( [ sampleViewPos ] ) => {

	// ...

} );

const getViewPosition = tslFn( ( [ screenPosition, depth ] ) => {

Check notice — Code scanning / CodeQL: "Unused variable, import, function or class" on sampleNoise, getSceneUvAndDepth and getViewPosition.
@Mugen87 Try
Wow, that made the difference! Thanks for fixing! TBH, it is a bit unexpected that the previous code didn't work 😇. Can you explain why it wasn't working?
// y

const sampleSceneUvDepthY = getSceneUvAndDepth( viewPosition.sub( sampleViewOffset ) );
I don't know why, but I had to give the variables sampleSceneUvDepthY, sampleSceneViewPositionY and viewDeltaY unique names, otherwise the AO was incorrect (it took a while to figure that out^^).
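For illustration, this is the pattern being described, with a uniquely named set of variables per axis; the X-axis block is an assumption mirroring the Y-axis line from the diff and may differ from the final code:

// Sketch: give each axis its own uniquely named variables instead of reusing one set.
const sampleSceneUvDepthX = getSceneUvAndDepth( viewPosition.add( sampleViewOffset ) );
const sampleSceneViewPositionX = getViewPosition( sampleSceneUvDepthX.xy, sampleSceneUvDepthX.z );
const viewDeltaX = sampleSceneViewPositionX.sub( viewPosition );

const sampleSceneUvDepthY = getSceneUvAndDepth( viewPosition.sub( sampleViewOffset ) );
const sampleSceneViewPositionY = getViewPosition( sampleSceneUvDepthY.xy, sampleSceneUvDepthY.z );
const viewDeltaY = sampleSceneViewPositionY.sub( viewPosition );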
I'll implement the AO blend in a different PR and the denoise effect in another one.
Amazing! <3

Couldn't make it work without your help! 🙌
Related issue: -
Description
This PR adds GTAONode so we can use SSAO in post processing.
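For context, a minimal usage sketch of the setup this enables, following the MRT pattern discussed above; the ao() entry point and its signature are assumptions here and may differ from the final API:

// Sketch of a post-processing setup with the new AO node (entry point and signature assumed).
const scenePass = pass( scene, camera );
scenePass.setMRT( mrt( {
	output: output,
	normal: transformedNormalWorld
} ) );

const scenePassColor = scenePass.getTextureNode( 'output' );
const scenePassNormal = scenePass.getTextureNode( 'normal' );
const scenePassDepth = scenePass.getTextureNode( 'depth' );

const aoPass = ao( scenePassDepth, scenePassNormal, camera ); // hypothetical helper wrapping GTAONode

const postProcessing = new PostProcessing( renderer );
postProcessing.outputNode = aoPass.mul( scenePassColor );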