Currently, every time we render anything 2D (that includes 2D things projected into 3D!) we use a depth offset.
This has a lot of drawbacks:
* z-fighting if the depth offset is not big enough
* weird layering artifacts if the depth offset is too big (extreme when there are a lot of objects layered, e.g. history on images!)
* interferes with position, and therefore also with picking and other things
* ...
We need a first-class "this is 2D (for a given 2D surface)" concept with an (integer?) depth offset.
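As a rough illustration (the type and field names here are hypothetical, not an actual Rerun API), such a first-class tag could look something like this:

```rust
/// Hypothetical sketch of a first-class "this is 2D" tag on renderables.
/// `SurfaceId` and all names below are made up for illustration.
type SurfaceId = u64;

enum DepthLayer {
    /// Regular 3D object; depth comes straight from the projection.
    Scene3D,
    /// 2D object on a given surface, layered by an integer z-index
    /// relative to other objects on the same surface.
    Surface2D { surface: SurfaceId, z_index: i32 },
}
```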
Objects that are 2D need to be rendered in a special way - there are different ideas around on how to do this, most of them involving rendering 2D objects separately.
My personal favorite so far is to not use separate render passes for 2D, but instead:
* pass all positions and matrices as normal, with no z offsets (i.e. massive z-fighting if nothing else is done)
* every time we pass a world matrix (-> point/line batches, mesh instances), we additionally pass an integer z-index (or "no z-index")
* in the vertex shader, use that z-index to determine a depth offset that is as small as possible (see the sketch below)
* this requires us to abandon the Depth24Plus depth format and instead be very deliberate about how depth values are calculated and stored; we want to apply the depth offset as late as possible, i.e. directly on the vec4 position output of the vertex shader
The depth offsets would be managed more or less directly by the scene loading code - we may need some awareness of which surface an offset goes to, but given that the offsets will be as small as possible on a high-precision depth buffer, it may not play a big role if we offset by some factor below 1000 (citation needed ;))
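To make the vertex-shader step concrete, here is a minimal sketch in Rust of what the offset math could look like (the real thing would live in WGSL; the function name and the ULP-stepping strategy are assumptions, not a settled design):

```rust
/// Apply an integer z-index as a minimal depth offset directly on the
/// clip-space position output of a vertex shader.
///
/// Assumes a high-precision float depth buffer (e.g. Depth32Float) and a
/// positive post-divide depth, so that stepping the depth by whole ULPs
/// (units in the last place) yields the smallest representable offsets.
fn apply_depth_offset(mut clip_pos: [f32; 4], z_index: i32) -> [f32; 4] {
    // Depth after the perspective divide; this is what lands in the buffer.
    let depth = clip_pos[2] / clip_pos[3];
    // Step the depth by `z_index` ULPs. For positive f32 values the bit
    // pattern is monotonic in the value, so integer bit arithmetic works.
    let stepped = f32::from_bits((depth.to_bits() as i32 + z_index) as u32);
    // Write the offset depth back, pre-multiplied so that the perspective
    // divide performed by the rasterizer yields exactly `stepped`.
    clip_pos[2] = stepped * clip_pos[3];
    clip_pos
}
```

Whether a positive z-index moves an object toward or away from the camera under this scheme depends on the depth convention in use (reversed-z or not).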
Since by now we're doing several draw calls for points & lines (in "line/point batches"), we can probably use a depth bias in all current use cases, just as @jondo2010 suggested not too long ago.
I.e. we don't need to code it up manually in the shader (which should be equivalent) and, more importantly, the minimum depth step is handled by the API.
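For reference, a sketch of what the API-side bias looks like in wgpu (the depth format and compare function below are assumptions, not Rerun's actual configuration):

```rust
// Depth-stencil state with an API-side depth bias in wgpu. The bias is part
// of the pipeline descriptor; the driver guarantees a minimal depth step.
fn depth_stencil_with_bias(z_index: i32) -> wgpu::DepthStencilState {
    wgpu::DepthStencilState {
        format: wgpu::TextureFormat::Depth32Float, // assumed format
        depth_write_enabled: true,
        depth_compare: wgpu::CompareFunction::GreaterEqual, // assuming reversed-z
        stencil: wgpu::StencilState::default(),
        bias: wgpu::DepthBiasState {
            constant: z_index, // integer bias with an API-guaranteed minimal step
            slope_scale: 0.0,  // no slope-dependent bias needed for flat 2D layers
            clamp: 0.0,
        },
    }
}
```

Note that the bias is baked into the `RenderPipeline` itself, which leads directly to the objection below.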
... I forgot one of the arguments against using the WebGPU-side depth bias that I brought up myself: it would require a new RenderPipeline for every possible depth bias, which is too costly.
See also the recent Slack discussion:
https://rerunio.slack.com/archives/C02UN79KGMU/p1671635380178349