
First class 2D rendering in re_renderer instead of depth offsetting #647

Closed
Wumpf opened this issue Dec 26, 2022 · 4 comments · Fixed by #680
Labels
enhancement New feature or request 🔺 re_renderer affects re_renderer itself

Comments


Wumpf commented Dec 26, 2022

Currently, every time we render anything 2D (that includes 2D things projected into 3D!) we use a depth offset.
This has a lot of drawbacks:

  • z-fighting if the depth offset is too small
  • weird layering artifacts if the depth offset is too big (extreme when there are a lot of objects layered, e.g. history on images!)
  • interferes with position, therefore also picking and other things
  • ...

We need a first-class "this is 2D (for a given 2D surface)" concept with an (integer?) depth offset.
Objects that are 2D need to be rendered in a special way - there are different ideas around on how to do this, most of them involving rendering 2D objects separately.

My personal favorite so far is to not use separate render passes for 2D, but instead:

  • pass all positions and matrices as normal, no z offsets (i.e. massive z-fighting if nothing is done)
  • every time we pass a world matrix (-> point/line batches, mesh instances), we additionally pass an integer z-index (or “no z-index”)
  • in the vertex shader, use the z-index to determine an as-small-as-possible depth offset
    • this requires us to abandon the 24Plus depth format and instead be very deliberate about how depth values are calculated and stored. We want to apply the depth offset as late as possible, i.e. directly on the vec4 position output of the vertex shader
      The depth offsets would be managed more or less directly by the scene loading - we may need some awareness of which surface an offset goes to, but given that the offsets will be as small as possible on a high-precision depth buffer, it may not play a big role if we offset by some factor below 1000 (citation needed ;))
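The "smallest possible depth offset" idea above can be sketched on the CPU side as stepping the post-divide NDC depth by whole ULPs (units in the last place), one per z-index. This is a hypothetical illustration, assuming a float depth buffer (e.g. Depth32Float), a standard [0, 1] depth range with a Less depth compare, and an illustrative function name - not re_renderer's actual API:

```rust
/// Nudge a clip-space depth by `z_index` ULPs of the post-divide NDC depth,
/// so each integer z-index maps to the smallest representable depth step.
/// Assumes a positive, normal `ndc_z` and a float depth buffer.
fn apply_depth_offset(clip_z: f32, clip_w: f32, z_index: i32) -> f32 {
    let ndc_z = clip_z / clip_w; // depth after the perspective divide

    // For positive, normal f32 values, decrementing the bit pattern steps
    // down by exactly one ULP, i.e. toward the camera with a Less compare.
    let bits = ndc_z.to_bits() as i32 - z_index;
    let offset_ndc_z = f32::from_bits(bits as u32);

    // Back to clip space, matching what a vertex shader output would carry.
    offset_ndc_z * clip_w
}
```

The same logic would live in the vertex shader in practice (applied to the vec4 position output, as described above); a per-ULP step like this is far below any factor-of-1000 budget, so many layers can stack without visible displacement.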

See also recent Slack discussion
https://rerunio.slack.com/archives/C02UN79KGMU/p1671635380178349

@Wumpf Wumpf added enhancement New feature or request 🔺 re_renderer affects re_renderer itself labels Dec 26, 2022

Wumpf commented Dec 26, 2022

Since by now we're doing several draw calls for points & lines (in "line/point batches"), we can probably use a depth bias in all current use cases, just as @jondo2010 suggested not too long ago.
I.e. we don't need to code it up manually in the shader (which should be equivalent), and more importantly, the minimum depth step is handled by the API.


Wumpf commented Dec 27, 2022

... I forgot one of the arguments against using the WebGPU-side depth bias that I brought up myself: it would require a new RenderPipeline for every possible depth bias value, which is too costly.
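For context on why this is costly: in wgpu the depth bias is part of the immutable pipeline description, not dynamic render-pass state. A fragment of the relevant configuration (field names per wgpu's `DepthBiasState`; surrounding pipeline setup omitted):

```rust
// The bias is baked into the pipeline at creation time via
// wgpu::DepthStencilState, so each distinct bias value would
// need its own RenderPipeline object.
let bias = wgpu::DepthBiasState {
    constant: 1,      // constant bias, in smallest-depth-step units
    slope_scale: 0.0, // no slope-scaled component
    clamp: 0.0,       // no clamping of the total bias
};
```

With per-batch z-indices passed to the vertex shader instead, one pipeline serves all offsets.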

@nikolausWest

@Wumpf: does 680 close this?


Wumpf commented Jan 5, 2023

Yes. It's marked in #680 as such and will auto-close with the PR :)

@Wumpf Wumpf closed this as completed in #680 Jan 5, 2023