Interoperability between WebGL and WebGPU within WebXR? #7
Comments
This is a good point, and something that I think we've only touched on briefly in past discussions IIRC. +@cabanier, as I would love to get his thoughts on this too!

From the point of view of OpenXR, which pretty much all of the non-Apple devices will be using, there is a (very reasonable) requirement that only a single graphics API be used at a time. See the "Session Creation" section of the spec.

In an ideal world it would be nice to be able to intermix WebGL and WebGPU layers for the sake of library composability. ie: The main scene is rendered with WebGPU, but there's a window displayed as a quad layer rendered by a WebGL-based library. Because of this, at minimum we would need to validate that all the layers passed to the session are from the same API. As mentioned, though, that would still allow developers to thrash between APIs from frame to frame, which sounds like a terrible idea even if we can facilitate it.

A mode switch, like @mwyrzykowski suggested, might be better, but for an OpenXR-based backend that changed its graphics binding based on the JS graphics API in use, that would involve tearing down and restarting the entire session. Probably too disruptive.

So it's probably best if we indicate the API that will be used at session creation time. I can think of a couple of ways to do this. One is that we could have a "webgpu" feature. It's not clear to me, though, whether that would be a feature that's mutually exclusive with "layers", or one that requires "layers" but forbids part of its use. That could get pretty messy. It may be simpler to add a new enum to the session creation options instead.

Having this set at session creation time would also solve a related problem I've been wondering about during my prototyping: WebGPU projection matrices have a [0, 1] depth range, whereas WebGL projection matrices have a [-1, 1] range. This would allow sessions to return the right depth range for the API without developer intervention, which is nice.
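To make that depth-range difference concrete, converting a WebGL-style projection matrix to WebGPU's depth range is just a row remap (a minimal sketch; the function name is illustrative):

```js
// Remap clip-space depth from [-1, 1] to [0, 1]: z' = (z + w) / 2,
// i.e. the new z row is the average of the old z and w rows.
function glToGpuProjection(m) {
  const out = Float32Array.from(m);
  for (let col = 0; col < 4; col++) {
    const z = col * 4 + 2; // z-row entry of this column (column-major)
    const w = col * 4 + 3; // w-row entry of this column
    out[z] = 0.5 * (m[z] + m[w]);
  }
  return out;
}
```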
/tpac I'm going to tentatively mark this for TPAC unless you want to talk about it sooner (in 2 weeks).
Sounds great to me, that would be my preference as well. I am also fine discussing at TPAC.
After giving it some additional thought, I'm going to advocate for using a feature string to perform the WebGL/WebGPU mode switch. So:

```js
xrSession = await navigator.xr.requestSession('immersive-vr', {
  requiredFeatures: ['webgpu'] // Exact string subject to discussion
});
```

This likely provides a better developer experience, because existing WebXR implementations will actively reject such a session request if WebGPU integration isn't supported. (The spec says that any unrecognized string in `requiredFeatures` must cause the session request to be rejected.) This also allows developers to use `optionalFeatures` to detect WebGPU support and fall back to WebGL when it isn't available.
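For illustration, the fallback pattern this enables might look like the following (the exact feature string is, again, subject to discussion):

```js
let xrSession;
try {
  xrSession = await navigator.xr.requestSession('immersive-vr', {
    requiredFeatures: ['webgpu'], // hypothetical feature string
  });
  // Granted: render with WebGPU layers.
} catch (err) {
  // Implementations without WebGPU integration reject the unrecognized
  // required feature, so fall back to a plain WebGL session.
  xrSession = await navigator.xr.requestSession('immersive-vr');
}
```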
I could live with this but would prefer a new API (ie a separate session request call for WebGPU). Should we also add an enum attribute to XRSession to indicate the graphics interface type, or create a whole new interface? Both approaches will require substantial changes to the WebXR and WebXR Layers specs. A new interface might be more work but would result in a cleaner (or potentially a whole new) spec.
By a whole new interface, do you mean introducing something like a WebGPU-specific session type? A new session request API feels like a heavier-weight change than a feature string, which is the established pattern for extending session capabilities. As for changes to the WebXR and Layers specs, I don't think it'll be too bad? We would simply indicate (probably in the WebGPU bindings module spec itself) that the constructors for the WebGL-based layer types throw if the session was created for use with WebGPU, and vice versa.
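As a rough sketch of that shape (the binding and layer creation calls follow the explainer's direction, but the exact names and options are not settled):

```js
// Hypothetical flow: a session requested with the 'webgpu' feature is
// bound to a WebGPU device; constructing a WebGPU binding against a
// WebGL session (or vice versa) would throw.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const gpuBinding = new XRGPUBinding(xrSession, device);
const layer = gpuBinding.createProjectionLayer({ colorFormat: 'bgra8unorm' });
xrSession.updateRenderState({ layers: [layer] });
```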
It would just be to keep the spec clean and avoid having to add WebGPU special cases throughout the existing text.
OK. We can always iterate on this later.
True. Will the…
Hi, currently the explainer does not seem to prevent using WebGL one frame and WebGPU the next. This is problematic because WebGL's coordinate system is inverted relative to WebGPU's, and I'm not sure that every backend which implements WebGL (via OpenGL) can share textures with WebGPU if WebGPU is implemented with, for instance, Vulkan. It is additionally problematic because shader modules might already have been compiled with WebGPU prior to entering the immersive session, and would then potentially have to be partially recompiled to handle the inverted coordinate system.
I would like to know if the backend should be specified on the XRSession prior to entering the immersive session. E.g., something like an `xrSession.setBackend("webgpu")` method, where an exception is thrown if WebGPU is not supported; otherwise, the XRSession uses WebGPU's coordinate system. We could alternatively name it `setPreferredBackend(name)`, where `name` is one of `"webgl"` or `"webgpu"`, or any number of similar methods. And similarly, creating an `XRGPUBinding` from an `XRSession` which does not call this method would be an error.

If we do not have something like this, then the WGSL shaders which pass vertical positions will be inverted in WebXR's default coordinate system, which matches WebGL. I haven't fully considered whether an implementation can invert that without significant performance cost or runtime shader recompilation.
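One possible mitigation (an assumption on my part, not something settled in this thread) is to bake the Y flip into the projection matrix handed to existing WGSL shaders instead of recompiling them:

```js
// Negating row 1 (the Y row) of a column-major 4x4 projection matrix
// flips clip-space Y, compensating for the WebGL/WebGPU inversion
// without touching compiled shader modules.
function flipProjectionY(m) {
  const out = Float32Array.from(m);
  out[1] = -m[1]; out[5] = -m[5]; out[9] = -m[9]; out[13] = -m[13];
  return out;
}
```

Note that this also reverses triangle winding, so front-face culling state would need to be flipped to match.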
cc @toji as the author of the document.
During our effort to implement the explainer, this is the one issue I came across. I have only tried it with a very simple head-tracked triangle test page, but it seems great 👍. Initially the triangle was upside down, because I used the same shader from my WebGPU "hello triangle" page without accounting for the y-axis inversion.