
Add VRCompositor commands for WebVR #603


Merged (1 commit, Nov 30, 2016)

Conversation

@MortimerGoro (Contributor, author) commented Nov 28, 2016:

Some WebVR commands, such as Vsync and SubmitFrame, must be called on the WebGL render thread. This PR adds the traits required to run those render commands from the WebVR implementation. The WebVR implementation and the vendor VR SDK calls are decoupled using these traits and ipc-channel commands.




```rust
// Optional trait object that allows creating a VRCompositor instance to handle WebVR commands.
// Some WebVR commands such as Vsync and SubmitFrame must be called in the WebGL render thread.
vr_compositor_creator: Arc<Mutex<Option<Box<VRCompositorCreator>>>>
```
@kvark (Member) commented:
This compositor creator is referenced from both the render backend and the renderer, and it looks like the only reason it's needed on the renderer is for set_vr_compositor_creator. Can we assume the compositor creator is known at init time? In that case, we might be able to just send it over and store it without all the Arc/Mutex wrappers.

@MortimerGoro (Contributor, author) commented Nov 28, 2016:

I only need access to the compositor creator from the render_backend. I could add an Option<Box<VRCompositorCreator>> to the RendererOptions struct if that's not a problem. I followed other setter patterns like set_render_notifier and kept RendererOptions intact. The VRCompositor creator is known at init time and won't change, so we can avoid the Arc/Mutex wrappers.

Let me know if you like the RenderOptions change and I'll update this PR.

@kvark (Member) commented:

Could you explain a bit more about what option you want to add to RendererOptions?

@MortimerGoro (Contributor, author) commented:

The compositor creator instance, something like this:

```rust
pub struct RendererOptions {
    pub device_pixel_ratio: f32,
    pub resource_override_path: Option<PathBuf>,
    pub enable_aa: bool,
    pub enable_msaa: bool,
    pub enable_profiler: bool,
    pub debug: bool,
    pub enable_recording: bool,
    pub enable_scrollbars: bool,
    pub precache_shaders: bool,
    pub renderer_kind: RendererKind,
    pub enable_subpixel_aa: bool,
    pub vr_compositor_creator: Option<Box<VRCompositorCreator>>,
}
```

@MortimerGoro (Contributor, author) commented:

After that, in the Renderer constructor, send the instance to RenderBackend::new() and remove the Arc/Mutex wrappers.
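The handover being discussed could look roughly like this (a sketch with illustrative names, not the actual webrender API): the creator is known at init time, so the backend can own it outright.

```rust
// Sketch of an init-time handover: the creator is passed by value into
// the backend's constructor, so no Arc/Mutex wrappers are needed.
// All names here are illustrative, not webrender's real API.
pub trait VRCompositorCreator: Send {
    fn create_compositor(&mut self, compositor_id: u64);
}

pub struct RenderBackend {
    // Owned by the backend thread alone; no shared-state wrappers.
    vr_compositor_creator: Option<Box<dyn VRCompositorCreator>>,
}

impl RenderBackend {
    pub fn new(creator: Option<Box<dyn VRCompositorCreator>>) -> RenderBackend {
        RenderBackend { vr_compositor_creator: creator }
    }

    pub fn has_vr_support(&self) -> bool {
        self.vr_compositor_creator.is_some()
    }
}

// A do-nothing creator, standing in for a vendor-specific one.
pub struct NoopCreator;
impl VRCompositorCreator for NoopCreator {
    fn create_compositor(&mut self, _compositor_id: u64) {}
}
```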

@kvark (Member) commented:

I suppose that would be good, yeah. Interested in what @glennw thinks.

```rust
VRCompositorCommand::SubmitFrame(compositor_id, left_bounds, right_bounds) => {
    if let Some(ref compositor) = self.vr_compositors.get(&compositor_id) {
        let texture = self.resource_cache.get_webgl_texture(&ctx_id);
```
@kvark (Member) commented:

The whole VR logic seems to be mostly independent of WR, except for this resource_cache access. Do the compositors and their compositor creator have to live within the WR backend/renderer at all? Perhaps we could narrow down the interface between them and reduce the integration.

@MortimerGoro (Contributor, author) commented:

Besides the resource_cache access to get the current texture_id (which might change after a WebGL resize), another requirement is that certain VRCompositor calls must run on the same thread as the WebGL commands. There could be many different VRCompositor implementations for different VR vendors, and some of them must also be created/initialized on the render thread (because they use the shared OpenGL texture and the active OpenGL context).

These functions must run in response to ipc-channel messages from JavaScript, and we have to minimize the render path to meet WebVR latency requirements. There is no way in webrender to run generic closures, because of serde serialization and sandboxed processes, so I implemented it using the traits. I'm about to publish a blog post with more info about the overall WebVR implementation architecture; I'll send you the link. If you have ideas for narrowing down this interface while keeping the latency requirements, that would be great.
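The trait-plus-channel decoupling described above can be sketched with std::sync::mpsc standing in for ipc-channel (the command set and all names here are illustrative, not the PR's actual definitions):

```rust
use std::collections::HashMap;
use std::sync::mpsc;

// Illustrative command set; the real PR defines its own variants.
pub enum VRCompositorCommand {
    Create(u64),
    SubmitFrame(u64),
}

// Vendor SDK calls hide behind this trait, so the dispatching code
// never links against any specific VR SDK.
pub trait VRCompositor {
    fn submit_frame(&mut self);
}

struct MockCompositor;
impl VRCompositor for MockCompositor {
    fn submit_frame(&mut self) {}
}

// Runs on the thread that owns the GL context: drain pending commands
// and dispatch each one to the matching trait object. Returns a log of
// what happened, purely for illustration.
pub fn run_commands(rx: &mpsc::Receiver<VRCompositorCommand>) -> Vec<String> {
    let mut compositors: HashMap<u64, Box<dyn VRCompositor>> = HashMap::new();
    let mut log = Vec::new();
    for cmd in rx.try_iter() {
        match cmd {
            VRCompositorCommand::Create(id) => {
                compositors.insert(id, Box::new(MockCompositor));
                log.push(format!("created {}", id));
            }
            VRCompositorCommand::SubmitFrame(id) => {
                // Commands for unknown compositor ids are ignored.
                if let Some(compositor) = compositors.get_mut(&id) {
                    compositor.submit_frame();
                    log.push(format!("submitted {}", id));
                }
            }
        }
    }
    log
}
```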

@kvark (Member) commented:

> specified VRCompositor calls must be run in the same thread as the WebGL commands

I understand this. But from WR's point of view, the user creates the Renderer, so the user is fully aware (and in control) of the thread that executes WebGL commands. Can't the user just manage the compositors (with their creator) then, only interacting with WR via the resource cache, if needed?

Disclaimer: I'm still learning the WR way of things, so I might just be missing the obvious answer.

@MortimerGoro (Contributor, author) commented Nov 28, 2016:

> I understand this. But from WR's point of view, the user creates the Renderer, so the user is fully aware (and in control) of the thread that executes WebGL commands.

The user creates the Renderer, but the Renderer internally creates the render_backend thread, so the user doesn't really have direct access to that inner thread. At least I need a trait to run "hard code", because that thread can only be accessed through predefined ipc-channel messages, which have some limitations (they can't carry closures, complex objects, pointers, and more).

> Can't the user just manage the compositors (with their creator) then, only interacting with WR via the resource cache, if needed?
>
> Disclaimer: I'm still learning the WR way of things, so I might just be missing the obvious answer.

Yes, I understand what you mean. I could reduce the current VRCompositor interfaces to a single VRCompositorHandler trait that uses byte-based (Vec<u8>) messages to forward-serialize all possible messages without defining them in WR. It has the advantage of a less coupled interface, but it's a more obscure API. If that's the style you want in WR, I can update the PR. I'm also interested in what @glennw thinks about both options.

The trait I'm talking about would look like this:

```rust
// Single instance in render_backend
pub trait VRCompositorHandler {
    fn handle(&self, data: Vec<u8>, sender: Option<IpcSender<Vec<u8>>>);
}

// IPC-channel message that uses the VRCompositorHandler instance
VRCompositorCommand(data: Vec<u8>, sender: Option<IpcSender<Vec<u8>>>)
```

@kvark (Member) commented Nov 28, 2016:

> The user creates the Renderer, but the Renderer internally creates the renderer_backend thread so the user has not really a direct access to that inner thread.

Yes, but why do you need this thread? VRCompositor calls must be done on the same thread that Renderer is, not RenderBackend.

> I could reduce the current VRCompositor interfaces to a single VRCompositorHandler trait, that uses bytes (Vec<u8> based messages) to forward-serialize all possible messages without defining them in WR

Oh, no, please no Vec<u8> messages if we have this option.

@bors-servo (Contributor) commented:
☔ The latest upstream changes (presumably #595) made this pull request unmergeable. Please resolve the merge conflicts.

@glennw (Member) commented Nov 28, 2016:

@MortimerGoro I'm a little confused about this too, with the same question @kvark had.

The GL calls get issued on the Render thread, the RenderBackend thread never deals directly with a GL context or issues GL commands. Does this PR assume that there is GL work done on the backend thread, or am I misunderstanding the review comments so far?

@MortimerGoro (Contributor, author) commented:

> Yes, but why do you need this thread? VRCompositor calls must be done on the same thread that Renderer is, not RenderBackend.
>
> The GL calls get issued on the Render thread, the RenderBackend thread never deals directly with a GL context or issues GL commands. Does this PR assume that there is GL work done on the backend thread, or am I misunderstanding the review comments so far?

Because of these reasons:

  • VRCompositor calls must be fully synced with the WebGL calls. RenderBackend is the thread that runs WebGL commands now (https://github.com/servo/webrender/blob/master/webrender/src/render_backend.rs#L335). Running the functions on that thread provides the best latency, which helps avoid motion sickness. Another requirement is that a JavaScript user can submit frames to the headset in the middle of an animation callback and then perform other WebGL commands that are not rendered to the headset on that frame, so synchronization with the WebGL calls is a must.

  • VR headsets have their own Vsync mechanism, which might conflict with the main renderer's Vsync. The spec allows rendering to a headset (90 fps) and to a desktop window (60 fps) at the same time: we can have a requestAnimationFrame loop for the headset and another for the window. Doing both on the main render thread would be a problem.

> Oh, no, please no Vec messages if we have this option.
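The clash between the two Vsync cadences mentioned above can be made concrete with a small timing sketch (illustrative code, not from the PR): over a 50 ms window, 90 Hz and 60 Hz tick streams essentially never line up, which is why one shared rAF loop cannot serve both.

```rust
// Tick timestamps (in microseconds) that a vsync source produces
// within a given window, assuming a fixed refresh rate.
pub fn vsync_ticks(refresh_hz: u64, window_us: u64) -> Vec<u64> {
    let period_us = 1_000_000 / refresh_hz;
    (1..)
        .map(|i| i * period_us)
        .take_while(|&t| t <= window_us)
        .collect()
}
```

With integer periods (11_111 us at 90 Hz, 16_666 us at 60 Hz), no tick in a 50 ms window coincides between the two sources, so each cadence needs its own driver.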

@glennw (Member) commented Nov 28, 2016:

@MortimerGoro You're quite right, my apologies - I thought the webgl commands were proxied to the renderer thread to be applied there.

It's certainly not ideal to have the webgl commands replayed in the backend thread, but I'm not sure what the best alternative would be right now, or if it has to be this way.

@emilio Do you recall why they are replayed in the backend thread?

@emilio (Member) commented Nov 28, 2016:

@glennw I don't think there's a specific reason. Seemed easier to not pollute the main thread with WebGL state, plus the backend thread is the one that is directly aware of publishing a frame, so all the WebGL cleanup that needs to run before that was easier done from there.

We can move it to another thread if you wish, though it's going to be a fair amount of sync messaging for no good reason IMO.

@MortimerGoro (Contributor, author) commented:

@glennw @emilio FYI, when I saw the current implementation I thought it was done that way deliberately, for better parallelism and as a sandboxed thread to keep malicious input out of the main thread (WebGL allows executing arbitrary code on the GPU; forgetting a proper makeCurrent could be very dangerous).

IMO, before making any changes we should evaluate a lot of things to get the best out of WebGL.

I'll press this button (http://nooooooooooooooo.com/) if there are breaking changes before submitting the WebVR implementation I've been working on XD

@glennw (Member) commented Nov 29, 2016:

@MortimerGoro Sure, there's a lot of questions here, and it makes sense to consider all of these before making any changes :)

My main concern at the moment is that executing a vsync command in the backend thread sounds like it could cause performance issues (since the main job of the backend thread is to be preparing command buffers for the render thread to execute, and responding to queries from layout threads quickly).

Having said that, I'm happy to merge the WebVR stuff (after review of course) as is, so long as we have a plan to investigate / resolve issues like I mentioned above, and so long as it doesn't have any adverse effects when WebVR isn't active.

Apart from this PR, is there a lot more code to come that directly affects WR?

@MortimerGoro (Contributor, author) commented Nov 29, 2016:

> Having said that, I'm happy to merge the WebVR stuff (after review of course) as is, so long as we have a plan to investigate / resolve issues like I mentioned above, and so long as it doesn't have any adverse effects when WebVR isn't active.

@glennw Of course. I'd love to help investigate the best way to fix that issue and improve WebGL performance.

> Apart from this PR, is there a lot more code to come that directly affects WR?

WR only requires this PR ;) Most of the code will be added in another PR to Servo and in a separate library (https://github.com/MortimerGoro/rust-webvr). I'm looking forward to sharing the blogpost with the description of the overall architecture.

@MortimerGoro (Contributor, author) commented:

Updated PR. I've narrowed down the interfaces required in WR and moved more code to the WebVR component in Servo.

@kvark I didn't add the VRCompositorHandler instance to RendererOptions because it requires Clone and doesn't play well with the unsized trait object.
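The Clone constraint mentioned here can be illustrated in isolation (all types below are hypothetical, not webrender's): `#[derive(Clone)]` requires every field to be Clone, and `Box<dyn Trait>` is not, while `Arc<dyn Trait>` is, since cloning it only bumps a reference count. That constraint is presumably why the handler ends up set through a setter rather than through the options struct.

```rust
use std::sync::Arc;

// Hypothetical handler trait, standing in for the one in the PR.
pub trait VRCompositorHandler: Send + Sync {
    fn name(&self) -> &'static str;
}

pub struct Handler;
impl VRCompositorHandler for Handler {
    fn name(&self) -> &'static str { "vr" }
}

// Box<dyn VRCompositorHandler> is not Clone, so it cannot sit inside a
// #[derive(Clone)] options struct. Arc<dyn VRCompositorHandler> is
// Clone (the pointer is cloned, not the handler), so this compiles.
#[derive(Clone)]
pub struct Options {
    pub handler: Option<Arc<dyn VRCompositorHandler>>,
}
```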

@glennw (Member) commented Nov 30, 2016:

@bors-servo r+

@bors-servo (Contributor) commented:
📌 Commit 7aea7fb has been approved by glennw

@bors-servo (Contributor) commented:
⌛ Testing commit 7aea7fb with merge b63e135...

bors-servo pushed a commit that referenced this pull request Nov 30, 2016
Add VRCompositor commands for WebVR

Some WebVR commands, such as Vsync and SubmitFrame, must be called on the WebGL render thread. This PR adds the traits required to run those render commands from the WebVR implementation. The WebVR implementation and the vendor VR SDK calls are decoupled using these traits and ipc-channel commands.

@glennw (Member) commented Nov 30, 2016:

Opened #607 as a follow up.

@bors-servo (Contributor) commented:
☀️ Test successful - status-travis

@bors-servo bors-servo merged commit 7aea7fb into servo:master Nov 30, 2016