[video_player_web] Add an alternate mode that renders images directly into the canvas #6286
Conversation
It looks like this pull request may not have tests. Please make sure to add tests before merging. If you need an exemption to this rule, contact "@test-exemption-reviewer" in the #hackers channel in Chat (don't just cc them here, they won't see it! Use Discord!). If you are not sure if you need tests, consider this rule of thumb: the purpose of a test is to make sure someone doesn't accidentally revert the fix. Ask yourself, is there anything in your PR that you feel it is important we not accidentally revert back to how it was before your fix? Reviewers: Read the Tree Hygiene page and make sure this patch meets those guidelines before LGTMing.
Do you need both rendering modes at the same time, or do you always use the image-to-canvas mode? If only the canvas mode: Should this be a separate package of (Also, qq: does this play sound?)
@ditman this does play sound. I think this way of rendering should actually be the default, and the platform view style should be deprecated and removed eventually: it totally crashes the renderer if you have more than 8 videos and doesn't work with any of Flutter's built-in features like shaders. I only put in a configuration option to switch the behavior because I thought people might be scared of change until it has some real-world use. If people really want a VideoPlayer HTML tag, they can do it with HtmlElementView. The web video player is unusable in its current state for a lot of people like us.
@ditman I believe these rendering issues are unique to the web player, as the other platforms are already doing something like this to render the video into textures, so making this the default way that videos work would bring the platforms a bit more in line with each other in terms of behavior. I don't know of a good reason to keep the old behavior around forever or as the default. It might actually be better if the default of the setting was true, or if the old behavior was put into a legacy player package. That would be quite a bit of code duplication just for a toggle on how the rendering part happens, though. At the end of the day, all paths go through a video element. Maybe the renderer widget itself could be abstracted so packages could choose a rendering strategy instead of having a boolean toggle? Might be a little cleaner.
@ditman refactored a bit to allow rendering logic to be overridden without swapping out the whole package. This would allow the newer or older portions to be moved into another package without a lot of code duplication. Is there a good reason to retain the old strategy as a default? I think for safety it's nice to allow configuration in case someone has reasons (like maybe a case where you are really low on texture memory in Safari and are going to be able to avoid having any overlay groups created), but I don't know of a good reason to use an HtmlElementView over rendering into the canvas, as overlay groups cause a lot of issues in general.
@jezell what browser are you targeting? I'm guessing Safari? This can't be the default method, because I guess it can be made to fall back to the html element in older browsers, but how can this be more performant than the native tag?
@ditman performance is secondary to the fact that overlay groups cause rendering to entirely stop working. Regardless, the performance issue isn't the native video element itself; it's that the native video element is inserted into an overlay group, and the overlay groups cause issues. That's why the engine has a hard stop at a really low number. Nothing should ever depend on overlay groups or an HtmlElementView in a framework intended to be used generically. If overlay groups didn't exist, the performance would be that of the native element, but overlay groups do exist. Avoiding the problems with overlay groups while still using a native element is possible, but only if you avoid using HtmlElementView entirely and just render the element on top of the canvas manually. IMO, that's probably how overlay groups should work in the first place (rendering always 100% below or 100% above the canvas), but that's not how they work. At the moment, there is a large cost to overlay groups, so anything that forces you to use them will make you pay a performance price.
@ditman It wouldn't be a bad idea to make the rendering strategy that uses the native player avoid overlay groups entirely. Then you could pick between rendering into the canvas at a reasonable cost or rendering on top of the canvas at no cost. That would still be better than not being able to ever put a video on a page that happens to have 8 SVGs or 8 WebRTC cameras on it.
Most frameworks use requestAnimationFrame when requestVideoFrameCallback isn't available. It's a reasonable strategy, since RAF is used all over the place, including in flutter itself as a fallback for various missing APIs. |
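The fallback pattern described here can be sketched as a small feature-detection helper. This is an illustrative sketch, not code from the PR; the function name and the injectable `raf` parameter are assumptions made so the selection logic can be exercised without a browser.

```javascript
// Illustrative sketch (not the PR's code) of the fallback described above:
// prefer requestVideoFrameCallback when the video element supports it,
// otherwise fall back to requestAnimationFrame. `raf` is injectable so the
// selection logic can be tested outside a browser.
function scheduleFrame(video, callback, raf = globalThis.requestAnimationFrame) {
  if (typeof video.requestVideoFrameCallback === 'function') {
    // Fires once per presented video frame (Chrome, Safari).
    video.requestVideoFrameCallback(callback);
    return 'rvfc';
  }
  // Fires once per display refresh (Firefox and older browsers).
  raf(callback);
  return 'raf';
}
```

A caller would re-invoke `scheduleFrame` from inside `callback` to keep pumping frames while the video plays.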
@jezell I'm not opposed to merging this, because I like that it brings new features that were impossible before (filtering, probably access to the stream of frames), but it's a lot of "experimental" APIs (and more that could be used, like the Web Codecs API, I guess), rather than
@ditman added Firefox support via RAF; it works fine for playing full-screen videos in Firefox for me in places where they crash due to overlay group limitations on the current player.
@ditman Thanks. I understand that this is a bit cutting edge and scary, but it does work better than the current method, and I think providing the fallback in the plugin if you prefer the old behavior should help address the concern -- or at the very least including the new logic, but making you turn this on as "experimental" to start.
There are some dart analyze issues, but those should be straightforward to fix. A couple of tests are complaining, though:
Another issue with this new technique is that it exposes video_player_web to the same CORS problems that NetworkImage has in the canvaskit renderer: videos need to satisfy CORS restrictions, which is not necessarily true with the platform view. See: flutter/flutter#107187 (comment). I guess we must (need to?) keep the platform-view based implementation for these cross-domain cases.
…g, capture bitmap at optimum width / height of video
@ditman yeah, crossOrigin is set to anonymous right now to at least allow some things through, and seeing that NetworkImage has the same problem, at least it's consistent with other assets. I don't think there's a good reason to forbid using the platform layer upon request, but if you opt into it, at least then you'd be choosing the risky behavior, and it could be properly documented to make sure you know what you are signing up for. I'd guess the average consumer doesn't know they have to be very careful about overlay groups until it's too late and their renderer is misbehaving. This could probably also be detected with an automatic fallback, or maybe defaulted based on the renderer you are using. If using the HTML renderer, maybe you'd prefer the video element because you don't have the WebGL limits to contend with? Since that's the default in places like mobile Safari that are constrained for texture space, it could actually be a good thing to default to the old way there.
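The CORS constraint discussed here comes down to an origin comparison: a same-origin video is always readable back into the canvas, while a cross-origin one needs crossOrigin='anonymous' plus server-side Access-Control-Allow-Origin headers. A hedged, illustrative helper (not part of the PR; the name is made up):

```javascript
// Illustrative helper (not part of the PR) for the CORS constraint above:
// frame pixels can only be read back (via drawImage / createImageBitmap)
// from a same-origin video, or from a cross-origin one served with CORS
// headers while the element's crossOrigin attribute is 'anonymous'.
function needsCorsHeaders(videoUrl, pageOrigin) {
  // Resolve relative URLs against the page, then compare origins.
  return new URL(videoUrl, pageOrigin).origin !== new URL(pageOrigin).origin;
}
```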
We expect to introduce platform view versions of playback for all of them over time, because the texture approach doesn't work with DRM-protected videos.
I would expect the same issues with DRM on the web. This seems to be explicitly saying that accessing frames of DRM-protected videos can't be relied on.
Good point. Hopefully all platforms will support both rendering into a texture and using a platform view, depending on the needs of the app. At least for us, it would not be optimal to get reduced functionality for all videos because some videos can be DRM'd. DRM'd content can likely get away with the limitations of platform views on the web, as apps like Netflix typically don't let you watch more than one video at a time. Other cases like WebRTC, collaboration apps, etc. often do end up with a lot of videos on the screen at once and none of them are DRM'd, which is where we are currently running into trouble with platform views on the web. Another possibility I was thinking could help here on web (at least for videos) would be to only use the platform view when the video is playing, and to swap to a captured RawImage when it is not playing. Then, as long as you don't need multiple videos playing at once, you are only burning a single layer. It wouldn't be a great solution for WebRTC scenarios, but those aren't covered by this plugin anyway.
That is my current expectation for the long term state, yes.
…dering mode at runtime, allow manual configuration of rendering strategy
Updated the PR based on feedback in this thread. Auto render mode (the default) chooses a rendering strategy dynamically at runtime. A specific strategy can also be selected so that people can choose platform views if they want things like DRM to work, or texture mode if they want shaders to work and no platform layers to be used. In auto mode:
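The auto-mode trade-off described in this thread can be sketched as a pure decision function. This is a hypothetical illustration only; none of these names are the PR's actual API.

```javascript
// Hypothetical sketch of an auto-mode decision, mirroring the trade-offs
// discussed in this thread (names are illustrative, not the PR's API):
// DRM'd or CORS-restricted sources can't have their frames read back, so
// they get the platform view; everything else gets the texture path, which
// avoids overlay groups and works with shaders.
function chooseStrategy({ framesReadable, requiresDrm }) {
  if (requiresDrm || !framesReadable) {
    return 'platformView'; // native video element in an overlay group
  }
  return 'texture'; // frames copied into the Flutter canvas
}
```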
@jezell we're discussing internally whether there's anything else that could be exposed through ui_web to make this more efficient and better integrated with the renderer mechanisms (for example, there's a new skwasm renderer coming that attempts to do off-thread rendering, and I'm not sure how this would play with it). Also, the way the renderer is determined, or the browser detection code, could be vended from the engine so everything is guaranteed to stay in sync with whatever it says.
One quick thing: could you split the _AdapterVideoPlayerRenderer into separate file(s), so we keep video_player_web.dart as "the place that has the platform channel overrides" and the new stuff that knows how to render the video Widget in different files?
Also, there's no need for classes to be "private" (prefixed by _) in this package, because people shouldn't be using it directly in their code.
```dart
bool get rendererCanvasKit {
  return hasProperty(web.window, 'flutterCanvasKit');
}
```
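A JS analogue of the check above, extended with the skwasm case the reviewer raises below. The window globals probed here are assumptions about engine internals (only `flutterCanvasKit` appears in the diff; the skwasm name is invented for illustration), which is exactly why vending this from the engine would be safer.

```javascript
// JS analogue of the Dart getter above, with a speculative skwasm branch.
// `win` is injectable for testing. `flutterCanvasKit` matches the Dart code
// in the diff; '_flutter_skwasmInstance' is an assumed placeholder name and
// may not match the real engine.
function detectRenderer(win) {
  if ('flutterCanvasKit' in win) return 'canvaskit';
  if ('_flutter_skwasmInstance' in win) return 'skwasm';
  return 'html';
}
```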
(Should be provided by the engine, also take skwasm into consideration?)
```dart
final web.ImageBitmap newSource =
    await promiseToFuture<web.ImageBitmap>(
        web.window.createImageBitmap(widget.element));
final ui.Image img = await ui_web.createImageFromImageBitmap(newSource);
```
Is this the fastest way?
@ditman It's definitely not the fastest way the browser supports, but it is the only way that's currently exposed to get a frame into the renderer. I have some suggestions here in this thread:
flutter/flutter#144815 (comment)
Ideally, in browsers where a VideoFrame is available, it could be transferred to the worker for rendering. There are some constraints with VideoFrames, though: you can't hold onto them without blocking the decoding, so it might require some additional plumbing to make this work properly. Maybe WebCodecs would be the best option at some point: w3c/webcodecs#43. But that's also not doable without some work in the Flutter engine.
```dart
final web.CanvasRenderingContext2D context =
    canvas!.getContext('2d')! as web.CanvasRenderingContext2D;

context.drawImage(widget.element, 0, 0, canvas!.width, canvas!.height);
```
Is this the fastest way?
@ditman worth some benchmarking, but other than the native element, this is probably the fastest path to get a video frame rendered in FF and Safari. It avoids the transfer to the worker, which is the slow path in FF / Safari, and avoids the GC churn caused by getting bitmaps every frame.
This mode only exists to allow the player to use a platform layer while playing and a texture while not playing. Ideally there wouldn't be a need for the canvas element, because we could swap from a texture straight to the video element when it starts playing. However, the challenge is that you can't move a video element in the DOM without resetting it, so if you try to move the video element into the platform layer when it is created, it will immediately stop playing.
I haven't experimented with invisible platform layers yet, but a possible alternative could be using an invisible platform layer for the video when it's not playing and then swapping it to be a visible layer later. If the video element was not removed from the DOM during a swap from invisible to visible, there's a possibility it could work, but it depends on whether Chrome would allow us to get the frames while the video was in the invisible layer, and whether the element would get recreated by that swap.
```dart
  VideoRenderMode mode = VideoRenderMode.auto;
}

class _AdapterVideoPlayerRenderer extends StatefulWidget {
```
(Extract to separate files)
This whole file should be provided by flutter web engine, so the decision is always in sync?
@ditman yeah, that file is in the browser engine, but can it be imported from here? It would definitely be ideal to not have to double implement it.
Nope, it can't be imported now; it needs to be reexported from the ui_web package to work.
From triage: @ditman What's the status of this PR? Is it blocked by potential SDK changes?
From triage: @ditman Ping on the question above?
Sorry I missed the earlier message! There are at least two relevant changes to the SDK:
The priority right now is doing the work to move to canvaskit (js)/skwasm (wasm). Platform views are staying, so the current implementation of the engine stays. I'm still unsure how the image copying would play with the rest of the framework (but currently there's a pattern where people do custom rendering to bytes in Flutter, and I guess that'd need to continue working). I think this should land as an opt-in feature for the web video player; I'll give this some testing after I wrap up some unrelated stuff.
@ditman React Native Skia is using MakeLazyImageFromTextureSource for a faster path here. I really think that API should land in web_ui to allow this to be improved by binding the WebGL texture straight to the video element.
Are we blocked here? Or still in discussion...
@kevmoo The fix requires some changes that are in the main branch; how does the plugin release cycle work? I had pointed out the potential problem here: I think video_player_web is kind of a bad place to put the lower-level video rendering logic, because there are a lot of places that need the same logic, and video_player_web is way higher level, so it can't be used by camera_web or livekit_client. Not sure what approach the team wants to go with here. It's easy enough to hack it into all these places manually, but it feels wrong having to duplicate the code and then keep updating it as the web engine gets better.
Please see https://github.com/flutter/flutter/blob/master/docs/ecosystem/contributing/README.md#supported-flutter-versions for this repository's support model.
(triage) Hi @jezell, are we still blocked? Or do you know what the status of this PR is?
Thank you for your contribution. Since there are outstanding comments but the PR hasn't been updated in several months, I'm going to close it so that our PR queue reflects active PRs. Please don't hesitate to submit a new PR if you have the time to address the review comments. Thanks!
This allows the web video player to avoid issues related to having too many active overlay groups. When this mode is enabled as follows:
The video player will use createImageFromImageBitmap to grab frames as they are available and render them using RawImage instead.
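The per-frame pipeline described above can be sketched with its browser and engine calls injected, so the control flow is clear. All three injected names are placeholders, not real APIs: `captureBitmap` stands in for the browser's createImageBitmap, `toEngineImage` for ui_web.createImageFromImageBitmap, and `present` for handing the frame to a RawImage.

```javascript
// Minimal sketch of the per-frame pipeline described above. The injected
// functions are placeholders: `captureBitmap` stands in for
// createImageBitmap, `toEngineImage` for ui_web.createImageFromImageBitmap,
// and `present` for handing the resulting frame to a RawImage widget.
async function pumpFrame(video, { captureBitmap, toEngineImage, present }) {
  const bitmap = await captureBitmap(video); // copy the current video frame
  const image = await toEngineImage(bitmap); // upload it into the renderer
  present(image);                            // displayed via RawImage
  return image;
}
```

In the real plugin this would run once per frame callback; here the stages are stubs so the flow can be followed in isolation.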
Fixes:
flutter/flutter#144591
If you had to change anything in the flutter/tests repo, include a link to the migration guide as per the breaking change policy.
Pre-launch Checklist
- My code is formatted with dart format.
- The title of the PR starts with the name of the package surrounded by square brackets, e.g. [shared_preferences].
- I updated pubspec.yaml with an appropriate new version according to the pub versioning philosophy, or this PR is exempt from version changes.
- I updated CHANGELOG.md to add a description of the change, following repository CHANGELOG style.
- I updated/added relevant documentation (doc comments with ///).

If you need help, consider asking for advice on the #hackers-new channel on Discord.