display: add display_flush #79936
Conversation
Introducing an API change should be accompanied with at least one driver that implements it.
That being said, this would tie the display driver closely to the lvgl way of working, which might require some more discussion.
I don't think there is an upstream driver that requires this right now :/
I kind of disagree with this (that it ties to LVGL; I'm of course open to discussions). I think every display works in frames, and every frame has an end. An alternative API would be
The problem I have is that regardless of LVGL, the
Implemented it in
Is the name bothering? Would
EDIT: added a couple of options and thoughts in the PR description
This PR should be split into different commits:
- Introduce API function
- Implement in dummy driver
- Calling from LVGL
@danieldegrasse @jfischer-no can you weigh in here?
I am not contradicting myself: 1 PR, 3 commits please
Aaaah. Now it makes sense. Thanks for clarifying.
Implements the `display_flush` API function, introduced in zephyrproject-rtos#79936. Signed-off-by: Martin Stumpf <martin.stumpf@vected.de>
Why are the commits and PR comments from different accounts/persons?
Work email / private email. Is this a problem? I once tried (in a different PR) with
@Finomnis it should be
@JarmouniA Like this?
Introduces support for double-buffered/latched displays. Currently, every write has to be presented to the user immediately, which negates the advantage of latched displays to prevent frame tearing. Signed-off-by: Martin Stumpf <finomnis@gmail.com>
Implements the `display_flush` API function for the `display_dummy` driver. Signed-off-by: Martin Stumpf <finomnis@gmail.com>
In frames with multiple writes (officially supported through `CONFIG_LV_Z_VDB_SIZE`) the display needs to be signalled that the current frame is over and the content should be displayed. This allows displays to present the UI without tearing artifacts. Signed-off-by: Martin Stumpf <finomnis@gmail.com>
I'd like to better understand exactly what the case for a flush would be. Typically when using LVGL with a 1/10 buffer size, you need to write to the display graphics RAM when the buffer is presented, because the only RAM on the system actually capable of storing the full framebuffer is the graphics RAM on the display controller. In other cases (like the eLCDIF), the driver must define an internal framebuffer, and update it with the contents of the partial framebuffer each time LVGL sends a new one.
Typically the way display controllers deal with the possibility of tearing is via a TE (tearing effect) signal, which indicates when the display controller starts scanning a new frame from graphics RAM to the LCD. Perhaps your FPGA could use something similar?
@danieldegrasse There are actually two sources of synchronization that could cause tearing:
Even with a Tearing Effect/VSYNC signal, the chunk tearing would not be eliminated. The information about which write belongs to which frame, and at which write a new frame begins, is simply missing from the display driver interface. If LVGL renders in 1/10th screen chunks, all the driver sees is a continuous stream of `write`s.
To give you more detail about our concrete usecase: we have an FPGA with two frame buffers on it. The scanning speed of the FPGA is fixed (the UI is just an overlay on an FPGA-internal video stream), so we cannot sync the display to the UI; we need to sync the UI to the display. Further, our internal RAM is not enough to hold an entire frame, so we need to utilize LVGL partial renders. Our procedure is:
For step 3, we need two sync sources. We need to know:
Hopefully that kind of explains the problem.
Yeah, this is a great explanation, thank you for laying it out. It seems like LVGL has
Same thought. It would also be backwards compatible if we added it to the
@danieldegrasse there's also another reason for the extra API: the "flush" does not require the UI buffer any more. So it can happen after we give that buffer back to LVGL, and LVGL can continue rendering while we do the page flip and wait for vsync.
Thanks for the explanation @Finomnis. Change itself looks good. Can you maybe add a short sentence about this to the v4.0 release notes?
Considering this more, we need to clearly define when a user needs to call the `display_flush` API. I agree that there is potentially a need for this change, but we have an issue here because it fundamentally represents a different method of managing displays than we currently have in Zephyr today, right? Currently the user can assume that calling `display_write` will result in data being written to their display, but with this change that is no longer true. If I understand the description of your downstream application right, no data would actually appear on the display until `display_flush` was called. This changes `display_write` to effectively "queue" writes to the display, then `display_flush` passes those writes to the display.
One way I could see handling this would be to change the documentation around `display_write` to state that writes to the display may occur immediately or be queued, and a user must call `display_flush` to send all queued writes to the display. I don't like this though, because this means all downstream applications need to start calling `display_flush` to use the API consistently. In some ways I would prefer if we added a parameter to the `display_write` API call that acts as a hint to the driver that the display should be flushed. This would break downstream applications, but it would do so in a way that was easy to fix. I could see users easily missing the new requirement to call `display_flush`, since it wouldn't be required for most displays, just required by the API definition.
@faxe1008 or @pdgendt do you have any thoughts here? I would also add that I agree if we make this API change there needs to be a full implementation of it. I think this could actually be done in the eLCDIF driver (or another driver using full framebuffers) using logic like so:
- if `display_write` is called, simply queue the write into a framebuffer
- when `display_flush` is called, pass that framebuffer to the eLCDIF hardware so it renders to the screen.
To be clear, I don't have an issue with the concept of changing the display API to deal with this issue; I just want to make sure we fully understand the implications of doing so, and communicate the change to downstream users effectively.
@danieldegrasse I agree that it is kind of ambiguous whether a write takes immediate effect or not. I like the extra API since it does not break downstream. Would it maybe be feasible to add a flag to the display capabilities
Yeah, I like this a lot. @Finomnis would you like some help getting this supported in a driver like the eLCDIF? We need to have a case of this being used in tree, and need to update the display sample
I kind of thought the "double buffer" capability is exactly that, but it turns out to be something weird and obscure. There's also a use of `lv_display_flush_is_last` in another e-ink display case already, where blanking gets used in an (imo) incorrect way. This could also be removed after this change and instead be implemented through a flush. Should we rework those? What's the point of the existing double buffer capability?
Docs state that "The display has two alternating RAM buffers", which imo is not necessarily the same thing. I could not find any in-tree usage of this flag. Adding a flag with an unambiguous explanation would be good, I think.
IIRC some EPDs use the double buffer to determine which area is to be updated.
Yes, that can be used with some EPD displays that support partial writes to display controller memory. Not sure why the LVGL API is mixed in here, this is just some graphics library.
`SCREEN_INFO_DOUBLE_BUFFER`? I do not see it used in the tree.
Fixes #79798.
Indicates to the display that all previous writes should be flushed to the display.
This adds support for display drivers that internally contain a double-buffering or latching mechanism.
Currently the only way to implement those displays is to present all `write`s to the user immediately, which could cause tearing artifacts for multi-write renders. (It's quite common for UI managers to render to partial framebuffers in low-memory situations; LVGL, for example, recommends 1/10th of the screen size.)
Open questions:
- `display_flush` (current PR code; conveys the intent pretty well that this is optional for displays to implement)
- `display_finalize_frame` (might be a little too verbose)
- `display_present` (maybe a little too strong, sounds like it is a must-use for displays)
- `display_flip` (probably not, because it could be confused with x or y pixel order setting, like 'flipping' the image)
- `display_pageflip` (might be too usecase specific)
- `display_show` (like present, might be too strong)