Add a ReadWriteInterface for displays. #12
base: master
Conversation
Thanks for the PR. A few thoughts from looking it over:
It's not ideal that the
The enum handling is pretty much free. The problem here is that you don't know how applications are actually connecting the display. Some offer many different interfaces, even with different data widths so ideally a
That is only possible in parallel mode, which has never been reported to exist in the real world (and is also not supported by the driver). The I2C and SPI implementations are write-only:
@therealprof Thanks for reviewing.
Can you elaborate on this? Are there some changes in this PR that you feel intertwine the actual driver with the abstraction layer? The
Right, so by adding the generic type
Thanks for that info.
No, this PR is clean, but by having the actual
They do get to choose the ones they want to support right now, but if they want to provide the most flexibility they realistically have to implement most of them; you don't get the benefit of working in a lot of different situations if you don't. The necessity of including multiple data types and also endianness is something we learned the "hard" way. Only the display driver knows which format and endianness the display chip expects, but only the display-interface driver knows how to actually deliver the data in that format.
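For readers following along, the runtime-enum approach being defended here looks roughly like the following simplified sketch. The variant names are modeled loosely on display-interface's `DataFormat`; the `serialize` helper is purely illustrative, not crate code:

```rust
// Simplified sketch of a runtime data-format enum, loosely modeled on
// display-interface's `DataFormat`. Variants and helper are illustrative.
pub enum DataFormat<'a> {
    /// Plain 8-bit bytes.
    U8(&'a [u8]),
    /// 16-bit words to be sent big-endian, regardless of host endianness.
    U16BE(&'a [u16]),
    /// 16-bit words to be sent little-endian.
    U16LE(&'a [u16]),
}

/// Only the interface implementation knows how to put the driver's data on
/// the wire; here we just flatten it to the byte stream it would produce.
fn serialize(fmt: DataFormat<'_>) -> Vec<u8> {
    match fmt {
        DataFormat::U8(data) => data.to_vec(),
        DataFormat::U16BE(data) => data.iter().flat_map(|w| w.to_be_bytes()).collect(),
        DataFormat::U16LE(data) => data.iter().flat_map(|w| w.to_le_bytes()).collect(),
    }
}
```

The point made above is that the display driver picks the variant (the format the panel expects), while the interface decides how those bytes actually hit the bus.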
I have not checked what kind of magic your display driver does to make that happen, but believe me, it's not that simple. There are at least two more display drivers besides
I 100% agree, it's best to have them implemented here so they can be consumed by multiple display drivers. The only reason I didn't include my implementation (
In the current interface, the

The driver should format the pixel(s) needed for the active pixel format and push them down to the
No magic really :). I updated the code to be a bit clearer as to how it works, see here.

```rust
impl<T> PixelWriter<u8> for T
where
    T: ReadWriteInterface<u8>,
```

The above lets us determine how to write pixel data for an 8-bit interface. The same can be done with a

```rust
impl<T> PixelWriter<U18> for T
where
    T: ReadWriteInterface<U18>,
```
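To make the blanket-impl idea concrete, here is a hypothetical, simplified sketch; the `PixelWriter`/`ReadWriteInterface` shapes and the RGB565 packing are my assumptions, not the PR's exact code:

```rust
// Hypothetical sketch: the trait shapes and RGB565 packing below are
// assumptions for illustration, not the PR's actual definitions.
trait ReadWriteInterface<W> {
    fn write(&mut self, words: &[W]);
}

trait PixelWriter<W> {
    /// Pack an RGB888 pixel into the interface's word size and send it.
    fn write_pixel(&mut self, r: u8, g: u8, b: u8);
}

// For any 8-bit interface, RGB565 goes out as two bytes, high byte first.
impl<T> PixelWriter<u8> for T
where
    T: ReadWriteInterface<u8>,
{
    fn write_pixel(&mut self, r: u8, g: u8, b: u8) {
        let rgb565: u16 = ((r as u16 & 0xF8) << 8)
            | ((g as u16 & 0xFC) << 3)
            | (b as u16 >> 3);
        self.write(&[(rgb565 >> 8) as u8, rgb565 as u8]);
    }
}
```

The appeal of this shape is that the pixel-packing logic is selected at compile time by the interface's word type, rather than branched on at runtime.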
I reviewed the `.send_data(U16BEIter(&mut data.into_iter()))` call. The endianness specifier is pretty much useless; the driver really just wants to send some
After testing, a closure performs much better than an Iterator, probably due to dynamic dispatch. Iterator performance can be improved if we know the concrete type of the Iterator, but would require passing more types through the chain.
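As a sketch of the two API shapes being compared here (names are illustrative): a `&mut dyn Iterator` parameter forces an indirect call per item, whereas a generic closure is a concrete type the compiler can inline.

```rust
// Dynamic dispatch: one virtual `next()` call per item.
fn write_iter_dyn(out: &mut Vec<u8>, iter: &mut dyn Iterator<Item = u8>) {
    for b in iter {
        out.push(b);
    }
}

// Static dispatch: the closure monomorphizes, so calls can be inlined.
fn write_with_closure<F: FnMut() -> Option<u8>>(out: &mut Vec<u8>, mut next: F) {
    while let Some(b) = next() {
        out.push(b);
    }
}
```

Both produce the same byte stream; the difference is purely in how the calls are dispatched, which is what the performance observation above is about.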
Yeah, but I very much prefer a clean separation from the get-go. Maybe something like an established SPI trait would be a better way to do the proofing.
It does care. The display driver only knows which format the display expects; it does not know which format the
No, it is not useless. If your system is little-endian but you need the data in big-endian format (which is very common nowadays), then you will need to do the conversion somewhere. You can either do it in each display driver, which is error prone (and you'd potentially have a superfluous byteswap there, because you could actually do the swap in hardware, or worse you'd have to swap back), or you run into other weird mismatches. We've been there, which is why we added it in the first place.
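A minimal illustration of this point, with a helper name of my own: pixel words sit in native order in MCU memory, and doing the big-endian conversion once at the interface boundary keeps it out of every display driver.

```rust
// Pixel words live in native order in the MCU's memory, but most displays
// expect big-endian on the wire. `to_be_bytes` is a byte swap on
// little-endian hosts and a no-op on big-endian ones; the caller never
// needs to know which. (`to_wire_be` is an illustrative name.)
fn to_wire_be(words: &[u16]) -> Vec<u8> {
    words.iter().flat_map(|w| w.to_be_bytes()).collect()
}
```

An interface that knows the target format can even delegate this swap to hardware (e.g. a DMA engine), which is exactly the flexibility being argued for.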
Done, implemented on existing
It is possible we are just thinking about different ways to accomplish the same thing. :) In the end though, the driver wants to send command

As for pixel format: from what I've seen, the pixel format is determined by the word size of the interface being used (8-bit parallel, 16-bit, SPI, which usually follows 8-bit). In this case, it's important that the display interface driver knows the interface's word size. This is how the pixel format detection/writing works here. In fact, one of the drivers you linked uses RGB565 encoding, and here is where it's encoding the pixel. While it works, it doesn't take into consideration differences in encoding if you are on a different interface.

The transmission differs between 8/16/18-bit modes, for example, and in some cases the actual pixel format does too. On the ILI9486, if you want to transmit RGB666 over the 8-bit interface, it takes 3 write cycles, and the 2 LSBs of each write are unused. But if you implement it on the 18-bit interface, you can send one whole pixel in one write cycle (filling all 18 bits). In the current system, how can you facilitate this?
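Concretely, the RGB666 difference described above could be sketched like this; the packing is an illustration based on the ILI9486 datasheet's 8-bit and 18-bit pixel formats, with a `u32` standing in for an 18-bit bus word:

```rust
/// 8-bit bus: three write cycles, each carrying one 6-bit component in the
/// top bits of the byte; the 2 LSBs of every byte are don't-care.
fn rgb666_over_8bit(r6: u8, g6: u8, b6: u8) -> [u8; 3] {
    [r6 << 2, g6 << 2, b6 << 2]
}

/// 18-bit bus: the whole pixel fits in a single write cycle.
fn rgb666_over_18bit(r6: u8, g6: u8, b6: u8) -> u32 {
    ((r6 as u32) << 12) | ((g6 as u32) << 6) | (b6 as u32)
}
```

The same logical pixel needs three bus cycles in one mode and one in the other, which is why the question of who knows the interface's word size matters.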
So, I'm not sure I 100% understand the concern here. When we are sending (u8, u16, u32) values around in our code, the underlying system endianness is abstracted from us. The system endianness is only relevant when you access data directly from memory, probably via some unsafe mechanism. Implementations of
That is completely irrelevant; we're not talking about bit order here, we're talking about endianness, i.e. how the bytes are ordered in datatypes >
At the moment we can't, there's only 8bit and 16bit support, but others can be added if necessary. Some SPI displays e.g. have a 9bit mode.
No, it is not abstracted. It is implicit. When you assemble pixel data in the MCU it's in native endian (often little-endian); the display expects it in a certain endian (often big-endian). When you send 16-bit values over an 8-bit interface, someone has to convert the data (or not). That's the whole reason for the

Please have a look at the ST7789 driver, which was our implementation playground for making the interface both correct and efficient (in fact we're hitting an
Yes, I understand: U16LE and U16BE are about byte order, and the calls to

I think we're at an impasse here, as we fundamentally think differently about how the abstraction should work.
Yes, internally the MCU uses whatever it uses, and that is irrelevant because pixel data will always be in native order. When you push it to the display you may have to change the byte order to accommodate the required data format (or not, which is the idea behind specifying what the data should be). This is not something the display driver should worry about; it should simply format the data as described in the datasheet and let the display interface know what it's supposed to be.
I agree.
Before we close this out, can you let me know how to implement RGB 6-6-6 on the driver side and support the encoding required for 8, 16, and 18-bit parallel interfaces with the current
18bit data needs a new set of enum values
Yeah, I assumed that would be the suggestion. But in the case of the 16-bit data bus carrying 18 bits/pixel, must the interface read ahead (in the

Also, for endianness in general: based on how you read bits in here, if you pass a
The whole point of this API is that it's up to the
This code is just an optimisation to not change pin state if it's the same as the previous state. That's not the whole 16bit value but whatever part of it you've put on the bus last. |
Yeah, I understand this, but then you are hard-coding the pixel format in the
I'm not talking about the changed state; I'm talking about how the bits are written out to the data lines (specifically, how each bit is checked to see if it's

I see that you are actually using

I'll keep this open as I may update this PR to still add "R/W" support, but with the current

Chris
No, the display driver provides the pixel data encoding; the

A usable datasheet will always specify the acceptable data format and how to stuff pixels into it:

Does your display actually offer a native 18-bit mode? If not, there's no need for an 18-bit data mode.
? The line you've highlighted checks whether the bit has changed from the previous byte that was clocked out; maybe you missed the XOR?
That would be very welcome.
Yeah, it does, although I don't have one of those displays in hand.
I was referring to
That's just to cater for the GPIO API: if the bit is set the GPIO is pulled high, if not it's pulled low. All the changes are clocked in at once when pulling WRX high. (display-interface/parallel-gpio/src/lib.rs, line 211, at 48079b7)
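A rough sketch of the change-detection logic being discussed (names are placeholders, not the crate's actual code): the new byte is XORed with the last byte clocked out, and only pins whose bit actually changed are touched.

```rust
// XOR the new byte with the last byte clocked out and only touch pins
// whose bit changed. `set_pin` stands in for the real GPIO calls; the
// caller would strobe WRX after this to clock the byte in.
fn update_bus(last: u8, next: u8, mut set_pin: impl FnMut(u8, bool)) -> u8 {
    let changed = last ^ next;
    for bit in 0..8 {
        if changed & (1 << bit) != 0 {
            // Pull the line high if the new bit is set, low otherwise.
            set_pin(bit, next & (1 << bit) != 0);
        }
    }
    next // becomes `last` for the following write
}
```

Skipping unchanged pins matters on slow GPIO APIs, since each pin write can cost more than the comparison.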
This change adds a `ReadWriteInterface` for displays that have both read/write capabilities. The interface is actually much different than the current one, so let me know what you think. This has been implemented in an ILI9486 driver I'm working on here: https://github.com/chrismoos/ili9486-driver/blob/master/src/gpio/gpio8.rs

Highlights for the change are:

- `DataFormat` moves from an enum (usable at runtime) to the implementations. In practice, when you wire up an implementation, you'll have a fixed mode from compile time (GPIO8/16, SPI, etc.), so implementors can focus on the actual formats they support rather than runtime handling of the enum.
- `read_stream` allows for efficient transfer (letting the chip stay selected), invoking a closure until the caller is done reading, which it signals to the interface with a `bool` return from the closure.
- `write_iter`, similar to `read_stream`, supports efficient transfer (pipelining) of a stream of bytes to the device.
- `Iterator` is provided automatically for `ReadWriteInterface`, allowing you to easily iterate over the interface to read bytes.
- `ReadWriteInterface` implementors must only implement the above 2 methods; the trait provides some default "helper" methods to read into a fixed buffer or write from a fixed buffer.

PS @therealprof I saw that SSD1306 does have read support as well, so I'm not sure which ones would be "write only". Regardless, this (the RW) could be combined into a super trait if you think that makes the most sense.
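Read together, the trait shape those highlights describe might look roughly like this; the method names and signatures are paraphrased from the description above, not copied from the PR:

```rust
// Rough sketch of the described trait shape; signatures are paraphrased
// from the PR description, not copied from it.
trait ReadWriteInterface {
    /// Keep the chip selected and hand bytes to the closure until it
    /// returns `false`.
    fn read_stream(&mut self, f: &mut dyn FnMut(u8) -> bool);

    /// Pipeline a stream of bytes out to the device.
    fn write_iter(&mut self, iter: &mut dyn Iterator<Item = u8>);

    // Default helpers built on the two required methods.
    fn read(&mut self, buf: &mut [u8]) {
        let mut i = 0;
        self.read_stream(&mut |b| {
            buf[i] = b;
            i += 1;
            i < buf.len() // keep reading until the buffer is full
        });
    }

    fn write(&mut self, buf: &[u8]) {
        self.write_iter(&mut buf.iter().copied());
    }
}
```

The key property is that implementors supply only the two streaming primitives, and the fixed-buffer conveniences come for free from the trait's defaults.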