For an embedded system project at my day job we've found a regression in v2.5.0 that is not present in v2.4.2.
The symptom is that data sometimes reads incorrectly after an lfs_seek, unless either a rewind, sync, or a second seek (to the same location!) is executed first.
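For illustration, this is roughly the access pattern involved (a hedged sketch, not the exact code from our test program; it assumes an already-mounted lfs_t and an open lfs_file_t):

```c
#include "lfs.h"

// Rough sketch of the pattern we hit (not the actual repro code): a plain
// lfs_file_seek followed by lfs_file_read sometimes returns bad data on
// v2.5.0, while repeating the seek (or calling lfs_file_sync / lfs_file_rewind
// first) avoids it.
static int read_at(lfs_t *lfs, lfs_file_t *file,
                   lfs_off_t off, void *buf, lfs_size_t len) {
    // the failing sequence is just this seek...
    if (lfs_file_seek(lfs, file, off, LFS_SEEK_SET) < 0) {
        return -1;
    }

    // ...workaround: seek to the same offset a second time before reading
    if (lfs_file_seek(lfs, file, off, LFS_SEEK_SET) < 0) {
        return -1;
    }

    // ...followed by this read
    return (lfs_file_read(lfs, file, buf, len) < 0) ? -1 : 0;
}
```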
We've put together a completely stand-alone reproduction case that demonstrates this with v2.5.0, shows that the bug goes away if you use several workarounds, and also demonstrates that this issue does not occur with v2.4.2. This example just uses memory as mock storage so this can run easily on a host PC, but we found this bug when targeting a real non-volatile memory device that uses these specific settings.
This is a zip file that includes the reproduction program along with a simple makefile, plus snapshots of LFS v2.5.0 and LFS v2.4.2 that it tests against.
LittleFS-Bug20220919.zip
This text file shows the output we get when running this test.
LittleFS-Bug20220919.log.txt
Some things we noticed right away are:
The error seems to always (or at least usually) occur when the previous file position is at certain specific values, like 252 (interesting because our block size is 256 and we're doing 4-byte reads in this test).
It seems to mostly happen when the current file position has a certain relationship to the previous file position; we suspect something to do with the caching system, but we haven't attempted to debug LittleFS itself at this point.
It appears to happen in "bursts". That is, if one read is bad, the next read is also likely to be bad. (Again, this makes us suspect something with caching.)
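For context, the mock storage in the repro is just a RAM array; below is a rough sketch of that kind of RAM-backed block device with the 256-byte block size used in this test (an illustration, not the actual code from the zip, and the block count is an example):

```c
#include <string.h>
#include "lfs.h"

// Illustrative RAM "flash": 256-byte blocks; the 64-block count is an
// example, not necessarily what the zipped repro uses.
#define MOCK_BLOCK_SIZE  256
#define MOCK_BLOCK_COUNT 64

static uint8_t mock_storage[MOCK_BLOCK_SIZE * MOCK_BLOCK_COUNT];

static int mock_read(const struct lfs_config *c, lfs_block_t block,
                     lfs_off_t off, void *buffer, lfs_size_t size) {
    memcpy(buffer, &mock_storage[block * MOCK_BLOCK_SIZE + off], size);
    return 0;
}

static int mock_prog(const struct lfs_config *c, lfs_block_t block,
                     lfs_off_t off, const void *buffer, lfs_size_t size) {
    memcpy(&mock_storage[block * MOCK_BLOCK_SIZE + off], buffer, size);
    return 0;
}

static int mock_erase(const struct lfs_config *c, lfs_block_t block) {
    memset(&mock_storage[block * MOCK_BLOCK_SIZE], 0xff, MOCK_BLOCK_SIZE);
    return 0;
}

static int mock_sync(const struct lfs_config *c) {
    (void)c;  // nothing to flush for RAM-backed storage
    return 0;
}
```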
Anyway, hope this helps you find and fix this bug. I've used LittleFS in the past and always found it very robust, but we always stress test 3rd-party code and finding this issue was quite a jarring experience. For now we're probably going to move back to v2.4.2 and we'll certainly let you know if we find any other issues. =)
I tried to recreate this in our system on hardware and wasn't successful. There were some slight differences, but the gist of it is (see the config sketch after this list):
Open and write a 512-byte file. I used sequential bytes instead of a random buffer.
page_size and prog_size are 0x100
block_size is 0x10000
cache_size is 0x100
lookahead_size is 16
metadata_max
read, prog, and lookahead buffers are all statically allocated.
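For reference, here is a sketch of how those settings map onto struct lfs_config. The block_count and block_cycles values are assumptions, the flash_* callbacks are placeholders for the real driver rather than code from my system, and metadata_max is left out because its value isn't stated above:

```c
#include "lfs.h"

// Statically allocated buffers, as described above.
static uint8_t read_buffer[0x100];
static uint8_t prog_buffer[0x100];
static uint8_t lookahead_buffer[16];

// Placeholder flash driver hooks (assumed names, not from the actual system).
extern int flash_read(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, void *buffer, lfs_size_t size);
extern int flash_prog(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, const void *buffer, lfs_size_t size);
extern int flash_erase(const struct lfs_config *c, lfs_block_t block);
extern int flash_sync(const struct lfs_config *c);

static const struct lfs_config cfg = {
    .read  = flash_read,
    .prog  = flash_prog,
    .erase = flash_erase,
    .sync  = flash_sync,

    .read_size      = 0x100,   // page/prog size of 0x100
    .prog_size      = 0x100,
    .block_size     = 0x10000,
    .block_count    = 256,     // assumed, not stated above
    .block_cycles   = 500,     // assumed, not stated above
    .cache_size     = 0x100,
    .lookahead_size = 16,

    .read_buffer      = read_buffer,
    .prog_buffer      = prog_buffer,
    .lookahead_buffer = lookahead_buffer,
};
```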
@lrodorigo is correct. @wjl, sorry this flew under my radar, thanks for creating the issue.
I've put up a PR to fix this here: #1058. I believe this fixes the issue you and @lrodorigo are seeing. Feel free to let me know if I'm missing anything.