Conversation

geky (Member) commented Nov 26, 2019

This is a fork of #185:

-> call lfs_dir_fetchmatch with ftag=-1 in order to set the invalid bit
   and never let the function match a dir

   Warning on MDK v5.27.1
   Found by geniusgogo

Related to #184
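
For reference, the change boils down to the fetch call below. This is a minimal sketch of the call site; treat the exact signature and argument order as assumptions rather than the verbatim littlefs API:

```c
// sketch: fetch a metadata block without ever matching a tag.
// ftag = -1 has the invalid bit (the top bit) set, and tags parsed
// off disk always have that bit cleared, so no tag can match and
// the NULL callback is never invoked
static int lfs_dir_fetch(lfs_t *lfs,
        lfs_mdir_t *dir, const lfs_block_t pair[2]) {
    return (int)lfs_dir_fetchmatch(lfs, dir, pair,
            (lfs_tag_t)-1, (lfs_tag_t)-1, NULL, NULL, NULL);
}
```
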
thrasher8390 commented

@geky, do you have a timeline as to when 2.1.5 "next patch" will be released?

geky (Member, Author) commented Jan 30, 2020

This PR is on v2.1.4, which is already released. Or are you referring to a different set of changes?
https://github.com/ARMmbed/littlefs/releases/tag/v2.1.4

thrasher8390 commented Feb 5, 2020

Thanks for all of your help through this @geky.

I'm trying to understand exactly what caused this issue and was hoping you could help me out.

On lfs 2.0.4 (which had this bug), we saw that during lfs_dir_fetchmatch a tag was being read out as 0, which led to a call through a NULL callback pointer and a Hard Fault on our product (no bueno :( ).

We recently updated to lfs 2.1.4 and have confirmed that we no longer call the NULL callback, so our device does NOT Hard Fault :) Yay for bug fixes!

My question: is the tag value of 0 expected? Since we updated to lfs 2.1.4, files that were previously failing during writes are now succeeding, which was not my expectation.
Can you help me understand the tag value? I'm trying to rule out NVMEM corruption.

geky (Member, Author) commented Feb 11, 2020

So, I've looked into this more, and it's not an issue caused by a faulty block device.

Normally, the tag value of 0 is invalid. This should never appear on disk in a valid metadata-block.

But the key phrase there is valid metadata-block. It's possible to run into a tag value of 0 in an invalid metadata-block. The problem is that lfs_dir_fetchmatch runs callbacks on tags before it decides whether or not the block is valid, so we could end up calling a NULL callback pointer on an invalid tag in an invalid block.
https://github.com/ARMmbed/littlefs/blob/ce2c01f098f4d2b9479de5a796c3bb531f1fe14c/lfs.c#L919-L920
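
To make the ordering concrete, the scan loop looked roughly like this before the fix (a simplified sketch, not the verbatim code; fmask, ftag, cb, and disk are illustrative stand-ins):

```c
// sketch: for each tag parsed while scanning a metadata block...
if ((fmask & tag) == (fmask & ftag)) {
    // hazard: this ran before the commit's CRC confirmed the block
    // was valid, so on a bad block a garbage tag (like 0) could
    // match the filter and invoke a NULL cb
    int res = cb(data, tag, &disk);
    if (res < 0) {
        return res;
    }
}
```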

Why would littlefs be trying to fetch an invalid metadata-block? This is a normal part of littlefs's operation. See, littlefs stores metadata in "metadata-pairs": two metadata blocks of which only one needs to be valid. This lets us erase one block in the pair without temporarily corrupting the filesystem (and, if we lose power mid-erase, permanently corrupting it).

To decide which block to use, littlefs just tries to fetch both, choosing the block that 1. is valid, and 2. has the largest revision count.
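
A sketch of that selection, with fetch_block, block_is_valid, and revision_of as hypothetical helpers standing in for the real fetch logic:

```c
// sketch: choose which block of a metadata pair to use
int chosen = -1;
for (int i = 0; i < 2; i++) {
    if (!block_is_valid(fetch_block(pair[i]))) {
        continue; // skip the half-written or freshly-erased block
    }
    if (chosen < 0 || revision_of(pair[i]) > revision_of(pair[chosen])) {
        chosen = i; // prefer the larger revision count
    }
}
// chosen < 0 means neither block is valid: the pair is corrupted
// (the real code compares revision counts with wraparound in mind)
```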

So, while tag 0 is invalid, it can in theory appear in a bad metadata-block. The tag -1, however, isn't even possible to encode: the top bit is used for another purpose and gets masked out before the lfs_dir_fetchmatch matching logic ever sees it.
https://github.com/ARMmbed/littlefs/blob/ce2c01f098f4d2b9479de5a796c3bb531f1fe14c/lfs.c#L819
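
Here's a self-contained toy model of that property (the bit layout is simplified for illustration; only the top "invalid" bit matters for the argument):

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t lfs_tag_t;

// simplified model: the top bit of a tag is the invalid bit, and
// every tag that survives parsing has it cleared
static int tag_matches(lfs_tag_t fmask, lfs_tag_t ftag, lfs_tag_t tag) {
    return (fmask & tag) == (fmask & ftag);
}

int main(void) {
    lfs_tag_t ftag  = (lfs_tag_t)-1; // invalid bit set: unencodable
    lfs_tag_t fmask = (lfs_tag_t)-1; // compare every bit

    // even the pathological tag 0 from a bad block differs from
    // ftag in (at least) the invalid bit, so nothing ever matches
    lfs_tag_t parsed[] = {0x00000000, 0x20000411, 0x7fffffff};
    for (unsigned i = 0; i < 3; i++) {
        printf("tag 0x%08x matches? %d\n",
                (unsigned)parsed[i],
                tag_matches(fmask, ftag, parsed[i]));
    }
    return 0;
}
```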

Hopefully that explains a bit of what's happening. It's counterintuitive, but it's expected for lfs_dir_fetchmatch to handle invalid tags. After lfs_dir_fetchmatch, tag 0 should not appear in the system, though this is more a courtesy than a hard rule.
