
A new non-volatile storage system #77930

Open
wants to merge 4 commits into base: main

Conversation

@rghaddab (Contributor) commented Sep 3, 2024

Implements RFC #77929

This is a new storage system designed especially to work on non-erasable devices, while also supporting erasable devices.

Some features described in the RFC are intentionally not implemented yet, to make the PR easier to review for contributors who are familiar with the NVS filesystem.

Features already included:

  • Simple Key-Value Storage (i.e. no file/folder abstractions)
  • 32-bit IDs
  • Metadata entries can also store small (1 to 8 bytes) data entries
  • Support for no-erase-required flash drivers (e.g. RRAM, MRAM)

Features in progress:

  • No limit on value length
  • Support for entries in multiple formats
  • CRC-24 for entries that require it (part of multiple format entries)
  • Efficiency improvements for the Settings subsystem
  • Efficiency improvements for the Secure Storage subsystem

@nordicjm nordicjm linked an issue Sep 4, 2024 that may be closed by this pull request
@butok (Contributor) commented Sep 4, 2024

There is no ZMS user documentation.

@rghaddab (Contributor, Author) commented Sep 4, 2024

There is no ZMS user documentation.

Yes, there is no documentation yet, as this could still change a lot from its initial proposal.
The PR is still in draft mode and some work is still in progress.
I will add some information about what is already included in this PR and what is coming next.

@rghaddab (Contributor, Author)

that comments on ZMS systems in the field that do not have enough space for a GC done ATE.

Actually, thank you for this review.
Indeed, this operation is not needed for ZMS, and we don't need this check, as a GC ATE should be present in all sectors.

I updated this in the last revision.

@Laczen (Collaborator) commented Oct 22, 2024

For the Settings subsystem, there will also be another upcoming optimization for the ZMS backend.

And this will be exactly where the problem of code duplication pops up. Every change or optimization you make to the ZMS settings backend will also need to be evaluated for the nvs settings backend, and if appropriate changed in both locations.

@rghaddab (Contributor, Author)

And this will be exactly where the problem of code duplication pops up. Every change or optimization you make to the ZMS settings backend will also need to be evaluated for the nvs settings backend, and if appropriate changed in both locations.

I won't change anything in the NVS backend for Settings.
NVS is a stable storage system. You are the code owner of it and you know that it is too risky to change things there.

@Laczen, ZMS is not a replacement for NVS; each of them has its own use cases and hardware requirements. Mixing them would make the final solution worse than either one.

@Laczen (Collaborator) commented Oct 22, 2024

I won't change anything in the NVS backend for Settings. NVS is a stable storage system. You are the code owner of it and you know that it is too risky to change things there.

I don't consider myself a code owner of NVS. I have been the main developer, and as such I know and understand its limitations. Anyone can propose changes to it, and if those changes improve the behavior and do not break compatibility or introduce possible issues, I will approve them. As a collaborator of the fs subsystem I feel responsible for helping PRs that are related to it. In some areas I will not take any initiative (e.g. POSIX-related stuff) because my knowledge there is too limited. If ZMS were merged, it would make me just as responsible for ZMS as for NVS, although I might not take any initiative where I consider my knowledge too limited.

When you are contributing to the settings backend you should feel the same responsibility: when you apply optimizations to a ZMS backend, you should also evaluate whether those changes can be used in NVS. This way the best solution for the Zephyr community can be guaranteed.

@rghaddab (Contributor, Author)

when you apply optimizations to a ZMS backend, you should also evaluate whether those changes can be used in NVS. This way the best solution for the Zephyr community can be guaranteed

I do not think that NVS and ZMS will be used the same way through the Settings subsystem.
But of course, for obvious optimizations, I would modify both of them.
For functional optimizations, however, it is too risky to change NVS, as it is a stable storage system used by perhaps millions of devices.
There is no such risk in optimizing the ZMS backend.

@carlescufi previously approved these changes Oct 23, 2024
This adds the documentation for the Zephyr Memory Storage system.

Signed-off-by: Riadh Ghaddab <rghaddab@baylibre.com>
rlubos pushed a commit to rlubos/sdk-zephyr that referenced this pull request Oct 23, 2024
List of added tests:
- zms.test_delete
- zms.test_zms_cache_collission
- zms.test_zms_cache_gc
- zms.test_zms_cache_hash_quality
- zms.test_zms_cache_init
- zms.test_zms_corrupted_sector_close_operation
- zms.test_zms_corrupted_write
- zms.test_zms_full_sector
- zms.test_zms_gc
- zms.test_zms_gc_3sectors
- zms.test_zms_gc_corrupt_ate
- zms.test_zms_gc_corrupt_close_ate
- zms.test_zms_mount
- zms.test_zms_write

Upstream PR: zephyrproject-rtos/zephyr#77930

Signed-off-by: Riadh Ghaddab <rghaddab@baylibre.com>
(cherry picked from commit 3f31a7db7fe70ee92a0fe63c07fb7704abbb52b3)
(cherry picked from commit 04283c6)
rlubos pushed a commit to rlubos/sdk-zephyr that referenced this pull request Oct 23, 2024
This adds a user application that shows the usage of ZMS
The sample app shows three main functions of ZMS:
- read/write/delete key/value pairs
- fill all storage and delete it
- calculate free remaining space

Upstream PR: zephyrproject-rtos/zephyr#77930

Signed-off-by: Riadh Ghaddab <rghaddab@baylibre.com>
(cherry picked from commit 9106be03f65f25e5d32920bf1443f05e43a54731)
(cherry picked from commit 721997c)
rlubos pushed a commit to rlubos/sdk-zephyr that referenced this pull request Oct 23, 2024
This adds the documentation for the Zephyr Memory Storage system.

Upstream PR: zephyrproject-rtos/zephyr#77930

Signed-off-by: Riadh Ghaddab <rghaddab@baylibre.com>
(cherry picked from commit 7cc32ad6b45548b7173cbfa7b98d7bdff986f671)
(cherry picked from commit 1c85337)
rlubos pushed a commit to rlubos/sdk-zephyr that referenced this pull request Oct 23, 2024
ZMS is the abbreviation of Zephyr Memory Storage.
It is a storage system developed especially to target non-erasable devices.

The new memory storage system inherits multiple features from the
NVS storage and introduces new ones:
* Inherited features :
 - light key-value based storage
 - cache for entries
 - Wear Leveling of flash memory
 - Resilience to power failures
* New features :
 - cycle counter for non erasable devices (instead of erase emulation)
 - Keys up to 32-bit
 - Built-in support of CRC32 for data
 - Small size data (<= 8 bytes) integrated within entries

Upstream PR: zephyrproject-rtos/zephyr#77930

Signed-off-by: Riadh Ghaddab <rghaddab@baylibre.com>
(cherry picked from commit 0feaf588f9628e4279001eb390ac07ae1e818f9e)
(cherry picked from commit 14c65e3)
@de-nordic de-nordic added the DNM This PR should not be merged (Do Not Merge) label Oct 23, 2024
@de-nordic (Collaborator) commented Oct 23, 2024

@Laczen Following the Zephyr contribution process, described by Contributor Expectations, chapter PR Technical Escalation, your Request for Change (#77930 (review)) is discarded for the following reasons:

@Laczen, in your "Request for Changes" you have not pointed out any changes or given assistance to the author of this PR that could be followed to improve the PR.

The PR here does not remove NVS, nor does it take architectural ownership of it from you.

The PR here provides a parallel solution bringing improvements that would make NVS-style storage work, or work better, for users with devices so far not supported.

You have also stated multiple times that NVS is EOL:

Community members tried to carry on support for new devices, for their customers, but were refused help and resorted to developing a derivative of NVS
that does not remove it, break it, or deny co-existence.

As far as I understand, users are still free to select either solution to support their devices, and they will have to understand the consequences of their choice and the support cost they will have to carry, as with all choices made for their software configuration.
I see no reason to lock the community out of having a solution that, although it may not fulfill everyone's requirements, will be good enough to provide support for other features they or their customers need.
In conclusion: @Laczen the NACK is discarded and, in accordance with PR Technical Escalation, I am marking the PR as DNM for one business day, for you to revisit your Change Request.

@tomi-font (Collaborator) left a comment


Had a look at the interface header (zms.h). Pretty much all my comments are minor (to different extents). Some comments apply to multiple occurrences of the same thing in that file. Feel free to do what you want with most of them, they're mostly just nitpicks.

Comment on lines +1 to +6
/* ZMS: Zephyr Memory Storage
*
* Copyright (c) 2024 BayLibre SAS
*
* SPDX-License-Identifier: Apache-2.0
*/

I think typically we rather have the copyright and license first, so something like:

Suggested change
/* ZMS: Zephyr Memory Storage
*
* Copyright (c) 2024 BayLibre SAS
*
* SPDX-License-Identifier: Apache-2.0
*/
/* Copyright (c) 2024 BayLibre SAS
*
* SPDX-License-Identifier: Apache-2.0
*
* Zephyr Memory Storage (ZMS)
*/

Comment on lines +10 to +14
#include <zephyr/drivers/flash.h>
#include <sys/types.h>
#include <zephyr/kernel.h>
#include <zephyr/device.h>
#include <zephyr/toolchain.h>

minor: Put all the <zephyr/*.h> includes together?

#endif

/**
* @brief Zephyr Memory Storage (ZMS)

I think that you don't need that brief when setting the name of the group in the defgroup line (as it doesn't give any additional information).


/**
* @brief Zephyr Memory Storage Data Structures
* @defgroup zms_data_structures Zephyr Memory Storage Data Structures

Could then just use ZMS for simplicity (as this belongs to the zms group)?

*/

/**
* @brief Zephyr Memory Storage File system structure

Missing a capitalized S (System or Structure) for consistency.

*
* @return Number of bytes free. On success, it will be equal to the number of bytes that can
* still be written to the file system.
* Calculating the free space is a time consuming operation, especially on spi flash.

Suggested change
* Calculating the free space is a time consuming operation, especially on spi flash.
* Calculating the free space is a time-consuming operation, especially on SPI flash.

*
* @return Number of free bytes.
*/
size_t zms_sector_max_data_size(struct zms_fs *fs);

Would it make sense to explicitly state current (/curr) or active in the function name?

@rghaddab (Contributor, Author)

Actually it should be zms_curr_sector_free_space

ssize_t zms_calc_free_space(struct zms_fs *fs);

/**
* @brief Tell how many contiguous free space remains in the currently active ZMS sector.

Suggested change
* @brief Tell how many contiguous free space remains in the currently active ZMS sector.
* @brief Tell how much contiguous free space remains in the currently active ZMS sector.

/**
* @brief Tell how many contiguous free space remains in the currently active ZMS sector.
*
* @param fs Pointer to the file system.

(minor) I like this version more. But it's also inconsistent with the functions declared above. Could be nice to have them all aligned.

* @note The garbage collector is called on the new sector.
*
* @warning This routine is made available for specific use cases.
* It collides with the ZMS goal of avoiding any unnecessary flash erase operations.

Suggested change
* It collides with the ZMS goal of avoiding any unnecessary flash erase operations.
* It collides with ZMS's goal of avoiding any unnecessary flash erase operations.

Labels
area: File System area: Samples Samples area: Storage Storage subsystem DNM This PR should not be merged (Do Not Merge)

Successfully merging this pull request may close these issues.

New non-volatile storage system