
Support erasure codes in object service #526

Open
alexvanin opened this issue May 17, 2021 · 3 comments
Labels
discussion Open discussion of some problem I1 High impact S1 Highly significant U4 Nothing urgent

Comments

@alexvanin
Contributor

No description provided.

@alexvanin
Contributor Author

alexvanin commented Feb 11, 2022

Erasure codes can be implemented in containers with a REP 1 policy. A single replica makes little sense in terms of the netmap placement algorithm on its own, so it can signal to the node or the client that the objects in this container are split with an erasure encoding scheme. The parameters of that scheme may be stored in container attributes.

The uploading/downloading scheme will be different. During payload split, we create new objects with the actual payload and parity data. Those objects may be linked the same way as they are linked now, with child links and a zero-object. All of these objects are stored in a single copy, as the REP 1 placement rule prescribes.

@roman-khimov roman-khimov added U4 Nothing urgent S1 Highly significant I1 High impact and removed triage labels Dec 21, 2023
@roman-khimov
Member

Doing it per regular object implies splitting it into many smaller parts plus parity, which can be done, but then there are questions:

  • one node can have many disks, and we distribute objects per node (this can be mitigated by running multiple nodes per machine)
  • a perfect part-to-disk match cannot be achieved; multiple parts can be written to the same node/disk
  • a disk failure then means losing part of the object, and something has to recreate it
  • expanding the cluster can't affect old objects, and it's not clear how new ones are going to be handled
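The part-collision concern above can be made concrete with a quick estimate. This is a simplification: it assumes each part is placed on one of n nodes uniformly and independently, which real HRW-based placement is not, so treat the numbers as an illustration only.

```go
package main

import "fmt"

// collisionProb returns the probability that at least two of p object
// parts land on the same node, under uniform independent placement
// over n nodes (a birthday-problem estimate, not real NeoFS placement).
func collisionProb(p, n int) float64 {
	noCollision := 1.0
	for i := 0; i < p; i++ {
		noCollision *= float64(n-i) / float64(n)
	}
	return 1 - noCollision
}

func main() {
	// 12 data + 4 parity parts over a 20-node cluster: under this model
	// some node almost certainly holds more than one part.
	fmt.Printf("P(collision, 16 parts, 20 nodes) = %.4f\n", collisionProb(16, 20))
}
```

Even before considering per-node disks, two parts sharing a node means one failure domain can take out more than one shard, weakening the code's fault-tolerance guarantee.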

@roman-khimov
Member

It can also be inefficient for small (e.g. 1K) objects.
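A rough back-of-the-envelope calculation shows why. The per-object header size below is a hypothetical figure chosen for illustration, not a measured NeoFS value:

```go
package main

import "fmt"

// ecStored estimates total bytes stored when a payload is split into
// k data shards plus m parity shards, each carried as a separate
// object with its own header of headerSize bytes (assumed value).
func ecStored(payload, k, m, headerSize int) int {
	shardLen := (payload + k - 1) / k
	return (k + m) * (shardLen + headerSize)
}

func main() {
	const payload, header = 1024, 500 // 1K object, assumed header size
	ec := ecStored(payload, 12, 4, header) // 12+4 erasure coding
	rep := 3 * (payload + header)          // REP 3 replication, for comparison
	fmt.Printf("EC(12+4): %d bytes (%.1fx payload), REP 3: %d bytes (%.1fx)\n",
		ec, float64(ec)/payload, rep, float64(rep)/payload)
}
```

For a 1K payload the 86-byte shards are dwarfed by per-object metadata, so under these assumptions erasure coding stores more bytes than plain REP 3 replication while giving weaker availability.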


4 participants