
Proposal: lakeFS hard-delete #4015

Merged
merged 11 commits into from
Oct 23, 2022

Conversation


@N-o-Z N-o-Z commented Aug 28, 2022

Closes #1933 (but not really...)

Design proposals document for in-house hard-delete in lakeFS

@N-o-Z N-o-Z added the proposal label Aug 28, 2022
@N-o-Z N-o-Z self-assigned this Aug 28, 2022
@N-o-Z N-o-Z added the exclude-changelog PR description should not be included in next release changelog label Aug 28, 2022

N-o-Z commented Aug 28, 2022

More or less hit a brick wall with this 😅
Please see the summary of all the proposal attempts, and please share any ideas you have.

@arielshaqed arielshaqed (Contributor) left a comment

I think we need some more detail on the precise sequence of staging. I think delete object is actually the hardest operation to get right.

The following assumptions apply to all the suggested proposals:
1. The copy operation must be changed and implemented as a full copy.
2. lakeFS is not responsible for retention of data outside the repository path (data ingest).
3. An online solution is not bulletproof and will require a complementary external solution to handle edge cases.
Contributor

Also for backfill for existing installations.

Member Author

Added

### Design

Objects will be stored in a path relative to the branch and staging token. For example: file `x/y.z` will be uploaded to path
`<bucket-name>/<repo-name>/<branch-name>/<staging-token>/x/y.z`.
Contributor

We've previously considered this and similar proposals. The blocker has always been how to stage tombstones for deletion of objects (we never figured out how to do it; that does not mean it is impossible!). So we have to detail how deletion works in this proposal.

Member Author

I couldn't come up with any sufficient solution to this problem. The only option I thought of is adding a metadata entry on the object which marks it as deleted

Contributor

I think this is an important issue to resolve. (I know it's hard because AFAIR we've had several rounds on it.)

Adding metadata doesn't work because then you need to get actual data for the object to know how to handle it. E.g. think of listing, suddenly it has to HEAD every object in the listing. So your only options are to use data available in ListObjectsV2, where every object is represented by an Object. AFAICT your only chance there is an ETag -- if you declared some sequence of bytes "The Deleted Object" then you could compute its ETag and try to upload that whenever you wanted to delete an object. I am not a fan of having fixed "forbidden" strings, however -- they're exactly the kind of thing people like me end up wanting to store inside actual objects.
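For illustration, the ETag idea above relies on the fact that, for non-multipart S3 uploads, an object's ETag is the hex MD5 of its body, so one fixed sentinel payload yields one well-known ETag that listing code can match without a HEAD per object. A minimal sketch; the payload and names are hypothetical, not anything lakeFS defines:

```python
import hashlib

# Hypothetical fixed payload representing "The Deleted Object".
TOMBSTONE_BODY = b"lakefs-tombstone-v1"

# For non-multipart S3 uploads the ETag is the hex MD5 of the body,
# so every tombstone written this way shares a single known ETag.
TOMBSTONE_ETAG = hashlib.md5(TOMBSTONE_BODY).hexdigest()

def is_tombstone(listing_entry: dict) -> bool:
    """Decide deletion from ListObjectsV2 metadata alone (no HEAD/GET)."""
    return listing_entry.get("ETag", "").strip('"') == TOMBSTONE_ETAG
```

This also makes the stated objection concrete: any user object whose body happens to be exactly the sentinel bytes would be indistinguishable from a deletion marker.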

1. Start write on staged object
2. Start Commit
3. Commit flow writes current object to committed tables
4. Write to object is completed - changing the data after it was already committed
Contributor

I know I was among those to voice this concern, but I am not sure that it is warranted. At some point after writing the staged object the writer has to stage it. It will discover that its staging token is no longer valid. Now it needs to retry by copying the object over to the new staging token.

AFAIU this gives:

  • Incorrect metadata: if I upload twice, I can end up with the new object data but the old object metadata.
  • Correct data, at the price of repeated copy operations when writing with concurrent commits. (We would hope that S3 copy will be faster than S3 upload.)
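The retry described above can be sketched as follows; `get_staging_token` and `upload` are hypothetical stand-ins for the lakeFS metadata call and the object-store write, not real APIs:

```python
# Illustrative writer loop for a possibly-invalidated staging token:
# write under the current token's path, then re-check the token before
# staging; on a concurrent commit, re-write (or server-side copy) the
# object under the new token and try again.
def write_with_retry(store, branch, key, data, max_retries=3):
    for _ in range(max_retries):
        token = store.get_staging_token(branch)
        physical = f"{branch}/{token}/{key}"
        store.upload(physical, data)
        # Stage only if the token is still current.
        if store.get_staging_token(branch) == token:
            return physical
    raise RuntimeError("too many concurrent commits")
```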

Member Author

The problem is that the set operation will happen regardless of whether the first upload succeeded. It means that we write to the staging token (in this case, overwrite the object at the physical address). When the writer discovers the staging token has changed, we simply retry the operation with the new staging token. The new data will then be written under both the old token and the new token. This is something we couldn't quite find a way to deal with in the new KV design.

4. Write to object is completed - changing the data after it was already committed

### Opens
1. Solve the blocker - how to prevent data modification after commit?
Contributor

AFAICT this is actually correct for data. However we might have incorrect metadata for the commit.

Member Author

I think that's just a different perspective to the same problem 😄

Comment on lines 89 to 93
2. Reference counter exists:
1. if counter == 1
1. Hard-delete object
2. if counter > 1
1. Decrement counter
Contributor

This is unsafe, you need to decrement-and-compare atomically. Otherwise we might end up deleting the object in the face of a concurrent LinkPhysicalAddress.

Member Author

True - another option:

  1. Decrement counter
  2. if counter == 0 -> hard delete

In all other flows, read the counter first; if counter < 1, treat the file as deleted.
WDYT?
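The atomicity concern can be made concrete with a decrement-and-compare that is atomic by construction. In this sketch an in-process lock stands in for the KV store's conditional update; none of this is lakeFS code:

```python
import threading

class RefCounter:
    """Atomic decrement-and-compare for physical-address references.
    A real KV store would use a conditional update (SetIf) instead of
    an in-process lock, but the invariant is the same: the decrement
    and the zero-check happen as one step."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}

    def incr(self, addr):
        with self._lock:
            self._counts[addr] = self._counts.get(addr, 0) + 1

    def decr(self, addr):
        """Return True iff the count dropped to zero (safe to hard-delete)."""
        with self._lock:
            n = self._counts.get(addr, 0) - 1
            self._counts[addr] = n
            return n == 0
```

A separate read-then-delete (counter == 1, then delete) leaves a window in which a concurrent LinkPhysicalAddress can re-increment between the read and the delete, which is exactly the race described above.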


On Upload object to a branch, we will add a key in the references path with the physical address and current staging token and
mark it as staged.
On Delete object from a staging area, we will update the reference key with value `deleted`
Contributor

I think you will still need a reference counter (fortunately that's possible on KV!). Consider concurrently re-linking a staged object and deleting its old name.

Member Author

I think reference counting is inferred in this solution: it is the aggregation of all the references to a physical address which are not in state deleted.

1. Scan `reference` prefix (can use after prefix for load balancing)
2. For each `physical_address` read all entries
1. If found state == 'committed' in any entry
1. Delete all keys for `physical_address`, by order of state: deleted -> staged -> committed
Contributor

This seems worrying:

  1. I commit an object at physical path abcd during staging stg1. So I have committed at references/abcd/stg1.
  2. Concurrently:
    • Background delete runs and deletes this entry (the intent is to keep abcd, it was committed).
    • A copy operation stages abcd into staging stg2
      After both of these, I can have staged at references/abcd/stg2 and nothing (no key!) at references/abcd/stg1.
  3. Delete this object. Now references/abcd/stg2 is deleted.
  4. The next background delete deletes abcd, despite it being referenced by a commit.

@N-o-Z N-o-Z (Member Author) commented Aug 29, 2022

Background delete runs and deletes this entry (the intent is to keep abcd, it was committed).

This will not happen since we will not delete references which have at least one entry with state committed

Additionally, something that was discussed but omitted from the design (I will add it) is that the background delete operation is time based, i.e. a candidate for deletion is a physical address all of whose references are in state deleted and whose last-modified timestamp is older than 'x' (for example, a day).
This ensures that the physical address is no longer in use and has no chance of being referenced again.
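The candidate check described above (all references deleted, plus a grace period) can be sketched like this; the reference shape and the one-day window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(days=1)  # illustrative retention window ("x")

def is_delete_candidate(references, now=None):
    """A physical address is deletable only when every reference to it is
    in state 'deleted' AND the newest reference is older than the grace
    period, so an in-flight operation still has time to re-link it."""
    now = now or datetime.now(timezone.utc)
    if not references:
        return False
    if any(ref["state"] != "deleted" for ref in references):
        return False
    newest = max(ref["last_modified"] for ref in references)
    return now - newest > GRACE
```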


### Design

Objects will be stored in a path relative to the branch and staging token. For example: file `x/y.z` will be uploaded to path
Contributor

Maybe it's possible to save it on the KV store instead under the same prefix, it can help make the flow commit atomic

Member Author

Saving it in the KV store would require changing the way we manage uncommitted data. Currently entries are saved per staging token; what you suggest requires saving them per physical address. This would break the current staging flows and incur a lot of overhead when reading from the staging area.

Store the reference counter in the blob's metadata, use the `patch object` API with the `ifMetagenerationMatch` conditional.
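The metageneration-conditional update quoted above is, in effect, a compare-and-swap loop: read the counter and metageneration, patch only if the metageneration is unchanged, retry on conflict. A minimal sketch, where `blob` is a hypothetical wrapper (`read_metadata`, `patch`) rather than the real GCS client, whose precondition parameter the quoted text names as `ifMetagenerationMatch`:

```python
def adjust_refcount(blob, delta, max_retries=5):
    """Atomically add `delta` to a counter kept in object metadata, using
    a metageneration precondition as the compare-and-swap guard."""
    for _ in range(max_retries):
        meta, generation = blob.read_metadata()  # (dict, metageneration)
        count = int(meta.get("refcount", "0")) + delta
        new_meta = dict(meta, refcount=str(count))
        # Fails if another writer bumped the metageneration in between.
        if blob.patch(new_meta, if_metageneration_match=generation):
            return count
    raise RuntimeError("too much contention")
```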


## 3. Tracking references in staging manager
Contributor

Why is base assumption 1 valid here? I don't think we need to change the way we copy in this solution.

@N-o-Z N-o-Z (Member Author) commented Sep 7, 2022

Yes, deeper examination of the copy-object flow shows this assumption is irrelevant for option 3.

1. A batch operation performed as part of an external process (GC)
2. An online solution inside lakeFS

**This document details the latter.**
@itaidavid itaidavid (Contributor) commented Sep 6, 2022

I might be missing some context here, but what are the reasons to not consider an offline GC?

Member Author

Offline GC is also being considered. We want to exhaust the options for implementing an online solution first; this document discusses only the online solutions.

2. `deleted` state can only be done on entries which are not `committed` and uses **SetIf**
3. `staged` state can only be done on entries which are not `committed` and uses **SetIf**

### Flows
Contributor

How about branch deletion, when committed objects are left unreferenced?
Does the ref count cover it too?

Member Author

Yes. On branch deletion, as part of the staging token drop, we scan over the token entries:
if an entry is a tombstone we delete the reference entry; otherwise we change the reference state to deleted.
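A minimal sketch of that drop flow; in-memory dicts stand in for the KV store, and all names are hypothetical:

```python
def drop_staging_token(entries, refs):
    """On reset/branch delete: tombstoned entries drop their reference key
    outright, everything else is marked 'deleted' so the background
    hard-deleter can pick it up later."""
    for entry in entries:
        addr = entry["physical_address"]
        if entry.get("tombstone"):
            refs.pop(addr, None)
        else:
            refs[addr] = dict(refs.get(addr, {}), state="deleted")
```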

Member Author

This is the same as reset branch

Contributor

I'm not sure I follow on this one. I'm asking about a branch with all data committed (no objects in the staging area, for that matter) where, say, a certain object is only referenced by this branch. What happens when the branch is deleted? The (now) unreferenced object is not accessible via any of the staging tokens, as these are gone.

Member Author

We do not track committed data references; once data is committed, the reference keys will eventually be deleted (only the reference keys, not the data) by the background process. Data which is committed is referenced by its commit, not its branch, and by design we do not delete these objects. This proposal discusses garbage collection of uncommitted data only; for committed data we have the external GC process and its retention policy. I hope that answers what was asked.

Contributor

Actually you did. Thanks

@itaidavid itaidavid (Contributor) left a comment

Great coverage of these options and, most importantly, of their imperfections.
Some questions inside.

@N-o-Z N-o-Z force-pushed the proposal/hard-delete-uncommitted branch 2 times, most recently from 71c4f60 to a959506 Compare September 7, 2022 15:11
#### LinkPhysicalAddress

Assume this API uses a physical address in the repo namespace
1. Add reference entry to the database
Contributor

What if there's a commit pointing to that physical address?

Member Author

In addition to the address, GetPhysicalAddress will provide a validation token which also has an expiry time (corresponding to the retention policy of the garbage collection). On GetPhysicalAddress we will create a reference entry with state deleted.
LinkPhysicalAddress will check the token's validity; if the token is valid, it will create the entry and add a reference for the physical address with state staged.
As part of these changes, I'm starting to think we should drop support for StageObject altogether.
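One way such an expiring validation token could work is as a signed claim over the physical address and an expiry timestamp. A sketch using HMAC; the key, TTL, and token format here are all hypothetical, not anything lakeFS specifies:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # hypothetical signing key
TOKEN_TTL = 24 * 3600            # illustrative; would match the GC retention window

def issue_token(physical_address, now=None):
    """GetPhysicalAddress returns the address plus a signed, expiring token."""
    expiry = int(now if now is not None else time.time()) + TOKEN_TTL
    msg = f"{physical_address}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expiry}:{sig}"

def validate_token(physical_address, token, now=None):
    """LinkPhysicalAddress accepts the link only while the token is valid."""
    expiry_s, sig = token.split(":")
    msg = f"{physical_address}:{expiry_s}".encode()
    want = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    fresh = (now if now is not None else time.time()) < int(expiry_s)
    return hmac.compare_digest(sig, want) and fresh
```

Because the token expires before the GC grace period elapses, an address whose reference is still in state deleted past the TTL can no longer be linked, which is what makes it safe to collect.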

4. Update branch commit ID

#### Reset / Delete Branch
When resetting a branch we throw away all the uncommitted branch data; this operation happens asynchronously via `AsyncDrop`.
Contributor

AsyncDrop is called after a successful commit. It cannot delete the objects

Member Author

Without going into implementation details, there's a very simple way to modify this function to take into account the scenario from which it was called.


1. If object staged
1. Read reference key
2. If not `committed`
Contributor

How can it be committed?

Member Author

A race between Commit and StageObject.


#### Commit

1. Mark all entries in staging area as `committed`
Contributor

That takes some time and is not easily revertible.

Member Author

I wouldn't say it's not easily revertible, but it is a process that requires iterating over all the staged entries and updating the reference state.

Member Author

I need to check further but it might be possible to perform the marking during the iteration over the changes as part of the commit process - this should save us some time

Contributor

It's a different partition. You would be replacing 2 iterations over the staging area with a single iteration that does 2 things. I don't think it would optimize the runtime.

Contributor

Or the ability to revert easily.

@arielshaqed arielshaqed (Contributor) left a comment

Thanks! Historically being able to represent deleted objects in staging has been the breaker for many similar proposals. I think that we should definitively resolve the representation of deleted objects before proceeding.

That said, I am not sure why "online hard-delete" has to depend on "no-KV staging contents" -- an alternate way forward might be to find a formulation that does not depend on keeping the staging catalogue on S3.


<repo>/staging/deleted/<staging_token>
<repo>/staging/committed/<commit_id> - list(staging tokens)

### Flows
Contributor

Where is the Commit flow?


* Upload will add reference to physical address as before
* Delete object will not remove the reference
* References to objects are kept as long as staging token is still 'active'
Contributor

Please define active


## 4. Staging Token States

Track state on the staging token instead of objects - but keep tracking references (without state)
Contributor

Not sure I understand - now we only keep a reference entry, for staged?

N-o-Z and others added 6 commits September 15, 2022 08:47
Co-authored-by: itai-david <90712874+itai-david@users.noreply.github.com>
Co-authored-by: itaiad200 <itaiad200@gmail.com>
@N-o-Z N-o-Z force-pushed the proposal/hard-delete-uncommitted branch from a38f79e to 5faae53 Compare September 15, 2022 05:47
@arielshaqed arielshaqed (Contributor) left a comment

Thanks!

IIUC we are only considering option 4 (and maybe 5). Can we remove the other options (and rephrase 4 without 3)? I think it would help clarify the proposal.

Comment on lines 237 to 238
key = <repo>/staging/committed/<physical_address>
value = list(staging tokens)
Contributor

Not sure I understand: We keep every committed physical address? Isn't this going to be huge?

Member Author

Eventually committed references will be deleted by the background job


#### Stage Object
Allow stage object operations only on addresses outside the repository namespace
lakeFS will not manage the retention of these objects
Contributor

AFAICT the current LakeFSFileSystem uses stageObject for uploads. Does this mean that users will have to upgrade all old clients?

Member Author

The link you provided, and what I saw myself (though I'm not savvy with our Java code), is for renameObject (move).
For that purpose we should use CopyObject instead.
Short answer: yes - if and when these changes are implemented, the old clients' renameObject will no longer work.

@N-o-Z N-o-Z added the team/versioning-engine Team versioning engine label Sep 28, 2022

#### Move operation
1. Get entry
2. "Lock" entry (`SetIf`)
Contributor

What is "Lock"? How is it implemented, how is it released? How is it treated by other operations?

Member Author

Added more information about the lock. Operations which involve the locking mechanism are explicitly addressed. Operations that are not mentioned are agnostic to the lock. (If you think I missed any, please tell me.)
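The SetIf-based entry "lock" under discussion can be sketched with an in-memory KV store: the lock is taken by swapping the entry for a locked variant, conditional on the entry being unchanged since it was read. `KV` and `try_lock` are illustrative stand-ins, not lakeFS code:

```python
class KV:
    """Minimal in-memory KV with SetIf (compare-and-swap)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def set_if(self, key, new, expected):
        # Atomic in a real KV store; single-threaded here for illustration.
        if self._data.get(key) != expected:
            return False
        self._data[key] = new
        return True

def try_lock(kv, key):
    """Return the locked entry on success, None if absent or already locked."""
    entry = kv.get(key)
    if entry is None or entry.get("locked"):
        return None
    locked = dict(entry, locked=True)
    return locked if kv.set_if(key, locked, entry) else None
```

Releasing is the mirror image (SetIf back to `locked=False`), which is also where the stale-lock problem discussed later comes from: a crashed holder never runs the release.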


### Flows

#### Move operation
Contributor

If it's optional can we start without?

Member Author

AFAIU this is a requirement. The move operation was created to support a main flow in our Spark client, which is currently performed at the metadata level (using copy + delete).
Per @arielshaqed, this is something we do not want to support via a full copy due to the performance implications.

Contributor

Note that Spark lakeFSFS and Hadoop S3A do not exactly require a "move". We currently use LinkPhysicalAddress followed by a delete. So it isn't atomic, but at least it requires only metadata operations.

Comment on lines 348 to 352
1. Get entry
2. "Lock" entry if exists (override scenario) (`SetIf`)
3. Write blob
4. Add staged entry (or update)
5. Delete physical address if previously existed
Contributor

If the physical path is predetermined by the logical path and the staging token, why do we need to delete anything?

Member Author

It is not - it was never suggested to couple the logical and physical paths in this proposal (#5). We only ensure we have a single logical address for each physical address.

#### 4. Stale lock problem
We use the lock mechanism to protect from race scenarios on Upload (override), Move and Delete. As such, an entry with a stale
lock will prevent us from performing any of these operations.
We can overcome it with a time based lock - but this might present additional challenges to the proposed solution.
Contributor

How can we not have it? Can we live with a system in which some entries are locked for good?

Member Author

IMHO this is not something we can live with. But we need to take into account that using a time-based lock might introduce new problems.

Contributor

So let's break it down then

Member Author

As I see it, we should first try to solve issue #1 (the Commit - Move race) before taking the time to solve this one.
The commit-move race seems to be our blocker.

Contributor

Time based locks are inherently unsafe unless your infrastructure has fairly strict guarantees on clocks. We need to ensure that at least AWS + EKS provide such a guarantee.

3. Delete staging entry
4. Hard delete object

### Races and issues
Contributor

Can we not support move and avoid all races? 👿

Member Author

See previous comment regarding Move

@arielshaqed arielshaqed (Contributor) left a comment

Thanks!

IIUC we've ruled out options 1-4. Can we perhaps get rid of them? It would make it easier (for me) to focus. 🔍





N-o-Z commented Oct 2, 2022

Thanks!

IIUC we've ruled out options 1-4. Can we perhaps get rid of them? It would make it easier (for me) to focus. 🔍

Thanks @arielshaqed
I feel that as long as we haven't settled on any proposal, we should keep all of the proposal information. If this is eventually rejected, we will benefit from having documentation of all the attempted solutions for future reference.

@wengchenyang1 commented

Thanks!
IIUC we've ruled out options 1-4. Can we perhaps get rid of them? It would make it easier (for me) to focus. 🔍

Thanks @arielshaqed I feel that as long as we haven't settled on any proposal, we should keep all of the proposal information. If this is eventually rejected, we will benefit from having documentation of all the attempted solutions for future reference.

Hello @N-o-Z, may I ask about the progress of proposal 5? We face a compliance issue because some files are not hard-deleted on S3.


N-o-Z commented Oct 17, 2022

@wengchenyang1,
All the suggested proposals in this document have inherent faults which make none of them a viable solution. We've decided, for the time being, to work in other directions on this problem.
We're currently working on designing an offline solution similar to the current GC process we have.
I'll make sure to tag you once the PR is opened.

@itaiad200 (Contributor) commented

@N-o-Z can we merge this proposal to rejected?


N-o-Z commented Oct 23, 2022

@N-o-Z can we merge this proposal to rejected?

Yes - let's do that

@N-o-Z N-o-Z dismissed stale reviews from itaiad200, itaidavid, and arielshaqed October 23, 2022 08:11

Moving proposal to rejected

@N-o-Z N-o-Z requested a review from itaiad200 October 23, 2022 08:12
@itaiad200 itaiad200 (Contributor) left a comment

Although this was rejected, the hard work on this proposal proved to us that an online, consistent, atomic & maintainable solution to this problem is not feasible. That outcome is better than implementing a design that cannot work :)


N-o-Z commented Oct 23, 2022

Although this was rejected, the hard work on this proposal proved to us that an online, consistent, atomic & maintainable solution to this problem is not feasible. That outcome is better than implementing a design that cannot work :)

@itaiad200, I still want to believe that in the future we will be able to find a working solution to this problem

@N-o-Z N-o-Z merged commit 5a93a62 into master Oct 23, 2022
@N-o-Z N-o-Z deleted the proposal/hard-delete-uncommitted branch October 23, 2022 09:08
Labels
exclude-changelog PR description should not be included in next release changelog proposal team/versioning-engine Team versioning engine

Successfully merging this pull request may close these issues.

Hard-delete objects that were never committed
7 participants