Partial enumeration #97

Closed
4 changes: 4 additions & 0 deletions hamt/hamt.go
@@ -369,6 +369,9 @@ func makeAsyncTrieGetLinks(dagService ipld.DAGService, linkResults chan<- format
lnk := links[idx]
lnkLinkType, err := directoryShard.childLinkType(lnk)

// THIS IS THE CODE THAT FETCHES FROM NETWORK AND DISCARDS IT.
// It uses the fetched node just to follow the DAG.

if err != nil {
return nil, err
}
@@ -634,6 +637,7 @@ func (s *childer) get(ctx context.Context, sliceIndex int) (*Shard, error) {
return s.loadChild(ctx, sliceIndex)
}

// THIS IS THE CODE THAT FETCHES FROM NETWORK AND STORES IT.
// loadChild reads the i'th child node of this shard from disk and returns it
// as a 'child' interface
func (s *childer) loadChild(ctx context.Context, sliceIndex int) (*Shard, error) {
29 changes: 29 additions & 0 deletions hamt/partial-enum.md
@@ -0,0 +1,29 @@
# Partial enumeration

This is a proposal, to be implemented on top of https://github.com/ipfs/go-unixfs/pull/94, that allows the performance optimization of not having to enumerate *all* links in the HAMT directory when evaluating a possible downgrade/switch to the basic single-node directory.

Note on terminology: "partial" may not be the most apt term here. It is just used as a contrast with the *full* enumeration of `ForEachLink` and `EnumLinksAsync`. The exact nature of the proposed enumeration still needs to be elucidated further in this proposal.

*tl;dr*: keep fetched nodes from the enumeration in the `Shard` structure (as if they had been fetched as a consequence of an `AddChild`/`RemoveChild` call, which are retained throughout the lifetime of the structure/directory).

## Trade memory for bandwidth

The key insight of this proposal is the trade-off between memory (MEM) consumed by the `HAMTDirectory`/`Shard` and the bandwidth (BW) used to fetch the different nodes comprising the full `HAMTDirectory` DAG during enumeration.
**Contributor:**

What memory are we trading off and why?

Aren't we just trying to minimize bandwidth by only fetching what we need? Then we have the option of trading off BW vs. latency by concurrently fetching more of the shards, perhaps overestimating the number we need to download.

**Contributor (author):**

The memory of holding the `Shard` structure in `childer.children` instead of the `links`:

go-unixfs/hamt/hamt.go, lines 540 to 541 in 12b9a86:

    links    []*ipld.Link
    children []*Shard


We want to minimize BW in the sense of not fetching *all* nodes at the same time, but only as many as needed to check whether we are still above the transition threshold (`HAMTShardingSize`). It is not so much the *aggregated* BW we want to minimize (in the long run we may end up traversing the whole DAG) but the "burst" consumed in a single `Directory` API call when evaluating the transition (ideally we need only as many nodes as it takes for their aggregated entries to add up to `HAMTShardingSize`). This burst of enumeration then need not be full (all nodes), hence "partial" enumeration. For this partial enumeration to be meaningful it must *not* overlap: we can't count the same node/entry twice toward the aggregated directory size. That means we need to track what was enumerated before, and this is where we need to expend MEM.
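
To make the bookkeeping concrete, here is a minimal sketch (not code from this PR) using simplified stand-in types instead of the real `Shard`/`childer` API: because every fetched child stays in its parent's slot, a size check can walk just the loaded portion of the tree, and each node, and therefore each entry, is counted exactly once.

```go
package main

import "fmt"

// Illustrative stand-ins only; the real types are hamt.Shard and its internal
// childer (links []*ipld.Link, children []*Shard).
type entry struct {
	name string
	size int
}

type shard struct {
	entries  []entry  // entries stored directly in this shard
	children []*shard // loaded child shards; a nil slot was never fetched
}

// loadedSize walks only the part of the DAG that is already in memory.
// Each cached shard is visited exactly once, so no entry is counted twice,
// which is what makes the partial size meaningful.
func loadedSize(s *shard) int {
	if s == nil {
		return 0
	}
	total := 0
	for _, e := range s.entries {
		total += len(e.name) + e.size
	}
	for _, c := range s.children {
		total += loadedSize(c)
	}
	return total
}

func main() {
	// A root with one loaded child and one child that was never fetched (nil).
	root := &shard{
		entries: []entry{{name: "a.txt", size: 100}},
		children: []*shard{
			{entries: []entry{{name: "b.txt", size: 200}}},
			nil,
		},
	}
	const hamtShardingSize = 256 // assumed threshold, in the spirit of HAMTShardingSize
	fmt.Println("partial size:", loadedSize(root), "above threshold:", loadedSize(root) > hamtShardingSize)
}
```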
**Contributor:**

Ideally we also don't want to fetch data that shouldn't be needed. For example, we want to minimize the cases where doing a modification while offline results in a "block not found" error even when we have the relevant data locally.


## HAMT/Shard structure

The most meaningful way to organize this MEM is to leverage the `hamt.Shard` implementation, which already stores the nodes fetched from the network (`loadChild`) in its internal `childer` structure. When we modify a shard/node we fetch it, modify it, and keep it in memory (for future edits). Independently of that, when we enumerate the links of the `HAMTDirectory` we walk the DAG fetching the same nodes but discard them afterward (using them only transiently as the path to follow to get to the leaves with the directory entries/links). (Both code paths are flagged with comments as part of this PR.)

## Implementation

The proposed implementation change is then rather straightforward: modify `makeAsyncTrieGetLinks` to store the fetched nodes (likely unifying some of its logic with `loadChild`).
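
As a rough illustration of that unification (names like `getOrLoadChild` and the `nodeGetter` interface are assumptions for this sketch, not the existing go-unixfs API): edits and enumeration would share a single path that checks the in-memory `children` slot before touching the DAG service and stores whatever it fetches.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// nodeGetter stands in for the small part of ipld.DAGService this sketch needs.
type nodeGetter interface {
	get(ctx context.Context, cid string) (*shard, error)
}

// shard is a simplified stand-in for hamt.Shard plus its childer.
type shard struct {
	childLinks []string // CIDs of children (stand-in for []*ipld.Link)
	children   []*shard // cache of loaded children, kept for the shard's lifetime
}

// getOrLoadChild is the hypothetical unified path: loadChild-style caching,
// usable by edits (AddChild/RemoveChild) and by (partial) enumeration alike.
func (s *shard) getOrLoadChild(ctx context.Context, i int, dserv nodeGetter) (*shard, error) {
	if i < 0 || i >= len(s.childLinks) {
		return nil, errors.New("child index out of range")
	}
	if c := s.children[i]; c != nil {
		return c, nil // already fetched earlier, by whichever caller
	}
	c, err := dserv.get(ctx, s.childLinks[i])
	if err != nil {
		return nil, err
	}
	// Store instead of discarding, unlike today's makeAsyncTrieGetLinks path.
	s.children[i] = c
	return c, nil
}

// mapGetter is a toy in-memory "DAG service" for the example.
type mapGetter map[string]*shard

func (m mapGetter) get(_ context.Context, cid string) (*shard, error) {
	n, ok := m[cid]
	if !ok {
		return nil, fmt.Errorf("block not found: %s", cid)
	}
	return n, nil
}

func main() {
	dserv := mapGetter{"child-1": {}}
	root := &shard{childLinks: []string{"child-1"}, children: make([]*shard, 1)}
	c, err := root.getOrLoadChild(context.Background(), 0, dserv)
	fmt.Println("got child:", c != nil, "err:", err, "cached:", root.children[0] != nil)
}
```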
**Contributor:**

Stored where, in memory? The data is already stored on disk when we fetch it.

**Contributor (author):**

Yes, in memory. I'm not making any assumptions about what is stored on disk under the DAG service interface (e.g., something getting GC'd or explicitly removed by another thread); should I?


We can then create a low-level internal function that fetches only a part of the DAG (canceling the context after a certain number of nodes are loaded; we don't really care which ones). After that we enumerate only the loaded nodes (so only a partial DAG, but enough to check the `HAMTDirectory` size against `HAMTShardingSize`). The "partial" in this proposal thus applies not so much to the enumeration itself as to the fetching process: we fetch only part of the DAG and then enumerate those nodes and compute their size.
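
A hedged sketch of what such a function could look like, again with simplified stand-in types rather than the real API: walk the DAG breadth-first through the caching path, stop once a node budget is spent (whichever nodes happen to arrive first), and then run the size computation over whatever ended up loaded. This sketch simply returns when the budget runs out instead of cancelling a context, which is the alternative a reviewer suggests below.

```go
package main

import (
	"context"
	"fmt"
)

// Stand-in types; the getter plays the role of a loadChild-style fetch that
// stores what it retrieves.
type shard struct {
	size     int      // bytes contributed by this shard's own entries
	links    []string // CIDs of children not yet loaded
	children []*shard // loaded children
}

type getter func(ctx context.Context, cid string) (*shard, error)

// loadPartial loads up to budget additional nodes, breadth-first, keeping each
// one in its parent's children slice. It does not care *which* nodes get
// loaded, only how many.
func loadPartial(ctx context.Context, root *shard, budget int, fetch getter) error {
	queue := []*shard{root}
	for len(queue) > 0 && budget > 0 {
		s := queue[0]
		queue = queue[1:]
		for i, cid := range s.links {
			if budget == 0 {
				return nil // budget spent: just stop, no context cancellation needed
			}
			if s.children[i] != nil {
				queue = append(queue, s.children[i]) // already cached, free to traverse
				continue
			}
			c, err := fetch(ctx, cid)
			if err != nil {
				return err
			}
			s.children[i] = c
			queue = append(queue, c)
			budget--
		}
	}
	return nil
}

// loadedSize sums only what is in memory (see the earlier sketch).
func loadedSize(s *shard) int {
	if s == nil {
		return 0
	}
	total := s.size
	for _, c := range s.children {
		total += loadedSize(c)
	}
	return total
}

func main() {
	blocks := map[string]*shard{
		"c1": {size: 300},
		"c2": {size: 400},
	}
	fetch := func(_ context.Context, cid string) (*shard, error) { return blocks[cid], nil }
	root := &shard{size: 10, links: []string{"c1", "c2"}, children: make([]*shard, 2)}
	_ = loadPartial(context.Background(), root, 1, fetch) // load at most one extra node
	fmt.Println("partial size so far:", loadedSize(root))
}
```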
**Contributor:**

> We can then create a low-level internal function that fetches only a part of the DAG

Is the order deterministic or non-deterministic? If non-deterministic then we'll end up doing more network requests than necessary since we won't utilize the cache of data we have on disk.

> canceling the context after a certain number of nodes are loaded; we don't really care which

nit: cancelling contexts is generally less good than just returning from your function, as you have to wait for the context cancellation to get checked.

> After that we enumerate only the loaded nodes

I'm not sure how annoying this is given the dagservice that you're handed.

**Contributor (author):**

> Is the order deterministic or non-deterministic?

Non-deterministic, whichever comes faster; from the PR:

> fetches only a part of the DAG (canceling the context after a certain number of nodes are loaded; we don't really care which).


> If non-deterministic then we'll end up doing more network requests than necessary since we won't utilize the cache of data we have on disk.

We are implicitly caching in memory by storing the retrieved `Shard`s instead of discarding them as the cited function does. The cache is the structure itself. (We could move it to disk if your assumption above holds, but you would still need to keep track of which shards you already have on disk; otherwise I'm missing something in your proposal, in which case please correct me.)


> I'm not sure how annoying this is given the dagservice that you're handed.

They are already in memory in the `childer` structure; you wouldn't go through the DAG service.


(Technically the enumeration/traversal of the loaded DAG is full: we always start it from scratch. We could cache some traversed paths, but at this iteration of the proposal that CPU/MEM cost is negligible compared to the BW cost we are trying to reduce.)

### Concurrency

For now the enumeration is blocking, as it was before. While we fetch, enumerate and aggregate sizes, no other operation is allowed on the `HAMTDirectory`. This makes sure the transition decision is always in sync with the current size of the directory (and not with some old enumeration that happened earlier). To be more performant we should be fetching in parallel with other directory operations from the moment the directory is loaded (this assumes that *every* time the user loads a directory it will eventually request its root node/CID to store a modified copy in the DAG service). That alternative would require a more careful design and the addition of a locking system, which is beyond the scope of this proposal.
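
For illustration only, a minimal sketch of the kind of serialization meant by "blocking" (the type and method names are assumptions, not the current `HAMTDirectory` API): the size evaluation holds the same lock as modifications, so the transition decision cannot race with a concurrent `AddChild`/`RemoveChild`.

```go
package main

import (
	"fmt"
	"sync"
)

// hamtDirectory is a stand-in for the real HAMTDirectory; only the locking
// pattern is the point of this sketch.
type hamtDirectory struct {
	mu   sync.Mutex
	size int // aggregated size of the entries seen so far (simplified)
}

const hamtShardingSize = 256 // assumed threshold

// addChild mutates the directory under the lock.
func (d *hamtDirectory) addChild(nameLen, entrySize int) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.size += nameLen + entrySize
}

// switchToBasic evaluates the downgrade while holding the same lock, so the
// decision cannot interleave with a concurrent modification.
func (d *hamtDirectory) switchToBasic() bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	// In the real proposal this is where the partial fetch and partial
	// enumeration would run; here we just compare the tracked size.
	return d.size < hamtShardingSize
}

func main() {
	d := &hamtDirectory{}
	d.addChild(len("a.txt"), 100)
	fmt.Println("can downgrade to basic directory:", d.switchToBasic())
}
```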