feat: chainstore: remove locking in index cache #10866

Status: Closed · wants to merge 1 commit
14 changes: 6 additions & 8 deletions chain/store/index.go
```diff
@@ -4,8 +4,8 @@ import (
 	"context"
 	"os"
 	"strconv"
-	"sync"
 
+	lru "github.com/hashicorp/golang-lru/v2"
 	"golang.org/x/xerrors"
 
 	"github.com/filecoin-project/go-state-types/abi"
```
```diff
@@ -27,8 +27,7 @@ func init() {
 }
 
 type ChainIndex struct {
-	indexCacheLk sync.Mutex
-	indexCache   map[types.TipSetKey]*lbEntry
+	indexCache *lru.ARCCache[types.TipSetKey, *lbEntry]
 
 	loadTipSet loadTipSetFunc
```

**Review comment (Contributor)**, on the `indexCache` field:

> We initially had an ARCCache here, which was dropped in favor of a simple map with a mutex for the reasons pointed out here: #10423 (review). I think we can keep the mutex but reduce the size of the critical section it covers.

**Review comment (Contributor)**, in reply:

> It could be replaced by a simple LRU cache if that is a concern.

```diff
@@ -37,8 +36,9 @@ type ChainIndex struct {
 type loadTipSetFunc func(context.Context, types.TipSetKey) (*types.TipSet, error)
 
 func NewChainIndex(lts loadTipSetFunc) *ChainIndex {
+	sc, _ := lru.NewARC[types.TipSetKey, *lbEntry](DefaultChainIndexCacheSize)
 	return &ChainIndex{
-		indexCache: make(map[types.TipSetKey]*lbEntry, DefaultChainIndexCacheSize),
+		indexCache: sc,
 		loadTipSet: lts,
 		skipLength: 20,
 	}
```
```diff
@@ -59,11 +59,9 @@ func (ci *ChainIndex) GetTipsetByHeight(ctx context.Context, from *types.TipSet,
 		return nil, xerrors.Errorf("failed to round down: %w", err)
 	}
 
-	ci.indexCacheLk.Lock()
-	defer ci.indexCacheLk.Unlock()
 	cur := rounded.Key()
 	for {
-		lbe, ok := ci.indexCache[cur]
+		lbe, ok := ci.indexCache.Get(cur)
 		if !ok {
 			fc, err := ci.fillCache(ctx, cur)
 			if err != nil {
```

**Review comment (Contributor)**, on the removed `ci.indexCacheLk.Lock()`:

> Do we need to have a transactional / consistent view of `indexCache` during this loop? Can there be other actors changing `indexCache` while this loop is running that will create unintended behavior?
>
> `fillCache` has a comment that we want to be holding `indexCacheLk` during the call: https://github.com/filecoin-project/lotus/blob/master/chain/store/index.go#L99
>
> There's a lot going on in these loops. I think we just need to make sure that there's always a consistent view of the data.

```diff
@@ -137,7 +135,7 @@ func (ci *ChainIndex) fillCache(ctx context.Context, tsk types.TipSetKey) (*lbEn
 		targetHeight: skipTarget.Height(),
 		target:       skipTarget.Key(),
 	}
-	ci.indexCache[tsk] = lbe
+	ci.indexCache.Add(tsk, lbe)
 
 	return lbe, nil
 }
```