
merkledb / sync -- remove TODOs #1718


Merged
2 commits merged on Jul 17, 2023
x/merkledb/db.go (1 change: 0 additions & 1 deletion)
@@ -566,7 +566,6 @@ func (db *merkleDB) GetChangeProof(
return i.Compare(j) < 0
})

// TODO: sync.pool these buffers
Author commented:

I don't think this would be worth the added cognitive load
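
For reference, here is a minimal sketch of the pooling the removed TODO was suggesting and that this PR decides against. It is hypothetical, not code from this PR: it assumes the x/merkledb package (where KeyChange is defined), and the helper names and starting capacity are illustrative only.

// Hypothetical sketch only, not part of this PR.
// Assumes the x/merkledb package, where KeyChange is defined.

import "sync"

// keyChangePool would let GetChangeProof reuse KeyChange slices
// across calls instead of allocating a fresh one each time.
var keyChangePool = sync.Pool{
	New: func() any {
		// Arbitrary starting capacity; callers re-slice to length 0.
		s := make([]KeyChange, 0, 256)
		return &s
	},
}

// getKeyChangeBuffer takes a reusable slice out of the pool.
func getKeyChangeBuffer() *[]KeyChange {
	return keyChangePool.Get().(*[]KeyChange)
}

// putKeyChangeBuffer resets the slice and returns it to the pool.
// Every consumer of the resulting proof would have to call this once
// the proof is no longer referenced; that extra lifetime tracking is
// the added cognitive load the comment above refers to.
func putKeyChangeBuffer(s *[]KeyChange) {
	*s = (*s)[:0]
	keyChangePool.Put(s)
}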

result.KeyChanges = make([]KeyChange, 0, len(changedKeys))

for _, key := range changedKeys {
x/sync/manager.go (5 changes: 0 additions & 5 deletions)
@@ -55,7 +55,6 @@ type workItem struct {
localRootID ids.ID
}

// TODO danlaine look into using a sync.Pool for workItems
func newWorkItem(localRootID ids.ID, start, end []byte, priority priority) *workItem {
return &workItem{
localRootID: localRootID,
@@ -190,10 +189,6 @@ func (m *Manager) sync(ctx context.Context) {
default:
m.processingWorkItems++
work := m.unprocessedWork.GetWork()
// TODO danlaine: We won't release [m.workLock] until
// we've started a goroutine for each available work item.
// We can't apply proofs we receive until we release [m.workLock].
// Is this OK? Is it possible we end up with too many goroutines?
Author commented on lines -193 to -196:

This was fixed a while back by the above case:

case m.processingWorkItems >= m.config.SimultaneousWorkLimit:
			// We're already processing the maximum number of work items.
			// Wait until one of them finishes.
			m.unprocessedWorkCond.Wait()
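
Put together, the dispatch loop is roughly shaped like this. This is a simplified sketch rather than the verbatim manager.go code: it omits the other cases (for example context cancellation and an empty work queue) and assumes m.unprocessedWorkCond is a condition variable tied to the m.workLock mentioned in the removed comment.

// Simplified sketch of the dispatch loop, not the verbatim code.
for {
	m.workLock.Lock()
	switch {
	case m.processingWorkItems >= m.config.SimultaneousWorkLimit:
		// Wait releases m.workLock while blocked and re-acquires it
		// when a finishing worker signals, so at most
		// SimultaneousWorkLimit doWork goroutines run at a time.
		m.unprocessedWorkCond.Wait()
	default:
		m.processingWorkItems++
		work := m.unprocessedWork.GetWork()
		go m.doWork(ctx, work)
	}
	m.workLock.Unlock()
}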

go m.doWork(ctx, work)
}
}
x/sync/network_client.go (10 changes: 0 additions & 10 deletions)
@@ -306,16 +306,6 @@ func (c *networkClient) Disconnected(_ context.Context, nodeID ids.NodeID) error
return nil
}

// Shutdown disconnects all peers
func (c *networkClient) Shutdown() {
c.lock.Lock()
defer c.lock.Unlock()

// reset peers
// TODO danlaine: should we call [Disconnected] on each peer?
c.peers = newPeerTracker(c.log)
}

func (c *networkClient) TrackBandwidth(nodeID ids.NodeID, bandwidth float64) {
c.lock.Lock()
defer c.lock.Unlock()