Generalize expiry based de-duplication, dsmr #1810
base: main
Conversation
All sounds like great ideas, thanks!
On Mon, Nov 25, 2024, 1:27 PM, aaronbuchwald commented on this pull request:
------------------------------
In x/dsmr/node.go
<#1810 (comment)>:
> @@ -172,19 +193,28 @@ func (n *Node[T]) BuildChunk(
return chunk, n.storage.AddLocalChunkWithCert(chunk, &chunkCert)
}
-func (n *Node[T]) BuildBlock(parent Block, timestamp int64) (Block, error) {
- if timestamp <= parent.Timestamp {
+// BuildBlock(ctx context.Context, parentView state.View, parent *ExecutionBlock) (*ExecutionBlock, *ExecutedBlock, merkledb.View, error)
can we remove this comment?
------------------------------
In x/dsmr/node_test.go
<#1810 (comment)>:
> +func newTestingLog() logging.Logger {
+ return logging.NewLogger("dsmr")
+}
we can replace this with logging.NoLog{}
------------------------------
In x/dsmr/node_test.go
<#1810 (comment)>:
> +func newTestingTracer(t *testing.T) trace.Tracer {
+ tracer, err := trace.New(trace.Config{})
+ require.NoError(t, err)
+ return tracer
+}
we can replace this with trace.Noop
------------------------------
In x/dsmr/node_test.go
<#1810 (comment)>:
> +type testingChainIndex struct{}
+
+func (*testingChainIndex) GetExecutionBlock(context.Context, ids.ID) (validitywindow.ExecutionBlock[*ChunkCertificate], error) {
+ return nil, nil
+}
+
+func newChainIndexer() ChainIndex {
+ return &testingChainIndex{}
+}
We should add unit tests for the newly added functionality here. What do you think of the following cases?
- skip duplicate chunks in build block
- return an error if all available chunks are duplicates in build block
- duplicate chunk within a block
- duplicate chunk between the block we are executing and the last accepted block
- duplicate chunk inside of the accepted validity window
- propagate an error if we fail to fetch a block during de-duplication
// make sure we have no repeats within the block itself.
blkTxsIDs := make(map[ids.ID]bool, len(blk.Txs()))
for _, tx := range blk.Txs() {
	id := tx.GetID()
	if _, has := blkTxsIDs[id]; has {
		return fmt.Errorf("%w: duplicate in block", ErrDuplicateContainer)
	}
	blkTxsIDs[id] = true
}
We do this check twice now in chain since we previously handled this within NewExecutionBlock. We should only apply the check once; which do you think is the better place for it? The other addition within NewExecutionBlock is pre-populating the signature job. It would probably be better to remove that from the execution block and handle it in AsyncVerify. One good reason to do this is that handling this within ParseBlock can be a DoS vector.
I don't see why checking this twice is an issue on its own, and I think that I have a better solution here:
Checking it in NewExecutionBlock is not ideal from my perspective, as it should be a construction step (i.e. no errors).
How do you feel about the following:
In ExecutionBlock, we would add the following function:
func (b *ExecutionBlock) ValidateDuplicateTransactions() error {
	if len(b.Txs) != b.txsSet.Len() {
		return ErrDuplicateTx
	}
	return nil
}
Then in validitywindow.go's VerifyExpiryReplayProtection, we can just call this method:
...
if err := blk.ValidateDuplicateTransactions(); err != nil {
	return err
}
...
and remove the check from NewExecutionBlock.
x/dsmr/node.go
Outdated
if block.Tmstmp <= parentBlock.Tmstmp && parentBlock.Hght > 0 {
	return ErrTimestampNotMonotonicallyIncreasing
}
Can we remove this check from this PR?
We should enforce timestamp >= parent timestamp (we should allow them to be equal in case a block builder pushes the timestamp ahead of wall clock time for some nodes).
We should not allow a malicious node that builds a block with a timestamp we consider valid (less than 1s ahead of our wall clock time, but still ahead of it) to put us in a position where, when we build a block on top of it, we fail because the parent's timestamp is ahead of our local time.
We should update the check applied in BuildBlock, imo.
We should also never execute the genesis block, so the check for parentBlock.Hght > 0 should be removed.
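A minimal sketch of the suggested BuildBlock-side check, reusing the field and error names quoted in this thread (the helper name is hypothetical):
// allow equal timestamps so a builder that pushed the timestamp slightly
// ahead of our wall clock does not wedge building on top of its block;
// no special case for genesis, since we never execute the genesis block
func validateBuildTimestamp(parent Block, timestamp int64) error {
	if timestamp < parent.Timestamp {
		return ErrTimestampNotMonotonicallyIncreasing
	}
	return nil
}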
x/dsmr/node_test.go
Outdated
if blk, has := ti.blocks[id]; has {
	return blk, nil
}
return nil, nil
Fetching a non-existent block should return an error; can we return database.ErrNotFound if the block is not available?
I find the UX of using the sentinel database.ErrNotFound to be very awkward. This is due to the existing interface definition and not the changes in this PR, though... if we had something like GetExecutionBlock() (ExecutionBlock[T], bool, error), we wouldn't have to introspect the error and could just return an error if error is non-nil and check the bool if the block just wasn't there.
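A quick sketch of the two call-site shapes being compared here; the found-bool signature is the hypothetical alternative, not the current interface:
// current shape: introspect the sentinel error
blk, err := chainIndex.GetExecutionBlock(ctx, blkID)
if errors.Is(err, database.ErrNotFound) {
	// the block simply is not indexed
} else if err != nil {
	return err
}

// hypothetical alternative: no error introspection needed
blk, found, err := chainIndex.GetExecutionBlock(ctx, blkID)
if err != nil {
	return err
}
if !found {
	// the block simply is not indexed
}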
@@ -69,6 +70,15 @@ func (v *TimeValidityWindow[Container]) VerifyExpiryReplayProtection(
	if dup.Len() > 0 {
		return fmt.Errorf("%w: duplicate in ancestry", ErrDuplicateContainer)
	}
	// make sure we have no repeats within the block itself.
	blkTxsIDs := make(map[ids.ID]bool, len(blk.Txs()))
Can we use set.Set here?
x/dsmr/certificate.go
Outdated
func (c ChunkCertificate) GetExpiry() int64 { return c.Expiry }

func (c *ChunkCertificate) GetSlot() int64 { return c.Expiry }
I think these are duplicated - GetSlot is meant to be Expiry.
x/dsmr/node.go
Outdated
	log logging.Logger,
	tracer trace.Tracer,
	chainIndex ChainIndex,
	validityWindowDuration int64,
Should we use time.Duration here?
x/dsmr/node.go
Outdated
oldestAllowed := timestamp - n.validityWindowDuration
if oldestAllowed < 0 {
	oldestAllowed = 0
}
nit: max(oldestAllowed, 0)
x/dsmr/node_test.go
Outdated
	blocks map[ids.ID]validitywindow.ExecutionBlock[*ChunkCertificate]
}

func (ti *testingChainIndex) GetExecutionBlock(_ context.Context, id ids.ID) (validitywindow.ExecutionBlock[*ChunkCertificate], error) {
Nits:
- receiver name should just be t
- id -> blkID or blockID
x/dsmr/block.go
Outdated
func (b Block) GetID() ids.ID {
	return b.blkID
}

func (b Block) Parent() ids.ID {
I don't care which we pick, but we should be consistent on the naming of either Foo() or GetFoo().
I'm going to dodge this one by saying that this method is no longer needed.
x/dsmr/block.go
Outdated
func (b Block) Txs() []*ChunkCertificate {
	return b.ChunkCerts
}
This seems weird because these are not returning txs
Yes, I agree. Let's discuss this as a group?
Can we use the term containers?
x/dsmr/node.go
Outdated
	log logging.Logger,
	tracer trace.Tracer,
nit: make these (logger + tracer) the first args in this fn
x/dsmr/node.go
Outdated
type (
	ChainIndex         = validitywindow.ChainIndex[*ChunkCertificate]
	timeValidityWindow = *validitywindow.TimeValidityWindow[*ChunkCertificate]
)
Seems like we're doing this to avoid the caller depending on an internal package. I'm wondering if it even makes sense for validitywindow to be in internal at all... should this just be merged into dsmr?
@aaronbuchwald asked to remove this section completely.
x/dsmr/block.go
Outdated
"github.com/ava-labs/hypersdk/utils" | ||
) | ||
|
||
const InitialChunkSize = 250 * 1024 | ||
|
||
type Tx interface { | ||
GetID() ids.ID | ||
GetExpiry() int64 | ||
emap.Item |
This exposes the internal/emap package to the caller. I think the previous wrapping pattern, where we wrapped this interface with a type that implemented the emap interface, actually looked cleaner.
Once merged, we won't need the wrapping interface anymore... unless I'm missing something?
I think @joshua-kim's point is to have the internal type implemented as makes sense in this package and then wrap it with another type that converts between that structure and the required interface when we need to use it in the validity window or expiry map, where we need a specific interface. Correct me if I'm wrong @joshua-kim.
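A minimal sketch of that wrapping pattern, assuming the emapChunkCertificate name seen elsewhere in this PR and that the emap item interface needs only GetID and GetExpiry:
// the dsmr-facing type stays whatever makes sense in this package;
// the wrapper adapts it to what the expiry map / validity window expects
type emapChunkCertificate struct {
	ChunkCertificate
}

func (e emapChunkCertificate) GetID() ids.ID    { return e.ChunkID }
func (e emapChunkCertificate) GetExpiry() int64 { return e.Expiry }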
done
x/dsmr/block.go
Outdated
	return e.innerBlock.ParentID
}

func (e ExecutionBlock) Txs() []*emapChunkCertificate {
Can we rename this to a neutral name like containers instead of txs?
blkTxsIDs := set.NewSet[ids.ID](len(blk.Containers()))
for _, tx := range blk.Containers() {
	id := tx.GetID()
	if blkTxsIDs.Contains(id) {
		return fmt.Errorf("%w: duplicate in block", ErrDuplicateContainer)
	}
	blkTxsIDs.Add(id)
}
Can we rename the local variables here?
@@ -110,7 +121,7 @@ func (v *TimeValidityWindow[Container]) isRepeat(
	if marker.Contains(i) {
		continue
	}
-	if ancestorBlk.ContainsTx(tx.GetID()) {
+	if ancestorBlk.Contains(tx.GetID()) {
same comment on tx.GetID() here
x/dsmr/block.go
Outdated
func (b Block) init() {
	for _, c := range b.ChunkCerts {
		b.certSet.Add(c.ChunkID)
	}
}
Can we move the cert set to the execution block type?
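A minimal sketch of that move, assuming the NewExecutionBlock constructor and the innerBlock field referenced elsewhere in this thread, with certSet relocated onto the execution block (hypothetical field placement):
// populate the cert set while wrapping the block for the validity window,
// instead of in Block.init()
func NewExecutionBlock(innerBlock Block) *ExecutionBlock {
	certSet := set.NewSet[ids.ID](len(innerBlock.ChunkCerts))
	for _, c := range innerBlock.ChunkCerts {
		certSet.Add(c.ChunkID)
	}
	return &ExecutionBlock{
		innerBlock: innerBlock,
		certSet:    certSet,
	}
}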
x/dsmr/node_test.go
Outdated
	_, err := node.BuildBlock(context.Background(), parentBlk, testCase.timestamp)
	r.ErrorIs(err, testCase.buildWantErr)

	r.ErrorIs(node.Verify(context.Background(), parentBlk, newBlk), testCase.verifyWantErr)
})
}
This test case looks like it should pass because calling Accept should mark the block as accepted and make sure it's part of the index.
x/dsmr/node_test.go
Outdated
}
r.NoError(node.Accept(context.Background(), blk))

r.NoError(node.storage.AddLocalChunkWithCert(chunk, &chunkCert))
Agree with @joshua-kim here that it does not make sense to test this way since a user of this package should never call AddLocalChunkWithCert directly.
The way we'd hit this case is by attempting to build a block with a chunk that is in our pool, but has not been included in an accepted block yet (and therefore has not been removed from that pool).
x/dsmr/node_test.go
Outdated
nodes, indexer := newNodes(t, 1)
node = nodes[0]
I'm not sure the validity window should be separate from the nodes, since each node maintains its own index
it's not the validity window itself, but rather the validity window duration. The index remains the same.
The indexer is usually part of the node itself so sharing it across each node in tests seems like an anti-pattern
I could create an indexer per node if you think it would be better... but it would be a bit of an over-engineered solution given that it's only being used to retrieve previously accepted blocks. Please let me know if you think it's the right move and I'll make it so.
x/dsmr/node_test.go
Outdated
	context.Background(),
	[]tx{
		{
			ID: ids.Empty, // ids.GenerateTestID(),
Is this meant to be using ids.GenerateTestID()?
No, but good catch. There needed to be a way to reproduce consistent duplicate chunks. I've updated the code to use ids.Empty.Prefix(uint64(chunkExpiry)).
x/dsmr/node_test.go
Outdated
	parentBlk = blk
}

// create the block so that we can test it against the execute directly.
Suggested change:
- // create the block so that we can test it against the execute directly.
+ // create the block so that we can test it against Execute directly.
x/dsmr/node_test.go
Outdated
_, chunkCert, err := node.BuildChunk(
	context.Background(),
	[]tx{
		{
			ID:     ids.Empty,
			Expiry: int64(chunkExpiry),
		},
	},
	int64(chunkExpiry),
	codec.Address{},
)
Should we generate actual IDs instead of ids.Empty since these will technically be invalid?
x/dsmr/node_test.go
Outdated
	_, err := node.BuildBlock(context.Background(), parentBlk, testCase.timestamp)
	r.ErrorIs(err, testCase.buildWantErr)

	r.ErrorIs(node.Verify(context.Background(), parentBlk, newBlk), testCase.verifyWantErr)
})
If BuildBlock succeeds, it seems that we are expecting it to build the same block that we have produced locally. Should we be checking that they're the same here?
If BuildBlock doesn't return any of the expected errors, then it returns a valid block. However, I'm not sure how easy it would be to compare it to the testing block we've created. I'll add ParentID, Height and Timestamp.
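A small sketch of that field-level comparison, using names from the test snippets quoted in this thread (exact placement in the test is illustrative):
builtBlk, err := node.BuildBlock(context.Background(), parentBlk, testCase.timestamp)
r.ErrorIs(err, testCase.buildWantErr)
if err == nil {
	// compare the deterministic fields of the built block against the expected one
	r.Equal(newBlk.ParentID, builtBlk.ParentID)
	r.Equal(newBlk.Height, builtBlk.Height)
	r.Equal(newBlk.Timestamp, builtBlk.Timestamp)
}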
@@ -169,6 +197,38 @@ func TestNode_GetChunk_AvailableChunk(t *testing.T) {
	<-done
}

func TestIndexerMissingBlock(t *testing.T) {
Isn't this a test over the duplicate chunk case?
No; after
r.NoError(node.Verify(context.Background(), node.LastAccepted, blk))
r.NoError(node.Accept(context.Background(), blk))
I intentionally did not add the newly accepted block to the indexer.
As a result, when
r.ErrorIs(node.Verify(context.Background(), node.LastAccepted, blkNext), database.ErrNotFound)
is called, it wouldn't have that block in the indexer, and would generate the database.ErrNotFound error.
As far as I can tell, this is the only test that we have that covers this case.
(Obviously, it shouldn't happen on any real VM that indexes all its accepted blocks.)
blkNext, err := node.BuildBlock(context.Background(), node.LastAccepted, 4)
r.NoError(err)

r.ErrorIs(node.Verify(context.Background(), node.LastAccepted, blkNext), database.ErrNotFound)
Why are we getting database.ErrNotFound here? It seems to me like this would fail verification because this block references a chunk that contains duplicated txs.
It wouldn't be the same transactions since ids.GenerateTestID() is random.
x/dsmr/block.go
Outdated
@@ -105,6 +106,47 @@ func ParseChunk[T Tx](chunkBytes []byte) (Chunk[T], error) {
	return c, c.init()
}

// ExecutionBlock bridges the gap between dsmr's block implementation and the validity window's execution block interface.
type ExecutionBlock struct {
- I think we should unexport this type since it's not needed downstream of dsmr, since we're only using this type within the validity window.
- Can we call this something like validityWindowBlock? We're going to be introducing another ExecutedBlock type which will be too similar to this type name.
Sure, wasn't aware of that, and would gladly rename the data structure.
x/dsmr/node_test.go
Outdated
	chainID = ids.Empty
)

type testValidityWindowChainIndex struct {
nit: move all testing types to bottom of file
x/dsmr/node_test.go
Outdated
	Height:    1,
	Timestamp: 1,
	blkID:     ids.GenerateTestID(),
}
This block isn't valid; I think we need to call BuildChunk -> BuildBlock -> Verify before calling Accept. As an unrelated note, I don't think we need this block at all for this test.
x/dsmr/node_test.go
Outdated
}

-	return result
+	return result, indexer
Do we want this indexer to be shared across all node instances?
I would prefer not since the "indexer" is unique per node in practice
x/dsmr/node_test.go
Outdated
}
r.NoError(node.Accept(context.Background(), blk))

_, err = node.BuildBlock(context.Background(), blk, 3)
nit: use node.LastAccepted instead of referencing blk for the parent to build off of
// make sure that it's not the case with any other chunk.
_, _, err = node.BuildChunk(
	context.Background(),
	[]tx{
		{
			ID:     ids.GenerateTestID(),
			Expiry: 4,
		},
	},
	4,
	codec.Address{},
)
r.NoError(err)
r.NoError(node.Accept(context.Background(), blk))
I feel like instead of having a single test case that checks:
- Build a duplicate chunk, have it fail, and build a new chunk and see if it passes
It would just make more sense to have this split into separate test cases:
- Build a Chunk, it should pass
- Build a duplicate chunk, it should fail
- Build a chunk, build a duplicate chunk, should pass with a single chunk in the block
x/dsmr/node_test.go
Outdated
blk := Block{
	ParentID:  parentBlk.GetID(),
	Height:    uint64(int(node.LastAccepted.Height) + 1),
	Timestamp: int64(int(node.LastAccepted.Timestamp) + 1),
	blkID:     ids.GenerateTestID(),
}

for _, chunkExpiry := range chunkList {
	_, chunkCert, err := node.BuildChunk(
		context.Background(),
		[]tx{
			{
				ID:     ids.Empty.Prefix(uint64(chunkExpiry)),
				Expiry: int64(chunkExpiry),
			},
		},
		int64(chunkExpiry),
		codec.Address{},
	)
	r.NoError(err)
	blk.ChunkCerts = append(blk.ChunkCerts, &chunkCert)
}
Can we use BuildBlock after calling BuildChunk instead of initializing the block through the struct?
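A short sketch of the suggested flow, using the call signatures already quoted in this thread (expiry and timestamp values are illustrative):
// build a chunk through the node, then let BuildBlock assemble the block
// instead of constructing Block{} by hand
_, _, err := node.BuildChunk(
	context.Background(),
	[]tx{{ID: ids.GenerateTestID(), Expiry: 4}},
	4,
	codec.Address{},
)
r.NoError(err)

blk, err := node.BuildBlock(context.Background(), node.LastAccepted, node.LastAccepted.Timestamp+1)
r.NoError(err)
r.NoError(node.Verify(context.Background(), node.LastAccepted, blk))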
x/dsmr/node_test.go
Outdated
r.NoError(node.Verify(context.Background(), parentBlk, blk))

r.NoError(node.Accept(context.Background(), blk))
indexer.set(blk.GetID(), NewExecutionBlock(blk))
Should dsmr be updating the chain index? Or are we expecting that someone else is responsible for updating it (current behavior)?
The snow refactor separates the chain index from Accept, so it's reasonable imo to do the same thing here.
x/dsmr/node_test.go
Outdated
var node *Node[tx]
nodes, indexer := newNodes(t, 1)
node = nodes[0]
node.validityWindowDuration = validationWindow
Should this be an argument to create new nodes instead of setting it after constructing the nodes?
What?
Integrate the generic de-duplication logic into dsmr.