1 change: 1 addition & 0 deletions README.md
@@ -11,6 +11,7 @@ Ev-node is the basis of the Evolve Stack. For more in-depth information about Ev
## Using Evolve

Evolve supports multiple sync modes:

- **Hybrid sync**: Sync from both DA layer and P2P network (default when peers are configured)
- **DA-only sync**: Sync exclusively from DA layer by leaving P2P peers empty (see [Configuration Guide](docs/learn/config.md#da-only-sync-mode))
- **P2P-priority sync**: Prioritize P2P with DA as fallback
20 changes: 19 additions & 1 deletion block/CLAUDE.md
@@ -9,6 +9,7 @@ The block package is the core of ev-node's block management system. It handles b
## Core Components

### Manager (`manager.go`)

- **Purpose**: Central orchestrator for all block operations
- **Key Responsibilities**:
- Transaction aggregation into blocks
@@ -18,6 +19,7 @@ The block package is the core of ev-node's block management system. It handles b
- P2P block/header gossiping

### Aggregation (`aggregation.go`, `lazy_aggregation_test.go`)

- **Purpose**: Collects transactions from mempool and creates blocks
- **Modes**:
- **Normal Mode**: Produces blocks at regular intervals (BlockTime)
@@ -28,6 +30,7 @@ The block package is the core of ev-node's block management system. It handles b
- `normalAggregationLoop`: Regular block production
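
The split between the two loops can be pictured with a minimal, self-contained sketch like the one below; the struct fields, channel-based transaction notification, and timer handling are illustrative assumptions, not the actual `aggregation.go` implementation.

```go
package aggregation

import (
	"context"
	"time"
)

// Hypothetical, trimmed-down stand-ins; the real Manager and loops live in
// block/aggregation.go and have different fields and signatures.
type manager struct {
	blockTime     time.Duration // normal mode interval
	lazyBlockTime time.Duration // upper bound between blocks in lazy mode
	txNotify      chan struct{} // signalled when new transactions arrive
	produceBlock  func(context.Context) error
}

// normalAggregationLoop produces a block on every BlockTime tick.
func (m *manager) normalAggregationLoop(ctx context.Context) {
	ticker := time.NewTicker(m.blockTime)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			_ = m.produceBlock(ctx)
		}
	}
}

// lazyAggregationLoop produces a block when transactions arrive, but never
// lets more than lazyBlockTime pass without a block.
func (m *manager) lazyAggregationLoop(ctx context.Context) {
	timer := time.NewTimer(m.lazyBlockTime)
	defer timer.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-m.txNotify:
			_ = m.produceBlock(ctx)
			timer.Reset(m.lazyBlockTime)
		case <-timer.C:
			_ = m.produceBlock(ctx)
			timer.Reset(m.lazyBlockTime)
		}
	}
}
```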

### Synchronization (`sync.go`, `sync_test.go`)

- **Purpose**: Keeps the node synchronized with the network
- **Key Functions**:
- `SyncLoop`: Main synchronization loop
@@ -36,6 +39,7 @@ The block package is the core of ev-node's block management system. It handles b
- Handles header and data caching

### Data Availability (`da_includer.go`, `submitter.go`, `retriever.go`)

- **DA Includer**: Manages DA blob inclusion proofs and validation
- **Submitter**: Handles block submission to the DA layer with retry logic
- **Retriever**: Fetches blocks from the DA layer
@@ -45,6 +49,7 @@ The block package is the core of ev-node's block management system. It handles b
- Batch submission optimization
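
A hedged sketch of the three roles as narrow interfaces; the real definitions in `core/da` and the block package use different names and signatures.

```go
package da

import "context"

// Illustrative-only interfaces mirroring the three roles described above.
type (
	Blob = []byte // raw blob payload submitted to DA
	ID   = []byte // identifier of a submitted blob on the DA layer
)

// Submitter pushes blobs to the DA layer, applying retry/backoff on failure.
type Submitter interface {
	Submit(ctx context.Context, blobs []Blob, gasPrice float64) ([]ID, error)
}

// Retriever fetches all blobs that were included at a given DA height.
type Retriever interface {
	RetrieveFromHeight(ctx context.Context, daHeight uint64) ([]Blob, error)
}

// Includer answers whether a given block height has been included in DA.
type Includer interface {
	IsIncluded(ctx context.Context, blockHeight uint64) (bool, error)
}
```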

### Storage (`store.go`, `store_test.go`)

- **Purpose**: Persistent storage for blocks and state
- **Key Features**:
- Block height tracking
@@ -53,6 +58,7 @@ The block package is the core of ev-node's block management system. It handles b
- Migration support for namespace changes
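
The height-to-DA-height metadata used elsewhere in this PR is a good example of what the store holds: an 8-byte little-endian DA height per block height, keyed per header (`/h`) and data (`/d`). The helper below is a sketch; the key prefix value and the narrowed `Store` interface are assumptions.

```go
package storemeta

import (
	"context"
	"encoding/binary"
	"fmt"
)

// Assumed prefix; the real constant is HeightToDAHeightKey in the store package.
const heightToDAHeightKey = "height-to-da-height"

// Store is narrowed here to the two metadata calls used in this PR.
type Store interface {
	SetMetadata(ctx context.Context, key string, value []byte) error
	GetMetadata(ctx context.Context, key string) ([]byte, error)
}

// headerDAHeight reads back the DA height recorded for a block's header,
// stored as an 8-byte little-endian value under "<prefix>/<height>/h".
func headerDAHeight(ctx context.Context, s Store, blockHeight uint64) (uint64, bool) {
	raw, err := s.GetMetadata(ctx, fmt.Sprintf("%s/%d/h", heightToDAHeightKey, blockHeight))
	if err != nil || len(raw) != 8 {
		return 0, false // missing or malformed; callers fall back to the persisted DA height
	}
	return binary.LittleEndian.Uint64(raw), true
}
```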

### Pending Blocks (`pending_base.go`, `pending_headers.go`, `pending_data.go`)

- **Purpose**: Manages blocks awaiting DA inclusion or validation
- **Components**:
- **PendingBase**: Base structure for pending blocks
@@ -64,6 +70,7 @@ The block package is the core of ev-node's block management system. It handles b
- Memory-efficient caching

### Metrics (`metrics.go`, `metrics_helpers.go`)

- **Purpose**: Performance monitoring and observability
- **Key Metrics**:
- Block production times
@@ -74,20 +81,23 @@ The block package is the core of ev-node's block management system. It handles b
## Key Workflows

### Block Production Flow

1. Transactions collected from mempool
2. Block created with proper header and data
3. Block executed through executor
4. Block submitted to DA layer
5. Block gossiped to P2P network
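
The five steps can be read as a single pipeline. The sketch below is illustrative only, with placeholder hooks rather than the manager's real methods.

```go
package blockflow

import "context"

// Placeholder types and hooks; everything here is illustrative, not the real API.
type Header struct{}
type Data struct{}

type producer struct {
	collectTxs func() [][]byte
	buildBlock func(txs [][]byte) (Header, Data)
	execute    func(ctx context.Context, h Header, d Data) error
	submitToDA func(ctx context.Context, h Header, d Data) error
	gossip     func(ctx context.Context, h Header, d Data) error
}

// produceAndPublish walks the five steps listed above, in order.
func (p producer) produceAndPublish(ctx context.Context) error {
	txs := p.collectTxs()     // 1. transactions collected from mempool
	h, d := p.buildBlock(txs) // 2. block created with header and data
	if err := p.execute(ctx, h, d); err != nil { // 3. block executed through executor
		return err
	}
	if err := p.submitToDA(ctx, h, d); err != nil { // 4. block submitted to DA layer
		return err
	}
	return p.gossip(ctx, h, d) // 5. block gossiped to P2P network
}
```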

### Synchronization Flow

1. Headers received from P2P network
2. Headers validated and cached
3. Block data retrieved from DA layer
4. Blocks applied to state
5. Sync progress updated
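
In code terms, a block is applied only once both its header and its data are available, roughly as in this sketch; the real logic lives in `trySyncNextBlock` in `sync.go`, and the cache shapes here are assumptions.

```go
package syncflow

import "context"

// Illustrative shapes only; the real SyncLoop and trySyncNextBlock differ.
type Header struct{}
type Data struct{}

type syncer struct {
	headerCache map[uint64]Header // 1-2. headers received from P2P, validated and cached
	dataCache   map[uint64]Data   // 3. block data retrieved from the DA layer
	apply       func(ctx context.Context, h Header, d Data) error
	setHeight   func(height uint64)
}

// tryNext applies the next block once both its header and data are cached.
func (s *syncer) tryNext(ctx context.Context, next uint64) error {
	h, okH := s.headerCache[next]
	d, okD := s.dataCache[next]
	if !okH || !okD {
		return nil // wait until both halves have arrived
	}
	if err := s.apply(ctx, h, d); err != nil { // 4. block applied to state
		return err
	}
	s.setHeight(next) // 5. sync progress updated
	delete(s.headerCache, next)
	delete(s.dataCache, next)
	return nil
}
```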

### DA Submission Flow

1. Block prepared for submission
2. Blob created with block data
3. Submission attempted with retries
@@ -97,47 +107,55 @@ The block package is the core of ev-node's block management system. It handles b
## Configuration

### Time Parameters

- `BlockTime`: Target time between blocks (default: 1s)
- `DABlockTime`: DA layer block time (default: 6s)
- `LazyBlockTime`: Max time between blocks in lazy mode (default: 60s)
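
As a sketch, the three knobs and their documented defaults might be grouped like this; the actual config struct and field names in ev-node may differ.

```go
package blockcfg

import "time"

// Timing mirrors the documented defaults; real field names may differ.
type Timing struct {
	BlockTime     time.Duration // target time between blocks
	DABlockTime   time.Duration // DA layer block time
	LazyBlockTime time.Duration // max time between blocks in lazy mode
}

// Defaults returns the values listed above.
func Defaults() Timing {
	return Timing{
		BlockTime:     1 * time.Second,
		DABlockTime:   6 * time.Second,
		LazyBlockTime: 60 * time.Second,
	}
}
```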

### Limits

- `maxSubmitAttempts`: Max DA submission retries (30)
- `defaultMempoolTTL`: Blocks until tx dropped (25)
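
The two limits as they might appear in code; the names mirror this document, but the real declarations in the block package may differ.

```go
package blockcfg

const (
	maxSubmitAttempts = 30 // give up on a DA submission after this many retries
	defaultMempoolTTL = 25 // drop a transaction after this many blocks without inclusion
)
```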

## Testing Strategy

### Unit Tests

- Test individual components in isolation
- Mock external dependencies (DA, executor, sequencer)
- Focus on edge cases and error conditions
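
The tests in this package use testify-style mocks (see the `mockStore.On(...)` expectations added later in this PR). Below is a minimal, hypothetical example of that pattern, with a hand-written mock and an assumed metadata key rather than the repo's generated mocks and real constants.

```go
package block_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

// MockStore is a hand-rolled stand-in; the repo's tests use generated mocks.
type MockStore struct{ mock.Mock }

func (m *MockStore) SetMetadata(ctx context.Context, key string, value []byte) error {
	args := m.Called(ctx, key, value)
	return args.Error(0)
}

func TestStoresDAInclusionMetadataOnce(t *testing.T) {
	s := new(MockStore)
	// Expect exactly one metadata write for the (assumed) header key of block 5.
	s.On("SetMetadata", mock.Anything, "height_to_da_height/5/h", mock.Anything).Return(nil).Once()

	err := s.SetMetadata(context.Background(), "height_to_da_height/5/h", []byte{1, 0, 0, 0, 0, 0, 0, 0})
	require.NoError(t, err)
	s.AssertExpectations(t)
}
```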

### Integration Tests

- Test component interactions
- Verify block flow from creation to storage
- Test synchronization scenarios

### Performance Tests (`da_speed_test.go`)

- Measure DA submission performance
- Test batch processing efficiency
- Validate metrics accuracy

## Common Development Tasks

### Adding a New DA Feature

1. Update DA interfaces in `core/da`
2. Modify `da_includer.go` for inclusion logic
3. Update `submitter.go` for submission flow
4. Add retrieval logic in `retriever.go`
5. Update tests and metrics

### Modifying Block Production

1. Update aggregation logic in `aggregation.go`
2. Adjust timing in Manager configuration
3. Update metrics collection
4. Test both normal and lazy modes

### Implementing New Sync Strategy

1. Modify `SyncLoop` in `sync.go`
2. Update pending block handling
3. Adjust cache strategies
@@ -182,4 +200,4 @@ The block package is the core of ev-node's block management system. It handles b
- Log with structured fields
- Return errors with context
- Use metrics for observability
- Test error conditions thoroughly
- Test error conditions thoroughly
44 changes: 43 additions & 1 deletion block/manager.go
@@ -317,7 +317,28 @@ func NewManager(
return nil, err
}

if s.DAHeight < config.DA.StartHeight {
// Determine the DA height to start from based on the last applied block's DA inclusion
if s.LastBlockHeight > 0 {
// Try to find where the last applied block was included in DA
headerKey := fmt.Sprintf("%s/%d/h", storepkg.HeightToDAHeightKey, s.LastBlockHeight)
if daHeightBytes, err := store.GetMetadata(ctx, headerKey); err == nil && len(daHeightBytes) == 8 {
lastBlockDAHeight := binary.LittleEndian.Uint64(daHeightBytes)
// Start scanning from the next DA height after the last included block
s.DAHeight = lastBlockDAHeight + 1
logger.Info().
Uint64("lastBlockHeight", s.LastBlockHeight).
Uint64("lastBlockDAHeight", lastBlockDAHeight).
Uint64("startingDAHeight", s.DAHeight).
Msg("resuming DA scan from last applied block's DA inclusion height")
} else {
// Fallback: if we can't find DA inclusion info, use the persisted DA height
logger.Info().
Uint64("lastBlockHeight", s.LastBlockHeight).
Uint64("daHeight", s.DAHeight).
Msg("no DA inclusion metadata found for last block, using persisted DA height")
}

Review comment (Member): Is the DA height actually persisted? If it was, then why would it always start from 0?

Reply (Contributor Author): I think there is another bug that is hidden here. I've been going through the code slowly; there is a startup discrepancy.
Review comment (Contributor) on lines +324 to +339 (severity: medium):

The logic to determine the starting DA height could be more explicit about handling different failure cases. Currently, a database error and a 'not found' error are treated the same way, which can hide underlying storage problems. Refactoring this block to check the error from store.GetMetadata first, and then the length of the returned bytes, would make the code clearer and improve error logging:
		daHeightBytes, err := store.GetMetadata(ctx, headerKey)
		if err == nil && len(daHeightBytes) == 8 {
			lastBlockDAHeight := binary.LittleEndian.Uint64(daHeightBytes)
			// Start scanning from the next DA height after the last included block
			s.DAHeight = lastBlockDAHeight + 1
			logger.Info().
				Uint64("lastBlockHeight", s.LastBlockHeight).
				Uint64("lastBlockDAHeight", lastBlockDAHeight).
				Uint64("startingDAHeight", s.DAHeight).
				Msg("resuming DA scan from last applied block's DA inclusion height")
		} else {
			if err != nil {
				if !errors.Is(err, ds.ErrNotFound) {
					logger.Warn().Err(err).Msg("Error retrieving DA inclusion metadata for last block")
				}
			} else { // err == nil, so len is wrong
				logger.Warn().Int("len", len(daHeightBytes)).Msg("malformed DA inclusion metadata for last block")
			}
			// Fallback: if we can't find DA inclusion info, use the persisted DA height
			logger.Info().
				Uint64("lastBlockHeight", s.LastBlockHeight).
				Uint64("daHeight", s.DAHeight).
				Msg("no DA inclusion metadata found for last block, using persisted DA height")
		}

} else if s.DAHeight < config.DA.StartHeight {
// For fresh chains, use the configured start height
s.DAHeight = config.DA.StartHeight
}

@@ -519,6 +540,27 @@ func (m *Manager) IsDAIncluded(ctx context.Context, height uint64) (bool, error)
return isIncluded, nil
}

// storeDAInclusionMetadata stores the DA height where a block was included.
// This is used by both aggregators (via SetSequencerHeightToDAHeight) and
// non-aggregator nodes during sync to track where blocks came from in DA.
func (m *Manager) storeDAInclusionMetadata(ctx context.Context, blockHeight uint64, daHeight uint64) error {
// Store header DA height
headerHeightBytes := make([]byte, 8)
binary.LittleEndian.PutUint64(headerHeightBytes, daHeight)
headerKey := fmt.Sprintf("%s/%d/h", storepkg.HeightToDAHeightKey, blockHeight)
if err := m.store.SetMetadata(ctx, headerKey, headerHeightBytes); err != nil {
return fmt.Errorf("failed to store header DA height: %w", err)
}

// Store data DA height (same as header for synced blocks)
dataKey := fmt.Sprintf("%s/%d/d", storepkg.HeightToDAHeightKey, blockHeight)
if err := m.store.SetMetadata(ctx, dataKey, headerHeightBytes); err != nil {
return fmt.Errorf("failed to store data DA height: %w", err)
}

return nil
}

// SetSequencerHeightToDAHeight stores the mapping from an Evolve block height to the corresponding
// DA (Data Availability) layer heights where the block's header and data were included.
// This mapping is persisted in the store metadata and is used to track which DA heights
10 changes: 10 additions & 0 deletions block/sync.go
@@ -172,6 +172,16 @@ func (m *Manager) trySyncNextBlock(ctx context.Context, daHeight uint64) error {
return err
}

// Store the DA inclusion metadata for this block
// This is crucial for non-aggregator nodes to resume from the correct DA height on restart
if err = m.storeDAInclusionMetadata(ctx, hHeight, daHeight); err != nil {
m.logger.Error().Err(err).
Uint64("blockHeight", hHeight).
Uint64("daHeight", daHeight).
Msg("failed to store DA inclusion metadata during sync")
// Don't fail the sync, just log the error
}

// Record sync metrics
m.recordSyncMetrics("block_applied")

47 changes: 46 additions & 1 deletion block/sync_test.go
@@ -134,6 +134,13 @@ func TestSyncLoop_ProcessSingleBlock_HeaderFirst(t *testing.T) {

mockStore.On("SetHeight", mock.Anything, newHeight).Return(nil).Once()

// Add expectations for DA inclusion metadata storage
// These calls happen in storeDAInclusionMetadata when syncing blocks
headerKey := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, newHeight)
dataKey := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, newHeight)
mockStore.On("SetMetadata", mock.Anything, headerKey, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKey, mock.Anything).Return(nil).Once()

ctx, loopCancel := context.WithTimeout(context.Background(), 1*time.Second)
defer loopCancel()

@@ -219,6 +226,13 @@ func TestSyncLoop_ProcessSingleBlock_DataFirst(t *testing.T) {
mockStore.On("UpdateState", mock.Anything, expectedNewState).Return(nil).Run(func(args mock.Arguments) { close(syncChan) }).Once()
mockStore.On("SetHeight", mock.Anything, newHeight).Return(nil).Once()

// Add expectations for DA inclusion metadata storage
// These calls happen in storeDAInclusionMetadata when syncing blocks
headerKey := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, newHeight)
dataKey := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, newHeight)
mockStore.On("SetMetadata", mock.Anything, headerKey, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKey, mock.Anything).Return(nil).Once()

ctx, loopCancel := context.WithTimeout(context.Background(), 1*time.Second)
defer loopCancel()

@@ -330,6 +344,12 @@ func TestSyncLoop_ProcessMultipleBlocks_Sequentially(t *testing.T) {
}).
Once()

// Add expectations for DA inclusion metadata storage for H+1
headerKeyH1 := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, heightH1)
dataKeyH1 := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, heightH1)
mockStore.On("SetMetadata", mock.Anything, headerKeyH1, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKeyH1, mock.Anything).Return(nil).Once()

// --- Mock Expectations for H+2 ---
mockExec.On("ExecuteTxs", mock.Anything, txsH2, heightH2, headerH2.Time(), expectedNewAppHashH1).
Return(expectedNewAppHashH2, uint64(100), nil).Once()
@@ -343,6 +363,12 @@ func TestSyncLoop_ProcessMultipleBlocks_Sequentially(t *testing.T) {
}).
Once()

// Add expectations for DA inclusion metadata storage for H+2
headerKeyH2 := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, heightH2)
dataKeyH2 := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, heightH2)
mockStore.On("SetMetadata", mock.Anything, headerKeyH2, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKeyH2, mock.Anything).Return(nil).Once()

ctx, loopCancel := context.WithTimeout(context.Background(), 1*time.Second)
defer loopCancel()

@@ -467,10 +493,16 @@ func TestSyncLoop_ProcessBlocks_OutOfOrderArrival(t *testing.T) {
Run(func(args mock.Arguments) {
newHeight := args.Get(1).(uint64)
*heightPtr = newHeight // Update the mocked height
t.Logf("Mock SetHeight called for H+2, updated mock height to %d", newHeight)
t.Logf("Mock SetHeight called for H+1, updated mock height to %d", newHeight)
}).
Once()

// Add expectations for DA inclusion metadata storage for H+1
headerKeyH1 := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, heightH1)
dataKeyH1 := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, heightH1)
mockStore.On("SetMetadata", mock.Anything, headerKeyH1, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKeyH1, mock.Anything).Return(nil).Once()

// --- Mock Expectations for H+2 (will be called second) ---
mockStore.On("Height", mock.Anything).Return(heightH1, nil).Maybe()
mockExec.On("Validate", mock.Anything, &headerH2.Header, dataH2).Return(nil).Maybe()
@@ -488,6 +520,12 @@ func TestSyncLoop_ProcessBlocks_OutOfOrderArrival(t *testing.T) {
Run(func(args mock.Arguments) { close(syncChanH2) }).
Once()

// Add expectations for DA inclusion metadata storage for H+2
headerKeyH2 := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, heightH2)
dataKeyH2 := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, heightH2)
mockStore.On("SetMetadata", mock.Anything, headerKeyH2, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKeyH2, mock.Anything).Return(nil).Once()

ctx, loopCancel := context.WithTimeout(context.Background(), 1*time.Second)
defer loopCancel()

@@ -597,6 +635,13 @@ func TestSyncLoop_IgnoreDuplicateEvents(t *testing.T) {
Run(func(args mock.Arguments) { close(syncChanH1) }).
Once()

// Add expectations for DA inclusion metadata storage
// These calls happen in storeDAInclusionMetadata when syncing blocks
headerKey := fmt.Sprintf("%s/%d/h", store.HeightToDAHeightKey, heightH1)
dataKey := fmt.Sprintf("%s/%d/d", store.HeightToDAHeightKey, heightH1)
mockStore.On("SetMetadata", mock.Anything, headerKey, mock.Anything).Return(nil).Once()
mockStore.On("SetMetadata", mock.Anything, dataKey, mock.Anything).Return(nil).Once()

ctx, loopCancel := context.WithTimeout(context.Background(), 1*time.Second)
defer loopCancel()

2 changes: 1 addition & 1 deletion docs/guides/gm-world.md
@@ -1,6 +1,6 @@
---
title: GM World tutorial
description: Learn how to build and deploy a CosmWasm-based "gm" (good morning) application using Evolve.
description: Learn how to build and deploy a "gm" (good morning) application using Evolve.
---

# GM world chain
5 changes: 4 additions & 1 deletion docs/learn/config.md
@@ -62,12 +62,14 @@ Evolve supports running nodes that sync exclusively from the Data Availability (
**To enable DA-only sync mode:**

1. **Leave P2P peers empty** (default behavior):

```yaml
p2p:
peers: "" # Empty or omit this field entirely
```

2. **Configure DA connection** (required):

```yaml
da:
address: "your-da-service:port"
@@ -78,6 +80,7 @@ Evolve supports running nodes that sync exclusively from the Data Availability (
3. **Optional**: You can still configure a P2P listen address for potential future connections, but without peers, no P2P networking will occur.

When running in DA-only mode, the node will:

- ✅ Sync blocks and headers from the DA layer
- ✅ Validate transactions and maintain state
- ✅ Serve RPC requests
12 changes: 6 additions & 6 deletions docs/learn/specs/block-manager.md
@@ -267,23 +267,23 @@ flowchart TD
B -->|Mempool/Not Included| E[Mempool Backoff Strategy]
B -->|Context Canceled| F[Stop Submission]
B -->|Other Error| G[Exponential Backoff]

D -->|Yes| H[Recursive Batch Splitting]
D -->|No| I[Skip Single Item - Cannot Split]

E --> J[Set Backoff = MempoolTTL * BlockTime]
E --> K[Multiply Gas Price by GasMultiplier]

G --> L[Double Backoff Time]
G --> M[Cap at MaxBackoff - BlockTime]

H --> N[Split into Two Halves]
N --> O[Submit First Half]
O --> P[Submit Second Half]
P --> Q{Both Halves Processed?}
Q -->|Yes| R[Combine Results]
Q -->|No| S[Handle Partial Success]

C --> T[Update Pending Queues]
T --> U[Post-Submit Actions]
```
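
The recursive-splitting branch of the flowchart can be sketched as follows; `errTooBig`, the `submit` callback, and the return shape are assumptions for illustration, not the actual submitter API.

```go
package dasubmit

import (
	"context"
	"errors"
)

var errTooBig = errors.New("blob over DA size limit")

type submitFn func(ctx context.Context, batch [][]byte) error

// submitWithSplit retries an oversized batch by splitting it into two halves,
// recursively; a single item that is still too big is skipped (cannot split).
func submitWithSplit(ctx context.Context, submit submitFn, batch [][]byte) (submitted, skipped int, err error) {
	if len(batch) == 0 {
		return 0, 0, nil
	}
	err = submit(ctx, batch)
	if err == nil {
		return len(batch), 0, nil
	}
	if !errors.Is(err, errTooBig) {
		return 0, 0, err // other errors go through the backoff paths, not splitting
	}
	if len(batch) == 1 {
		return 0, 1, nil // skip single item: cannot split further
	}
	mid := len(batch) / 2
	s1, k1, err := submitWithSplit(ctx, submit, batch[:mid])
	if err != nil {
		return s1, k1, err // partial success: keep the first half's outcome
	}
	s2, k2, err := submitWithSplit(ctx, submit, batch[mid:])
	return s1 + s2, k1 + k2, err
}
```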
@@ -295,7 +295,7 @@ flowchart TD
* Exponential backoff for general failures (doubles each attempt, capped at `BlockTime`)
* Mempool-specific backoff (waits `MempoolTTL * BlockTime` for stuck transactions)
* Success-based backoff reset with gas price reduction
* **Gas Price Management**:
* **Gas Price Management**:
* Increases gas price by `GasMultiplier` on mempool failures
* Decreases gas price after successful submissions (bounded by initial price)
* Supports automatic gas price detection (`-1` value)
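
A minimal sketch of these backoff and gas-price rules, assuming a multiplier greater than 1 and an arbitrary initial backoff step; the real submitter's state, constants, and automatic gas-price detection differ.

```go
package dasubmit

import "time"

// backoffState is an illustrative stand-in for the submitter's retry state.
type backoffState struct {
	BlockTime     time.Duration
	MempoolTTL    uint64
	GasMultiplier float64 // assumed > 1
	initialGas    float64
	gasPrice      float64
	current       time.Duration
}

// onFailure doubles the wait, capped at BlockTime; mempool failures instead
// wait MempoolTTL*BlockTime and bump the gas price by GasMultiplier.
func (b *backoffState) onFailure(mempool bool) time.Duration {
	if mempool {
		b.gasPrice *= b.GasMultiplier
		b.current = time.Duration(b.MempoolTTL) * b.BlockTime
		return b.current
	}
	if b.current == 0 {
		b.current = 100 * time.Millisecond // assumed initial step
	} else {
		b.current *= 2
	}
	if b.current > b.BlockTime {
		b.current = b.BlockTime
	}
	return b.current
}

// onSuccess resets the backoff and eases the gas price back down,
// never dropping below the initial price.
func (b *backoffState) onSuccess() {
	b.current = 0
	if b.GasMultiplier > 0 {
		b.gasPrice /= b.GasMultiplier
	}
	if b.gasPrice < b.initialGas {
		b.gasPrice = b.initialGas
	}
}
```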