fix: only subscribe to sampled subnets #8181

Merged
wemeetagain merged 6 commits into unstable from te/peerDAS_do_not_subscribe_to_all_subnets_3
Aug 13, 2025

Conversation

@twoeths
Contributor

@twoeths twoeths commented Aug 12, 2025

Motivation

  • right now we always subscribe to all column subnets, which is unnecessary because we only need the columns that we custody/sample
  • that's why I see 128 logs like the one below per slot, which could degrade performance; we only need 8 per slot
 debug: Received gossip dataColumn slot=198378, root=0x9945…49af, curentSlot=198378, peerId=16Uiu2HAm2J4ZRFnf7qWvu8VBRFT66E5XFWnRTwYTahB2oRJom5ji, delaySec=1.5910000801086426, gossipIndex=113, columnIndex=113, pending=data_column, haveColumns=31, expectedColumns=8, recvToValLatency=0.0009999275207519531, recvToValidation=0.0009999275207519531, validationTime=0

Description

  • only subscribe to custody/sampling subnets
  • track sent peers per data column subnet when publishing blocks
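
The first change above can be sketched roughly as follows. This is an illustrative sketch, not Lodestar's actual API: the function names are hypothetical, and the column-to-subnet mapping assumes the spec rule `column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT`.

```typescript
// Hypothetical sketch: subscribe only to sampled subnets, not all 128.
// Names are illustrative, not Lodestar internals.
const DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128;

// Map a column index to its gossip subnet (per the PeerDAS spec rule).
function computeSubnetForDataColumn(columnIndex: number): number {
  return columnIndex % DATA_COLUMN_SIDECAR_SUBNET_COUNT;
}

// Given the columns this node custodies/samples, return the deduplicated
// set of subnets that actually need a gossip subscription.
function sampledSubnets(sampledColumns: number[]): Set<number> {
  return new Set(sampledColumns.map(computeSubnetForDataColumn));
}

// e.g. the 8 sampled columns from the logs below yield 8 subscriptions
const subnets = sampledSubnets([101, 83, 62, 56, 51, 42, 40, 30]);
```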

Test result

@codecov

codecov bot commented Aug 12, 2025

Codecov Report

❌ Patch coverage is 21.81818% with 43 lines in your changes missing coverage. Please review.
✅ Project coverage is 54.22%. Comparing base (aac4d9d) to head (fd2724c).
⚠️ Report is 6 commits behind head on unstable.

Additional details and impacted files
@@            Coverage Diff            @@
##           unstable    #8181   +/-   ##
=========================================
  Coverage     54.22%   54.22%           
=========================================
  Files           843      843           
  Lines         63365    63396   +31     
  Branches       4795     4794    -1     
=========================================
+ Hits          34361    34378   +17     
- Misses        28928    28943   +15     
+ Partials         76       75    -1     

@github-actions
Contributor

github-actions bot commented Aug 12, 2025

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 192a80f Previous: ee99d3f Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.1009 ms/op 768.84 us/op 1.43
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 40.304 us/op 28.801 us/op 1.40
BLS verify - blst 1.1299 ms/op 840.75 us/op 1.34
BLS verifyMultipleSignatures 3 - blst 1.4358 ms/op 1.3554 ms/op 1.06
BLS verifyMultipleSignatures 8 - blst 2.1454 ms/op 2.0672 ms/op 1.04
BLS verifyMultipleSignatures 32 - blst 6.5352 ms/op 4.4808 ms/op 1.46
BLS verifyMultipleSignatures 64 - blst 11.543 ms/op 8.3589 ms/op 1.38
BLS verifyMultipleSignatures 128 - blst 22.550 ms/op 15.943 ms/op 1.41
BLS deserializing 10000 signatures 773.66 ms/op 637.67 ms/op 1.21
BLS deserializing 100000 signatures 7.5993 s/op 6.5210 s/op 1.17
BLS verifyMultipleSignatures - same message - 3 - blst 971.83 us/op 999.61 us/op 0.97
BLS verifyMultipleSignatures - same message - 8 - blst 1.0831 ms/op 1.0257 ms/op 1.06
BLS verifyMultipleSignatures - same message - 32 - blst 1.7553 ms/op 1.6635 ms/op 1.06
BLS verifyMultipleSignatures - same message - 64 - blst 2.6757 ms/op 2.4828 ms/op 1.08
BLS verifyMultipleSignatures - same message - 128 - blst 4.4906 ms/op 4.2213 ms/op 1.06
BLS aggregatePubkeys 32 - blst 20.221 us/op 17.978 us/op 1.12
BLS aggregatePubkeys 128 - blst 73.089 us/op 64.995 us/op 1.12
notSeenSlots=1 numMissedVotes=1 numBadVotes=10 59.277 ms/op 37.661 ms/op 1.57
notSeenSlots=1 numMissedVotes=0 numBadVotes=4 54.046 ms/op 44.697 ms/op 1.21
notSeenSlots=2 numMissedVotes=1 numBadVotes=10 46.756 ms/op 34.682 ms/op 1.35
getSlashingsAndExits - default max 72.754 us/op 51.419 us/op 1.41
getSlashingsAndExits - 2k 276.63 us/op 370.67 us/op 0.75
proposeBlockBody type=full, size=empty 5.4657 ms/op 5.8195 ms/op 0.94
isKnown best case - 1 super set check 204.00 ns/op 420.00 ns/op 0.49
isKnown normal case - 2 super set checks 203.00 ns/op 420.00 ns/op 0.48
isKnown worse case - 16 super set checks 204.00 ns/op 421.00 ns/op 0.48
InMemoryCheckpointStateCache - add get delete 2.3920 us/op 2.5920 us/op 0.92
validate api signedAggregateAndProof - struct 1.7261 ms/op 1.7589 ms/op 0.98
validate gossip signedAggregateAndProof - struct 1.7251 ms/op 1.8571 ms/op 0.93
batch validate gossip attestation - vc 640000 - chunk 32 112.06 us/op 110.27 us/op 1.02
batch validate gossip attestation - vc 640000 - chunk 64 99.937 us/op 97.741 us/op 1.02
batch validate gossip attestation - vc 640000 - chunk 128 96.073 us/op 93.402 us/op 1.03
batch validate gossip attestation - vc 640000 - chunk 256 97.072 us/op 96.842 us/op 1.00
pickEth1Vote - no votes 981.94 us/op 797.97 us/op 1.23
pickEth1Vote - max votes 5.9251 ms/op 7.4245 ms/op 0.80
pickEth1Vote - Eth1Data hashTreeRoot value x2048 10.672 ms/op 11.864 ms/op 0.90
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 17.160 ms/op 22.489 ms/op 0.76
pickEth1Vote - Eth1Data fastSerialize value x2048 506.50 us/op 351.74 us/op 1.44
pickEth1Vote - Eth1Data fastSerialize tree x2048 2.2901 ms/op 3.1452 ms/op 0.73
bytes32 toHexString 374.00 ns/op 538.00 ns/op 0.70
bytes32 Buffer.toString(hex) 262.00 ns/op 447.00 ns/op 0.59
bytes32 Buffer.toString(hex) from Uint8Array 345.00 ns/op 520.00 ns/op 0.66
bytes32 Buffer.toString(hex) + 0x 260.00 ns/op 443.00 ns/op 0.59
Object access 1 prop 0.12300 ns/op 0.32800 ns/op 0.38
Map access 1 prop 0.13300 ns/op 0.33100 ns/op 0.40
Object get x1000 5.9570 ns/op 5.1430 ns/op 1.16
Map get x1000 6.5030 ns/op 5.7800 ns/op 1.13
Object set x1000 28.349 ns/op 25.361 ns/op 1.12
Map set x1000 19.122 ns/op 18.924 ns/op 1.01
Return object 10000 times 0.29260 ns/op 0.29740 ns/op 0.98
Throw Error 10000 times 4.2426 us/op 3.6538 us/op 1.16
toHex 140.60 ns/op 100.19 ns/op 1.40
Buffer.from 122.14 ns/op 91.431 ns/op 1.34
shared Buffer 84.091 ns/op 65.912 ns/op 1.28
fastMsgIdFn sha256 / 200 bytes 2.1430 us/op 2.0450 us/op 1.05
fastMsgIdFn h32 xxhash / 200 bytes 210.00 ns/op 404.00 ns/op 0.52
fastMsgIdFn h64 xxhash / 200 bytes 264.00 ns/op 464.00 ns/op 0.57
fastMsgIdFn sha256 / 1000 bytes 7.0290 us/op 5.9340 us/op 1.18
fastMsgIdFn h32 xxhash / 1000 bytes 336.00 ns/op 526.00 ns/op 0.64
fastMsgIdFn h64 xxhash / 1000 bytes 336.00 ns/op 530.00 ns/op 0.63
fastMsgIdFn sha256 / 10000 bytes 62.900 us/op 49.287 us/op 1.28
fastMsgIdFn h32 xxhash / 10000 bytes 1.7590 us/op 1.9410 us/op 0.91
fastMsgIdFn h64 xxhash / 10000 bytes 1.1600 us/op 1.3660 us/op 0.85
send data - 1000 256B messages 14.507 ms/op 17.316 ms/op 0.84
send data - 1000 512B messages 18.054 ms/op 21.782 ms/op 0.83
send data - 1000 1024B messages 25.534 ms/op 25.482 ms/op 1.00
send data - 1000 1200B messages 25.423 ms/op 20.623 ms/op 1.23
send data - 1000 2048B messages 26.336 ms/op 22.203 ms/op 1.19
send data - 1000 4096B messages 27.014 ms/op 29.426 ms/op 0.92
send data - 1000 16384B messages 42.661 ms/op 38.293 ms/op 1.11
send data - 1000 65536B messages 110.97 ms/op 79.861 ms/op 1.39
enrSubnets - fastDeserialize 64 bits 931.00 ns/op 989.00 ns/op 0.94
enrSubnets - ssz BitVector 64 bits 330.00 ns/op 500.00 ns/op 0.66
enrSubnets - fastDeserialize 4 bits 135.00 ns/op 333.00 ns/op 0.41
enrSubnets - ssz BitVector 4 bits 326.00 ns/op 500.00 ns/op 0.65
prioritizePeers score -10:0 att 32-0.1 sync 2-0 122.07 us/op 103.64 us/op 1.18
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 143.36 us/op 130.53 us/op 1.10
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 205.24 us/op 210.45 us/op 0.98
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 389.07 us/op 386.01 us/op 1.01
prioritizePeers score 0:0 att 64-1 sync 4-1 470.46 us/op 482.83 us/op 0.97
array of 16000 items push then shift 1.6122 us/op 1.3400 us/op 1.20
LinkedList of 16000 items push then shift 7.0240 ns/op 6.3700 ns/op 1.10
array of 16000 items push then pop 75.228 ns/op 64.846 ns/op 1.16
LinkedList of 16000 items push then pop 6.9420 ns/op 6.2090 ns/op 1.12
array of 24000 items push then shift 2.4114 us/op 1.9271 us/op 1.25
LinkedList of 24000 items push then shift 7.0730 ns/op 6.7250 ns/op 1.05
array of 24000 items push then pop 99.924 ns/op 102.18 ns/op 0.98
LinkedList of 24000 items push then pop 7.2420 ns/op 6.3660 ns/op 1.14
intersect bitArray bitLen 8 6.4020 ns/op 5.4710 ns/op 1.17
intersect array and set length 8 38.009 ns/op 32.834 ns/op 1.16
intersect bitArray bitLen 128 29.814 ns/op 26.901 ns/op 1.11
intersect array and set length 128 623.86 ns/op 549.11 ns/op 1.14
bitArray.getTrueBitIndexes() bitLen 128 1.0150 us/op 1.1810 us/op 0.86
bitArray.getTrueBitIndexes() bitLen 248 1.7850 us/op 1.8820 us/op 0.95
bitArray.getTrueBitIndexes() bitLen 512 3.6350 us/op 3.7190 us/op 0.98
Buffer.concat 32 items 617.00 ns/op 752.00 ns/op 0.82
Uint8Array.set 32 items 1.0500 us/op 1.2170 us/op 0.86
Buffer.copy 2.1880 us/op 2.4350 us/op 0.90
Uint8Array.set - with subarray 2.2170 us/op 1.7760 us/op 1.25
Uint8Array.set - without subarray 980.00 ns/op 1.1550 us/op 0.85
getUint32 - dataview 200.00 ns/op 388.00 ns/op 0.52
getUint32 - manual 134.00 ns/op 323.00 ns/op 0.41
Set add up to 64 items then delete first 2.1927 us/op 1.7712 us/op 1.24
OrderedSet add up to 64 items then delete first 3.3193 us/op 2.7681 us/op 1.20
Set add up to 64 items then delete last 2.6809 us/op 2.1367 us/op 1.25
OrderedSet add up to 64 items then delete last 3.6099 us/op 3.0613 us/op 1.18
Set add up to 64 items then delete middle 2.3077 us/op 2.0697 us/op 1.12
OrderedSet add up to 64 items then delete middle 6.0989 us/op 4.6166 us/op 1.32
Set add up to 128 items then delete first 4.9861 us/op 4.1721 us/op 1.20
OrderedSet add up to 128 items then delete first 7.8919 us/op 6.6090 us/op 1.19
Set add up to 128 items then delete last 4.8474 us/op 4.3363 us/op 1.12
OrderedSet add up to 128 items then delete last 7.2116 us/op 6.1740 us/op 1.17
Set add up to 128 items then delete middle 4.7388 us/op 4.0023 us/op 1.18
OrderedSet add up to 128 items then delete middle 13.720 us/op 12.191 us/op 1.13
Set add up to 256 items then delete first 10.212 us/op 8.0472 us/op 1.27
OrderedSet add up to 256 items then delete first 15.882 us/op 12.848 us/op 1.24
Set add up to 256 items then delete last 9.7262 us/op 7.7724 us/op 1.25
OrderedSet add up to 256 items then delete last 14.326 us/op 12.019 us/op 1.19
Set add up to 256 items then delete middle 9.5831 us/op 7.8531 us/op 1.22
OrderedSet add up to 256 items then delete middle 40.851 us/op 38.815 us/op 1.05
transfer serialized Status (84 B) 2.2150 us/op 2.0250 us/op 1.09
copy serialized Status (84 B) 1.4920 us/op 1.3610 us/op 1.10
transfer serialized SignedVoluntaryExit (112 B) 2.2290 us/op 2.0480 us/op 1.09
copy serialized SignedVoluntaryExit (112 B) 1.1860 us/op 1.3450 us/op 0.88
transfer serialized ProposerSlashing (416 B) 2.3090 us/op 2.4000 us/op 0.96
copy serialized ProposerSlashing (416 B) 1.2500 us/op 1.5490 us/op 0.81
transfer serialized Attestation (485 B) 2.3170 us/op 2.0530 us/op 1.13
copy serialized Attestation (485 B) 1.2640 us/op 1.5280 us/op 0.83
transfer serialized AttesterSlashing (33232 B) 2.3890 us/op 2.2560 us/op 1.06
copy serialized AttesterSlashing (33232 B) 3.2850 us/op 3.3480 us/op 0.98
transfer serialized Small SignedBeaconBlock (128000 B) 2.7730 us/op 2.1330 us/op 1.30
copy serialized Small SignedBeaconBlock (128000 B) 9.1110 us/op 5.5400 us/op 1.64
transfer serialized Avg SignedBeaconBlock (200000 B) 3.1100 us/op 2.3900 us/op 1.30
copy serialized Avg SignedBeaconBlock (200000 B) 12.854 us/op 8.6160 us/op 1.49
transfer serialized BlobsSidecar (524380 B) 3.1540 us/op 3.1600 us/op 1.00
copy serialized BlobsSidecar (524380 B) 95.009 us/op 77.598 us/op 1.22
transfer serialized Big SignedBeaconBlock (1000000 B) 3.7120 us/op 3.7260 us/op 1.00
copy serialized Big SignedBeaconBlock (1000000 B) 119.04 us/op 255.14 us/op 0.47
pass gossip attestations to forkchoice per slot 2.7951 ms/op 3.1009 ms/op 0.90
forkChoice updateHead vc 100000 bc 64 eq 0 461.14 us/op 362.73 us/op 1.27
forkChoice updateHead vc 600000 bc 64 eq 0 2.8717 ms/op 2.1789 ms/op 1.32
forkChoice updateHead vc 1000000 bc 64 eq 0 4.9814 ms/op 4.6748 ms/op 1.07
forkChoice updateHead vc 600000 bc 320 eq 0 3.2083 ms/op 2.9247 ms/op 1.10
forkChoice updateHead vc 600000 bc 1200 eq 0 3.0459 ms/op 2.7617 ms/op 1.10
forkChoice updateHead vc 600000 bc 7200 eq 0 3.2566 ms/op 2.7859 ms/op 1.17
forkChoice updateHead vc 600000 bc 64 eq 1000 10.352 ms/op 9.4914 ms/op 1.09
forkChoice updateHead vc 600000 bc 64 eq 10000 10.571 ms/op 9.3189 ms/op 1.13
forkChoice updateHead vc 600000 bc 64 eq 300000 14.014 ms/op 11.126 ms/op 1.26
computeDeltas 500000 validators 300 proto nodes 4.2811 ms/op 3.3401 ms/op 1.28
computeDeltas 500000 validators 1200 proto nodes 4.4492 ms/op 3.3599 ms/op 1.32
computeDeltas 500000 validators 7200 proto nodes 4.6558 ms/op 3.1308 ms/op 1.49
computeDeltas 750000 validators 300 proto nodes 6.2809 ms/op 4.7186 ms/op 1.33
computeDeltas 750000 validators 1200 proto nodes 6.7904 ms/op 4.7857 ms/op 1.42
computeDeltas 750000 validators 7200 proto nodes 6.5690 ms/op 4.7445 ms/op 1.38
computeDeltas 1400000 validators 300 proto nodes 11.467 ms/op 8.9989 ms/op 1.27
computeDeltas 1400000 validators 1200 proto nodes 11.691 ms/op 9.1301 ms/op 1.28
computeDeltas 1400000 validators 7200 proto nodes 11.630 ms/op 9.1943 ms/op 1.26
computeDeltas 2100000 validators 300 proto nodes 17.540 ms/op 14.403 ms/op 1.22
computeDeltas 2100000 validators 1200 proto nodes 17.891 ms/op 14.069 ms/op 1.27
computeDeltas 2100000 validators 7200 proto nodes 17.624 ms/op 13.762 ms/op 1.28
altair processAttestation - 250000 vs - 7PWei normalcase 2.5115 ms/op 2.2620 ms/op 1.11
altair processAttestation - 250000 vs - 7PWei worstcase 4.2077 ms/op 2.9472 ms/op 1.43
altair processAttestation - setStatus - 1/6 committees join 146.95 us/op 103.29 us/op 1.42
altair processAttestation - setStatus - 1/3 committees join 263.74 us/op 192.70 us/op 1.37
altair processAttestation - setStatus - 1/2 committees join 363.74 us/op 285.96 us/op 1.27
altair processAttestation - setStatus - 2/3 committees join 463.56 us/op 368.63 us/op 1.26
altair processAttestation - setStatus - 4/5 committees join 887.08 us/op 511.51 us/op 1.73
altair processAttestation - setStatus - 100% committees join 726.63 us/op 612.64 us/op 1.19
altair processBlock - 250000 vs - 7PWei normalcase 5.4373 ms/op 3.3786 ms/op 1.61
altair processBlock - 250000 vs - 7PWei normalcase hashState 32.926 ms/op 30.539 ms/op 1.08
altair processBlock - 250000 vs - 7PWei worstcase 48.270 ms/op 32.566 ms/op 1.48
altair processBlock - 250000 vs - 7PWei worstcase hashState 85.998 ms/op 65.959 ms/op 1.30
phase0 processBlock - 250000 vs - 7PWei normalcase 1.7723 ms/op 1.2233 ms/op 1.45
phase0 processBlock - 250000 vs - 7PWei worstcase 26.811 ms/op 21.299 ms/op 1.26
altair processEth1Data - 250000 vs - 7PWei normalcase 382.00 us/op 280.51 us/op 1.36
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:15 6.6590 us/op 5.2880 us/op 1.26
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:219 40.138 us/op 34.574 us/op 1.16
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:42 12.108 us/op 9.3280 us/op 1.30
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:18 7.8390 us/op 5.7940 us/op 1.35
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1020 191.23 us/op 154.04 us/op 1.24
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11777 2.2507 ms/op 1.3558 ms/op 1.66
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.4594 ms/op 1.8121 ms/op 1.36
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.5163 ms/op 1.8181 ms/op 1.38
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 5.0491 ms/op 3.6453 ms/op 1.39
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.9340 ms/op 1.8387 ms/op 1.60
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.9054 ms/op 3.7619 ms/op 1.30
Tree 40 250000 create 459.33 ms/op 366.61 ms/op 1.25
Tree 40 250000 get(125000) 146.42 ns/op 108.73 ns/op 1.35
Tree 40 250000 set(125000) 1.5520 us/op 1.2231 us/op 1.27
Tree 40 250000 toArray() 19.737 ms/op 10.950 ms/op 1.80
Tree 40 250000 iterate all - toArray() + loop 19.168 ms/op 11.220 ms/op 1.71
Tree 40 250000 iterate all - get(i) 56.029 ms/op 48.620 ms/op 1.15
Array 250000 create 2.4456 ms/op 2.3918 ms/op 1.02
Array 250000 clone - spread 789.53 us/op 635.51 us/op 1.24
Array 250000 get(125000) 0.40500 ns/op 0.56800 ns/op 0.71
Array 250000 set(125000) 0.45100 ns/op 0.59700 ns/op 0.76
Array 250000 iterate all - loop 103.68 us/op 76.995 us/op 1.35
phase0 afterProcessEpoch - 250000 vs - 7PWei 41.382 ms/op 38.436 ms/op 1.08
Array.fill - length 1000000 3.5133 ms/op 2.4763 ms/op 1.42
Array push - length 1000000 13.826 ms/op 9.6303 ms/op 1.44
Array.get 0.27472 ns/op 0.26641 ns/op 1.03
Uint8Array.get 0.44231 ns/op 0.34837 ns/op 1.27
phase0 beforeProcessEpoch - 250000 vs - 7PWei 16.029 ms/op 14.436 ms/op 1.11
altair processEpoch - mainnet_e81889 287.69 ms/op 263.53 ms/op 1.09
mainnet_e81889 - altair beforeProcessEpoch 16.775 ms/op 15.880 ms/op 1.06
mainnet_e81889 - altair processJustificationAndFinalization 6.6590 us/op 4.7190 us/op 1.41
mainnet_e81889 - altair processInactivityUpdates 4.1533 ms/op 3.7631 ms/op 1.10
mainnet_e81889 - altair processRewardsAndPenalties 52.940 ms/op 35.191 ms/op 1.50
mainnet_e81889 - altair processRegistryUpdates 855.00 ns/op 944.00 ns/op 0.91
mainnet_e81889 - altair processSlashings 182.00 ns/op 413.00 ns/op 0.44
mainnet_e81889 - altair processEth1DataReset 174.00 ns/op 438.00 ns/op 0.40
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.1793 ms/op 982.51 us/op 1.20
mainnet_e81889 - altair processSlashingsReset 926.00 ns/op 1.1680 us/op 0.79
mainnet_e81889 - altair processRandaoMixesReset 1.2180 us/op 1.4070 us/op 0.87
mainnet_e81889 - altair processHistoricalRootsUpdate 178.00 ns/op 406.00 ns/op 0.44
mainnet_e81889 - altair processParticipationFlagUpdates 524.00 ns/op 708.00 ns/op 0.74
mainnet_e81889 - altair processSyncCommitteeUpdates 140.00 ns/op 358.00 ns/op 0.39
mainnet_e81889 - altair afterProcessEpoch 43.655 ms/op 39.496 ms/op 1.11
capella processEpoch - mainnet_e217614 979.33 ms/op 916.42 ms/op 1.07
mainnet_e217614 - capella beforeProcessEpoch 66.357 ms/op 67.136 ms/op 0.99
mainnet_e217614 - capella processJustificationAndFinalization 5.4420 us/op 4.4200 us/op 1.23
mainnet_e217614 - capella processInactivityUpdates 14.177 ms/op 13.588 ms/op 1.04
mainnet_e217614 - capella processRewardsAndPenalties 194.87 ms/op 194.07 ms/op 1.00
mainnet_e217614 - capella processRegistryUpdates 6.5650 us/op 5.4130 us/op 1.21
mainnet_e217614 - capella processSlashings 195.00 ns/op 406.00 ns/op 0.48
mainnet_e217614 - capella processEth1DataReset 182.00 ns/op 403.00 ns/op 0.45
mainnet_e217614 - capella processEffectiveBalanceUpdates 4.3124 ms/op 3.4878 ms/op 1.24
mainnet_e217614 - capella processSlashingsReset 1.1090 us/op 1.2140 us/op 0.91
mainnet_e217614 - capella processRandaoMixesReset 1.3090 us/op 1.6010 us/op 0.82
mainnet_e217614 - capella processHistoricalRootsUpdate 197.00 ns/op 418.00 ns/op 0.47
mainnet_e217614 - capella processParticipationFlagUpdates 550.00 ns/op 731.00 ns/op 0.75
mainnet_e217614 - capella afterProcessEpoch 116.22 ms/op 106.96 ms/op 1.09
phase0 processEpoch - mainnet_e58758 321.55 ms/op 294.99 ms/op 1.09
mainnet_e58758 - phase0 beforeProcessEpoch 80.111 ms/op 78.272 ms/op 1.02
mainnet_e58758 - phase0 processJustificationAndFinalization 8.4950 us/op 5.5580 us/op 1.53
mainnet_e58758 - phase0 processRewardsAndPenalties 46.039 ms/op 38.022 ms/op 1.21
mainnet_e58758 - phase0 processRegistryUpdates 3.0900 us/op 2.8570 us/op 1.08
mainnet_e58758 - phase0 processSlashings 189.00 ns/op 409.00 ns/op 0.46
mainnet_e58758 - phase0 processEth1DataReset 176.00 ns/op 397.00 ns/op 0.44
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.1533 ms/op 918.45 us/op 1.26
mainnet_e58758 - phase0 processSlashingsReset 965.00 ns/op 1.0520 us/op 0.92
mainnet_e58758 - phase0 processRandaoMixesReset 1.2200 us/op 1.3170 us/op 0.93
mainnet_e58758 - phase0 processHistoricalRootsUpdate 198.00 ns/op 408.00 ns/op 0.49
mainnet_e58758 - phase0 processParticipationRecordUpdates 930.00 ns/op 1.2710 us/op 0.73
mainnet_e58758 - phase0 afterProcessEpoch 35.116 ms/op 33.370 ms/op 1.05
phase0 processEffectiveBalanceUpdates - 250000 normalcase 2.4301 ms/op 992.33 us/op 2.45
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 2.8588 ms/op 1.7337 ms/op 1.65
altair processInactivityUpdates - 250000 normalcase 20.375 ms/op 16.878 ms/op 1.21
altair processInactivityUpdates - 250000 worstcase 20.206 ms/op 19.826 ms/op 1.02
phase0 processRegistryUpdates - 250000 normalcase 7.2500 us/op 4.8260 us/op 1.50
phase0 processRegistryUpdates - 250000 badcase_full_deposits 382.45 us/op 275.96 us/op 1.39
phase0 processRegistryUpdates - 250000 worstcase 0.5 119.30 ms/op 85.074 ms/op 1.40
altair processRewardsAndPenalties - 250000 normalcase 27.968 ms/op 23.969 ms/op 1.17
altair processRewardsAndPenalties - 250000 worstcase 33.996 ms/op 25.491 ms/op 1.33
phase0 getAttestationDeltas - 250000 normalcase 6.9856 ms/op 5.7704 ms/op 1.21
phase0 getAttestationDeltas - 250000 worstcase 6.2437 ms/op 15.686 ms/op 0.40
phase0 processSlashings - 250000 worstcase 113.20 us/op 83.877 us/op 1.35
altair processSyncCommitteeUpdates - 250000 11.347 ms/op 9.4388 ms/op 1.20
BeaconState.hashTreeRoot - No change 227.00 ns/op 424.00 ns/op 0.54
BeaconState.hashTreeRoot - 1 full validator 90.669 us/op 57.908 us/op 1.57
BeaconState.hashTreeRoot - 32 full validator 766.20 us/op 618.78 us/op 1.24
BeaconState.hashTreeRoot - 512 full validator 12.402 ms/op 7.3039 ms/op 1.70
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 113.54 us/op 70.422 us/op 1.61
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.8639 ms/op 1.0637 ms/op 1.75
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 25.357 ms/op 14.977 ms/op 1.69
BeaconState.hashTreeRoot - 1 balances 80.419 us/op 56.744 us/op 1.42
BeaconState.hashTreeRoot - 32 balances 757.70 us/op 525.42 us/op 1.44
BeaconState.hashTreeRoot - 512 balances 8.8963 ms/op 5.5935 ms/op 1.59
BeaconState.hashTreeRoot - 250000 balances 225.04 ms/op 175.03 ms/op 1.29
aggregationBits - 2048 els - zipIndexesInBitList 23.409 us/op 20.818 us/op 1.12
byteArrayEquals 32 57.088 ns/op 48.133 ns/op 1.19
Buffer.compare 32 18.474 ns/op 15.091 ns/op 1.22
byteArrayEquals 1024 1.7033 us/op 1.2661 us/op 1.35
Buffer.compare 1024 26.288 ns/op 22.581 ns/op 1.16
byteArrayEquals 16384 26.936 us/op 19.776 us/op 1.36
Buffer.compare 16384 209.45 ns/op 163.33 ns/op 1.28
byteArrayEquals 123687377 195.83 ms/op 151.67 ms/op 1.29
Buffer.compare 123687377 6.4174 ms/op 5.3376 ms/op 1.20
byteArrayEquals 32 - diff last byte 53.420 ns/op 46.143 ns/op 1.16
Buffer.compare 32 - diff last byte 17.277 ns/op 15.161 ns/op 1.14
byteArrayEquals 1024 - diff last byte 1.5929 us/op 1.2281 us/op 1.30
Buffer.compare 1024 - diff last byte 24.932 ns/op 21.916 ns/op 1.14
byteArrayEquals 16384 - diff last byte 25.150 us/op 19.560 us/op 1.29
Buffer.compare 16384 - diff last byte 196.59 ns/op 192.76 ns/op 1.02
byteArrayEquals 123687377 - diff last byte 194.41 ms/op 149.69 ms/op 1.30
Buffer.compare 123687377 - diff last byte 7.4780 ms/op 4.0524 ms/op 1.85
byteArrayEquals 32 - random bytes 5.2150 ns/op 4.8870 ns/op 1.07
Buffer.compare 32 - random bytes 17.538 ns/op 15.727 ns/op 1.12
byteArrayEquals 1024 - random bytes 5.9110 ns/op 4.8540 ns/op 1.22
Buffer.compare 1024 - random bytes 17.716 ns/op 16.095 ns/op 1.10
byteArrayEquals 16384 - random bytes 6.8370 ns/op 4.9580 ns/op 1.38
Buffer.compare 16384 - random bytes 17.749 ns/op 15.954 ns/op 1.11
byteArrayEquals 123687377 - random bytes 8.0000 ns/op 7.9600 ns/op 1.01
Buffer.compare 123687377 - random bytes 18.930 ns/op 18.890 ns/op 1.00
regular array get 100000 times 33.757 us/op 31.168 us/op 1.08
wrappedArray get 100000 times 33.765 us/op 30.222 us/op 1.12
arrayWithProxy get 100000 times 12.449 ms/op 8.7653 ms/op 1.42
ssz.Root.equals 47.318 ns/op 43.216 ns/op 1.09
byteArrayEquals 46.281 ns/op 41.874 ns/op 1.11
Buffer.compare 10.584 ns/op 9.3910 ns/op 1.13
processSlot - 1 slots 12.589 us/op 8.8880 us/op 1.42
processSlot - 32 slots 2.6220 ms/op 2.6615 ms/op 0.99
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 3.1317 ms/op 2.5902 ms/op 1.21
getCommitteeAssignments - req 1 vs - 250000 vc 2.1629 ms/op 1.8308 ms/op 1.18
getCommitteeAssignments - req 100 vs - 250000 vc 4.1694 ms/op 3.6604 ms/op 1.14
getCommitteeAssignments - req 1000 vs - 250000 vc 4.4412 ms/op 3.8842 ms/op 1.14
findModifiedValidators - 10000 modified validators 716.35 ms/op 780.77 ms/op 0.92
findModifiedValidators - 1000 modified validators 697.55 ms/op 682.60 ms/op 1.02
findModifiedValidators - 100 modified validators 264.49 ms/op 209.07 ms/op 1.27
findModifiedValidators - 10 modified validators 169.69 ms/op 132.05 ms/op 1.29
findModifiedValidators - 1 modified validators 171.55 ms/op 142.73 ms/op 1.20
findModifiedValidators - no difference 194.72 ms/op 136.86 ms/op 1.42
compare ViewDUs 6.3452 s/op 6.2073 s/op 1.02
compare each validator Uint8Array 2.0707 s/op 1.2054 s/op 1.72
compare ViewDU to Uint8Array 996.81 ms/op 768.72 ms/op 1.30
migrate state 1000000 validators, 24 modified, 0 new 882.02 ms/op 812.61 ms/op 1.09
migrate state 1000000 validators, 1700 modified, 1000 new 1.1733 s/op 1.0226 s/op 1.15
migrate state 1000000 validators, 3400 modified, 2000 new 1.5263 s/op 1.2979 s/op 1.18
migrate state 1500000 validators, 24 modified, 0 new 1.0256 s/op 800.93 ms/op 1.28
migrate state 1500000 validators, 1700 modified, 1000 new 1.2203 s/op 1.0181 s/op 1.20
migrate state 1500000 validators, 3400 modified, 2000 new 1.3960 s/op 1.3727 s/op 1.02
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.5400 ns/op 6.0000 ns/op 0.76
state getBlockRootAtSlot - 250000 vs - 7PWei 1.2082 us/op 522.30 ns/op 2.31
naive computeProposerIndex 100000 validators 55.515 ms/op 41.141 ms/op 1.35
computeProposerIndex 100000 validators 1.8007 ms/op 1.3431 ms/op 1.34
naiveGetNextSyncCommitteeIndices 1000 validators 12.218 s/op 5.9997 s/op 2.04
getNextSyncCommitteeIndices 1000 validators 167.04 ms/op 95.847 ms/op 1.74
naiveGetNextSyncCommitteeIndices 10000 validators 12.336 s/op 5.7910 s/op 2.13
getNextSyncCommitteeIndices 10000 validators 163.79 ms/op 96.511 ms/op 1.70
naiveGetNextSyncCommitteeIndices 100000 validators 11.602 s/op 5.9189 s/op 1.96
getNextSyncCommitteeIndices 100000 validators 120.45 ms/op 102.78 ms/op 1.17
naive computeShuffledIndex 100000 validators 24.588 s/op 20.185 s/op 1.22
cached computeShuffledIndex 100000 validators 568.19 ms/op 484.68 ms/op 1.17
naive computeShuffledIndex 2000000 validators 538.24 s/op 417.19 s/op 1.29
cached computeShuffledIndex 2000000 validators 36.901 s/op 15.769 s/op 2.34
computeProposers - vc 250000 606.42 us/op 520.12 us/op 1.17
computeEpochShuffling - vc 250000 42.292 ms/op 36.222 ms/op 1.17
getNextSyncCommittee - vc 250000 10.614 ms/op 8.7499 ms/op 1.21
computeSigningRoot for AttestationData 20.282 us/op 17.877 us/op 1.13
hash AttestationData serialized data then Buffer.toString(base64) 1.5540 us/op 1.1844 us/op 1.31
toHexString serialized data 1.1349 us/op 952.50 ns/op 1.19
Buffer.toString(base64) 139.51 ns/op 100.80 ns/op 1.38
nodejs block root to RootHex using toHex 150.97 ns/op 110.84 ns/op 1.36
nodejs block root to RootHex using toRootHex 87.909 ns/op 75.092 ns/op 1.17
browser block root to RootHex using the deprecated toHexString 212.52 ns/op 186.19 ns/op 1.14
browser block root to RootHex using toHex 170.79 ns/op 158.22 ns/op 1.08
browser block root to RootHex using toRootHex 162.07 ns/op 145.29 ns/op 1.12

by benchmarkbot/action

@twoeths
Contributor Author

twoeths commented Aug 12, 2025

with a regular node, we subscribed to only 8 columns:


2025-08-12 14:19:26.652 | Aug-12 07:19:26.575[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_101/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.574[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_83/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.574[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_62/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.574[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_56/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.573[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_51/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.573[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_42/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.573[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_40/ssz_snappy
2025-08-12 14:19:26.652 | Aug-12 07:19:26.572[network]       verbose: Subscribe to gossipsub topic topic=/eth2/132d87f6/data_column_sidecar_30/ssz_snappy
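
The topic strings in these logs follow the format `/eth2/<forkDigest>/data_column_sidecar_<subnet>/ssz_snappy`. As a small illustrative sketch (the helper name is hypothetical, not Lodestar's API), they can be built from subnet indices like this:

```typescript
// Sketch: build the gossip topic string for a data column subnet,
// matching the format seen in the log above. Helper name is illustrative.
function dataColumnSidecarTopic(forkDigestHex: string, subnet: number): string {
  return `/eth2/${forkDigestHex}/data_column_sidecar_${subnet}/ssz_snappy`;
}

// The 8 subnets subscribed above, with the fork digest from the log.
const topics = [101, 83, 62, 56, 51, 42, 40, 30].map((s) =>
  dataColumnSidecarTopic("132d87f6", s)
);
// topics[0] === "/eth2/132d87f6/data_column_sidecar_101/ssz_snappy"
```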


@twoeths
Contributor Author

twoeths commented Aug 12, 2025

for each slot we only need to receive/validate 8 DataColumnSidecar


2025-08-12 14:29:36.787 | Aug-12 07:29:36.787[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmLF5Y9LjjCwv4rawA2nE8Pm7Nn72SkS9Li9fsuz7tNL4N, delaySec=0.7360000610351562, gossipSubnet=101, columnIndex=101, pending=null, haveColumns=8, expectedColumns=8, recvToValLatency=0.01399993896484375, recvToValidation=0.05099987983703613, validationTime=0.03699994087219238
2025-08-12 14:29:36.780 | Aug-12 07:29:36.780[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmFazaaFBw2pWMjfs3kuxYJpwK4yi1p8qVKvHKuyX51N7m, delaySec=0.7249999046325684, gossipSubnet=56, columnIndex=56, pending=data_column, haveColumns=7, expectedColumns=8, recvToValLatency=0.00800013542175293, recvToValidation=0.05500006675720215, validationTime=0.04699993133544922
2025-08-12 14:29:36.774 | Aug-12 07:29:36.774[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmLF5Y9LjjCwv4rawA2nE8Pm7Nn72SkS9Li9fsuz7tNL4N, delaySec=0.7079999446868896, gossipSubnet=30, columnIndex=30, pending=data_column, haveColumns=6, expectedColumns=8, recvToValLatency=0.02500009536743164, recvToValidation=0.06599998474121094, validationTime=0.0409998893737793
2025-08-12 14:29:36.768 | Aug-12 07:29:36.768[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmFazaaFBw2pWMjfs3kuxYJpwK4yi1p8qVKvHKuyX51N7m, delaySec=0.6789999008178711, gossipSubnet=51, columnIndex=51, pending=data_column, haveColumns=5, expectedColumns=8, recvToValLatency=0.051000118255615234, recvToValidation=0.08899998664855957, validationTime=0.037999868392944336
2025-08-12 14:29:36.759 | Aug-12 07:29:36.759[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmCH8RBkiq9PTTvKkGXfJkSjFPYz4hFMNqYdZcgVL4iNdM, delaySec=0.6740000247955322, gossipSubnet=83, columnIndex=83, pending=data_column, haveColumns=4, expectedColumns=8, recvToValLatency=0.0559999942779541, recvToValidation=0.08500003814697266, validationTime=0.029000043869018555
2025-08-12 14:29:36.749 | Aug-12 07:29:36.749[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmRgBTEBFN3o9dwwUsKMJopT3TXeFxrpcHR9R4wSLKz1fh, delaySec=0.6679999828338623, gossipSubnet=42, columnIndex=42, pending=data_column, haveColumns=3, expectedColumns=8, recvToValLatency=0.06200003623962402, recvToValidation=0.08100008964538574, validationTime=0.01900005340576172
2025-08-12 14:29:36.649 | Aug-12 07:29:36.649[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmLF5Y9LjjCwv4rawA2nE8Pm7Nn72SkS9Li9fsuz7tNL4N, delaySec=0.6389999389648438, gossipSubnet=40, columnIndex=40, pending=block, haveColumns=2, expectedColumns=null, recvToValLatency=0.002000093460083008, recvToValidation=0.009999990463256836, validationTime=0.007999897003173828
2025-08-12 14:29:36.589 | Aug-12 07:29:36.589[network]         debug: Received gossip dataColumn slot=28648, root=0x4cfe…4de3, currentSlot=28648, peerId=16Uiu2HAmLF5Y9LjjCwv4rawA2nE8Pm7Nn72SkS9Li9fsuz7tNL4N, delaySec=0.5799999237060547, gossipSubnet=62, columnIndex=62, pending=block, haveColumns=1, expectedColumns=null, recvToValLatency=0.0010001659393310547, recvToValidation=0.009000062942504883, validationTime=0.007999897003173828

verbose: Block processed slot=28648, root=0x4cfe3cc6ff33efc3684500ebf8e2e796cdfdc241079a7c3e62be21a9b2494de3, delaySec=0.7920000553131104


@twoeths twoeths marked this pull request as ready for review August 12, 2025 07:31
@twoeths twoeths requested a review from a team as a code owner August 12, 2025 07:31
@nflaig (Member) commented Aug 12, 2025

with a regular node, we subscribed to only 8 columns:

shouldn't a node without validators attached just have to custody 4 groups/columns?

CUSTODY_REQUIREMENT: 4,
VALIDATOR_CUSTODY_REQUIREMENT: 8,

@twoeths (Contributor Author) commented Aug 12, 2025

with a regular node, we subscribed to only 8 columns:

shouldn't a node without validators attached just have to custody 4 groups/columns?

CUSTODY_REQUIREMENT: 4,
VALIDATOR_CUSTODY_REQUIREMENT: 8,

with SAMPLES_PER_SLOT=8 we need to receive at least those 8 DataColumnSidecars in order to consider the data available,
so we have to subscribe to at least 8 subnets
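For illustration, the mapping from sampled columns to subscribed subnets can be sketched like this (the constant and helper names are assumptions mirroring the spec convention, not Lodestar's actual code):

```typescript
// Sketch only: assumed spec-style constant, not Lodestar's actual API.
const DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128;

// A column's gossip subnet is its index modulo the subnet count.
function computeSubnetForDataColumnSidecar(columnIndex: number): number {
  return columnIndex % DATA_COLUMN_SIDECAR_SUBNET_COUNT;
}

// Subscribe only to the subnets carrying columns we custody/sample
// (e.g. 8 of them), instead of all 128 subnets.
function subnetsToSubscribe(sampledColumnIndices: number[]): number[] {
  return [...new Set(sampledColumnIndices.map(computeSubnetForDataColumnSidecar))];
}
```

With 8 distinct sampled columns this yields 8 subnets, matching the `expectedColumns=8` seen in the logs above.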

@nflaig (Member) left a comment

also need to handle the case where our custody requirement increases

this.custodyConfig.updateTargetCustodyGroupCount(targetCustodyGroupCount);

// sent peers per topic are logged in network.publishGossip(), here we only track metrics for it
// starting from fulu, we have to push to 128 subnets so need to make sure we have enough sent peers per topic
// + 1 because we publish to beacon_block first
if (sentPeersArr.length < dataColumnSidecars.length + 1) {
@nflaig (Member) commented Aug 12, 2025

I wonder if this is too strict; technically, if there are supernodes in the network it should be sufficient to publish >=64 columns, as they have to reconstruct and re-gossip the missing columns

@twoeths (Contributor Author) replied:

there is a tiny chance that we reach non-supernodes before supernodes
also, recover_matrix is optional; a new client could join the network without implementing it at all

I don't see the benefit of not sending all columns, and block proposals rarely happen
also, as a block proposer we want to make sure our block is not reorged because data was too slow to become available

Member replied:

I don't see the benefit of not sending to all columns

we should ideally do that; just wondering if throwing an error here when it's <128 is the right call, or whether we should only error when it's <64

@twoeths (Contributor Author) replied:

it should always be an array with at least 129 items, the first one for beacon_block and the remaining ones for DataColumnSidecars
it would be an issue if I had hard-coded it as 129 (or 128 + 1); here I only compare against dataColumnSidecars.length + 1, which should be fine. Notice that for each DataColumnSidecar we publish to a topic at line 265 above

it should never reach this line
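The layout being described can be sketched as follows (a hypothetical helper mirroring the comment above, not the actual Lodestar code):

```typescript
// Hypothetical sketch: sentPeersArr[0] is the beacon_block publish result,
// sentPeersArr[i + 1] is the result for dataColumnSidecars[i].
function perColumnSentPeers(sentPeersArr: number[], numColumns: number): number[] {
  if (sentPeersArr.length < numColumns + 1) {
    // one publish result per topic, so this should never happen
    throw new Error(`expected ${numColumns + 1} publish results, got ${sentPeersArr.length}`);
  }
  // drop the beacon_block entry, keep the per-column peer counts
  return sentPeersArr.slice(1, numColumns + 1);
}
```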

Member replied:

I think we should warn for less than 128 and error for less than 64

Member commented:

I'm not sure sentPeersArr is tracking what we think it is, so I wanted to bring it up just in case. There may be some peers we send multiple things to, and there is no guarantee that the peer distribution is accurately represented (128 peers does not necessarily mean all 128 were on different column subnets).

We may need to do something like

...dataColumnSidecars.map((dataColumnSidecar) => () => network.publishDataColumnSidecar(dataColumnSidecar).then(res => /* parse res for peer and track it relative to  dataColumnSidecar.index */))

so that we know which columns were sent to which peers and can be sure that each column was actually published. Also, don't we get an error if there are no peers subscribed to a subnet that we publish to? It could be that we do not have a peer for a given subnet, right?

@wemeetagain (Member) commented Aug 12, 2025

don't we get an error if there are no peers subscribed to a subnet that we publish to? It could be that we do not have a peer for a given subnet, right?

Yeah, js-libp2p-gossipsub is currently configured to throw if there are no connected peers listening on a topic.

We should not rely on optional behavior (reconstruction) here. Imo just KISS: error if we didn't publish to all.

Based on the current behavior, we can remove this conditional (which will never be executed anyway, since the promiseAllMaybeAsync call will reject/throw when something is published w/o peers on that topic).

@twoeths (Contributor Author) replied:

Based on the current behavior, we can remove this conditional (which will never be executed anyways since the promiseAllMaybeAsync call will reject/throw when something is published w/o peers on that topic.)

agree, removing it will avoid all the confusion. But this raises a concern: if there are 1-2 topics with 0 peers it'll throw, yet the overall publish could still be a success because supernodes will rebuild all columns and publish for us

I think we should allow publishing to 0 peers in this case in order to track the sentPeersPerSubnet metric
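A hedged sketch of what that tracking could look like (hypothetical helper, not the PR's actual code): if publishes to empty subnets return 0 sent peers instead of throwing, the publisher can flag under-published columns for logging and metrics.

```typescript
// Hypothetical: given per-column sent-peer counts, return the column indices
// that were published to 0 peers so they can be logged/metered.
function columnsWithZeroPeers(perColumnSentPeers: number[]): number[] {
  const zeroPeerColumns: number[] = [];
  perColumnSentPeers.forEach((sentPeers, columnIndex) => {
    if (sentPeers === 0) zeroPeerColumns.push(columnIndex);
  });
  return zeroPeerColumns;
}
```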

@@ -244,13 +244,15 @@ export function getCoreTopicsAtFork(
// After fulu also track data_column_sidecar_{index}
if (ForkSeq[fork] >= ForkSeq.fulu) {
// TODO: @matthewkeil check if this needs to be updated for custody groups
Member commented:

can be removed

for (let i = 0; i < dataColumnSidecars.length; i++) {
// + 1 because we publish to beacon_block first
const sentPeers = sentPeersArr[i + 1] as number;
metrics?.dataColumns.sentPeersPerSubnet.observe(sentPeers);
Member commented:

should we track this metric before the error?

@twoeths (Contributor Author) replied:

the error should never happen, as commented above

Member replied:

Does the send promise error if there is not a peer subscribed to a column topic?

@twoeths (Contributor Author) replied:

it depends on the allowPublishToZeroTopicPeers option
in the last commit I set it to true, so it won't throw an error and will return 0 sent peers in that case
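For reference, a minimal sketch of passing this option to js-libp2p-gossipsub (simplified; the actual Lodestar wiring differs):

```typescript
import { gossipsub } from "@chainsafe/libp2p-gossipsub";

// With allowPublishToZeroTopicPeers: true, publishing to a topic that has no
// connected subscribers resolves with 0 recipients instead of throwing.
const pubsubFactory = gossipsub({
  allowPublishToZeroTopicPeers: true,
});
```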

@wemeetagain wemeetagain merged commit 7e739c1 into unstable Aug 13, 2025
32 of 35 checks passed
@wemeetagain wemeetagain deleted the te/peerDAS_do_not_subscribe_to_all_subnets_3 branch August 13, 2025 11:16
wemeetagain added a commit that referenced this pull request Aug 13, 2025
**Motivation**

- followup to #8181 

**Description**

- If any columns were published to 0 peers, print a warning that the
block may be reorged

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
@wemeetagain (Member):

🎉 This PR is included in v1.34.0 🎉



Development

Successfully merging this pull request may close these issues.

Dynamically subscribe to data column subnets

4 participants