Conversation

@wolf31o2
Member

@wolf31o2 wolf31o2 commented Nov 18, 2025

Summary by CodeRabbit

  • Tests

    • Added comprehensive performance benchmarks for CBOR encoding/decoding and block serialization/deserialization across all ledger eras (Byron, Shelley, Allegra, Mary, Alonzo, Babbage, Conway).
    • Added microbenchmarks for address and transaction (de)serialization paths.
    • Added targeted benchmarks for decode/encode hot paths.
  • New Features

    • Added Mary-era transaction CBOR round-trip test and Mary transaction type validation.

@wolf31o2 wolf31o2 requested a review from a team as a code owner November 18, 2025 00:20
@coderabbitai
Contributor

coderabbitai bot commented Nov 18, 2025

📝 Walkthrough

Walkthrough

Adds numerous benchmark tests for CBOR encode/decode and ledger-era block/transaction serialization paths (Allegra, Alonzo, Babbage, Byron, Conway, Mary, Shelley) and address parsing; adds micro-benchmarks for generic CBOR encode/decode; and introduces a Mary transaction CBOR round-trip test and a Mary decode unit test.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

  • Rationale: Changes are largely repetitive benchmark additions and a small unit test following consistent patterns across many files.
  • Areas requiring attention:
    • Correctness of CBOR sample data decoding (hex → bytes) and that errors are handled/fatal where appropriate.
    • Consistent use of timing (b.ResetTimer/b.StopTimer) and keeping setup work out of timed sections (e.g., pre-decoding before the loop where intended); a sketch of this pattern follows this list.
    • Era-specific constructors/unmarshal functions used in ledger/tx_test.go and the new Mary round‑trip/test for byte-for-byte equivalence.
    • Any newly imported packages (e.g., encoding/hex) are used and correctly added to imports.
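
For illustration, here is a minimal sketch of the benchmark shape these points describe, assuming the cbor.Decode(data, &dest) signature quoted later in this conversation and the module path github.com/blinklabs-io/gouroboros/cbor; the sample payload is the same "83010203" array used in the PR's decode benchmark:

package cbor_test

import (
	"encoding/hex"
	"testing"

	"github.com/blinklabs-io/gouroboros/cbor"
)

func BenchmarkDecodeExample(b *testing.B) {
	// Setup happens once, with its error handled, before the timer starts.
	cborData, err := hex.DecodeString("83010203") // CBOR array [1, 2, 3]
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer() // exclude setup cost from the measurement
	for i := 0; i < b.N; i++ {
		var dest any
		if _, err := cbor.Decode(cborData, &dest); err != nil {
			b.Fatal(err)
		}
	}
}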

Possibly related PRs

  • blinklabs-io/gouroboros PR 1249 — overlapping additions of era-specific transaction benchmarks and transaction-type determination tests in ledger/tx_test.go.
  • blinklabs-io/gouroboros PR 1090 — related Alonzo CBOR tests and (de)serialization support that interact with Alonzo benchmarks added here.
  • blinklabs-io/gouroboros PR 1093 — related Mary-era CBOR round-trip behavior and Mary transaction body/MarshalCBOR changes enabling identical round-trips.

Pre-merge checks and finishing touches

✅ Passed checks (2 passed)
  • Description check: ✅ Passed. Check skipped - CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The pull request title 'test: create more benchmarks for serde' directly matches the changeset, which adds multiple benchmark functions across test files for serialization and deserialization operations.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
cbor/decode_test.go (1)

305-315: Benchmark is well-structured.

The benchmark properly uses b.ResetTimer() and follows Go benchmarking best practices. One minor consideration: the dest variable is reused across iterations, which could theoretically introduce noise if previous iterations mutate state. Creating a fresh variable per iteration would be slightly more accurate, though for this simple payload it's unlikely to matter.

Optional refinement:

 func BenchmarkDecode(b *testing.B) {
 	cborData, _ := hex.DecodeString("83010203")
-	var dest any
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
+		var dest any
 		_, err := cbor.Decode(cborData, &dest)
 		if err != nil {
 			b.Fatal(err)
 		}
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fd01888 and 35b6e45.

📒 Files selected for processing (11)
  • cbor/decode_test.go (1 hunks)
  • cbor/encode_test.go (1 hunks)
  • ledger/allegra/block_test.go (1 hunks)
  • ledger/alonzo/block_test.go (1 hunks)
  • ledger/babbage/block_test.go (1 hunks)
  • ledger/byron/block_test.go (1 hunks)
  • ledger/common/address_test.go (2 hunks)
  • ledger/conway/block_test.go (1 hunks)
  • ledger/mary/block_test.go (1 hunks)
  • ledger/shelley/block_test.go (1 hunks)
  • ledger/tx_test.go (4 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: wolf31o2
Repo: blinklabs-io/gouroboros PR: 1093
File: ledger/mary/pparams.go:143-150
Timestamp: 2025-11-04T15:54:22.683Z
Learning: In the blinklabs-io/gouroboros repository, the design goal for CBOR round-trip tests is to achieve byte-identical encoding WITHOUT using stored CBOR (cbor.DecodeStoreCbor). Instead, the approach uses proper field types (pointers for optional fields) and relies on the cbor package's deterministic encoding (SortCoreDeterministic) to ensure reproducible output. The stored CBOR pattern should not be suggested as a solution for round-trip fidelity in this codebase.
📚 Learning: 2025-11-04T15:54:22.683Z
Learnt from: wolf31o2
Repo: blinklabs-io/gouroboros PR: 1093
File: ledger/mary/pparams.go:143-150
Timestamp: 2025-11-04T15:54:22.683Z
Learning: In the blinklabs-io/gouroboros repository, the design goal for CBOR round-trip tests is to achieve byte-identical encoding WITHOUT using stored CBOR (cbor.DecodeStoreCbor). Instead, the approach uses proper field types (pointers for optional fields) and relies on the cbor package's deterministic encoding (SortCoreDeterministic) to ensure reproducible output. The stored CBOR pattern should not be suggested as a solution for round-trip fidelity in this codebase.

Applied to files:

  • cbor/encode_test.go
  • ledger/byron/block_test.go
  • ledger/mary/block_test.go
  • ledger/babbage/block_test.go
  • ledger/shelley/block_test.go
  • ledger/alonzo/block_test.go
  • ledger/conway/block_test.go
  • ledger/allegra/block_test.go
  • ledger/tx_test.go
  • cbor/decode_test.go
🧬 Code graph analysis (8)
cbor/encode_test.go (1)
cbor/encode.go (1)
  • Encode (27-40)
ledger/byron/block_test.go (1)
ledger/byron/byron.go (2)
  • ByronMainBlock (734-740)
  • ByronMainBlock (753-755)
ledger/mary/block_test.go (1)
ledger/mary/mary.go (2)
  • MaryBlock (49-56)
  • MaryBlock (80-82)
ledger/common/address_test.go (2)
ledger/common/address.go (2)
  • NewAddressFromBytes (99-105)
  • NewAddress (73-96)
ledger/compat.go (1)
  • NewAddress (51-51)
ledger/babbage/block_test.go (1)
ledger/babbage/babbage.go (2)
  • BabbageBlock (52-60)
  • BabbageBlock (73-75)
ledger/alonzo/block_test.go (1)
ledger/alonzo/alonzo.go (2)
  • AlonzoBlock (52-60)
  • AlonzoBlock (113-115)
ledger/tx_test.go (2)
ledger/shelley/shelley.go (1)
  • NewShelleyTransactionFromCbor (750-756)
ledger/alonzo/alonzo.go (1)
  • NewAlonzoTransactionFromCbor (856-862)
cbor/decode_test.go (1)
cbor/decode.go (1)
  • Decode (29-44)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (go)
🔇 Additional comments (14)
cbor/decode_test.go (1)

275-303: LGTM! Good test coverage for Mary transaction decoding.

The test appropriately validates the CBOR structure by checking it decodes to an array with the expected length.
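
A structural check of that kind can be sketched as follows; the maryTxHex constant name and the expected element count are placeholders rather than the actual values in cbor/decode_test.go, and the imports (encoding/hex, testing, the cbor package) follow the surrounding file:

func TestMaryTransactionDecodeShapeExample(t *testing.T) {
	// maryTxHex is a hypothetical constant holding known-good Mary transaction CBOR.
	txCbor, err := hex.DecodeString(maryTxHex)
	if err != nil {
		t.Fatal(err)
	}
	var dest any
	if _, err := cbor.Decode(txCbor, &dest); err != nil {
		t.Fatalf("decode failed: %v", err)
	}
	arr, ok := dest.([]any)
	if !ok {
		t.Fatalf("expected top-level CBOR array, got %T", dest)
	}
	if len(arr) != 4 { // assumed element count for the sketch
		t.Fatalf("unexpected array length: %d", len(arr))
	}
}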

cbor/encode_test.go (1)

94-103: LGTM! Clean encoding benchmark.

The benchmark follows Go best practices: object is created once outside the timer, and encoding is measured repeatedly. Error handling is appropriate.
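
The same shape applies to the generic encoder; a sketch, assuming a cbor.Encode(value) ([]byte, error) helper and the imports used by the surrounding file, with an illustrative value rather than the one in cbor/encode_test.go:

func BenchmarkEncodeExample(b *testing.B) {
	// Build the value once, outside the timed section.
	data := []any{uint64(1), uint64(2), uint64(3)}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := cbor.Encode(data); err != nil {
			b.Fatal(err)
		}
	}
}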

ledger/alonzo/block_test.go (1)

157-180: LGTM! Well-structured benchmarks for Alonzo block CBOR handling.

Both benchmarks follow the correct pattern:

  • Deserialization benchmark creates a fresh block object each iteration to measure unmarshaling performance
  • Serialization benchmark unmarshals once, then measures repeated Cbor() calls

This approach accurately isolates what each benchmark is measuring.

ledger/allegra/block_test.go (1)

175-198: LGTM! Consistent benchmark implementation for Allegra blocks.

The benchmarks follow the same well-structured pattern as the other era benchmarks, properly isolating deserialization and serialization performance measurements.

ledger/mary/block_test.go (1)

152-175: LGTM! Mary block benchmarks follow the established pattern.

The benchmarks are consistent with the other era implementations and properly measure deserialization and serialization performance independently.

ledger/common/address_test.go (2)

18-18: LGTM! Added import for benchmark functionality.


388-408: LGTM! Well-structured address parsing benchmarks.

Both benchmarks follow Go best practices:

  • BenchmarkAddressFromBytes: hex decoding happens once before the timer, then repeatedly benchmarks address creation from bytes
  • BenchmarkAddressFromString: directly benchmarks the string parsing path

These complement the block benchmarks nicely and provide performance insights into address operations.
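
A sketch of the two paths being described, assuming the signatures implied by the code graph above (common.NewAddressFromBytes([]byte) and common.NewAddress(string), each returning an Address and an error); testAddressHex and testAddressBech32 are hypothetical placeholders:

func BenchmarkAddressFromBytesExample(b *testing.B) {
	// Hex decoding is done once in setup, outside the timed loop.
	addrBytes, err := hex.DecodeString(testAddressHex)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := common.NewAddressFromBytes(addrBytes); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkAddressFromStringExample(b *testing.B) {
	// Measures the string parsing path directly.
	for i := 0; i < b.N; i++ {
		if _, err := common.NewAddress(testAddressBech32); err != nil {
			b.Fatal(err)
		}
	}
}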

ledger/shelley/block_test.go (1)

143-166: LGTM! Shelley block benchmarks maintain consistency.

The benchmarks follow the same proven pattern as the other era implementations, enabling consistent performance comparisons across different Cardano eras.

ledger/babbage/block_test.go (1)

181-204: LGTM! Babbage block benchmarks complete the era coverage.

The benchmarks maintain the consistent pattern established across all era implementations. This comprehensive benchmark suite will provide valuable performance insights for CBOR serialization/deserialization across all Cardano ledger eras.

ledger/byron/block_test.go (1)

173-196: LGTM! Benchmark implementations follow the standard pattern.

The benchmarks properly separate deserialization (repeated unmarshal) from serialization (single unmarshal, repeated Cbor() call). Since ByronMainBlock embeds cbor.DecodeStoreCbor, the serialization benchmark measures retrieval of stored CBOR bytes rather than re-encoding from fields, which is the expected behavior for Byron blocks.

ledger/conway/block_test.go (1)

193-216: LGTM! Benchmark implementations are correct.

The benchmarks properly measure deserialization (repeated UnmarshalCBOR) and serialization (single unmarshal followed by repeated Cbor() calls) performance. The pattern is consistent with other era benchmarks in this PR.

ledger/tx_test.go (3)

25-34: LGTM! Transaction test data is well-organized.

The constants are clearly defined with helpful comments indicating the source and purpose of specific transactions.


52-56: LGTM! Mary transaction test case correctly added.

The test properly verifies that Mary transactions are identified as Shelley-compatible (type 1).


183-296: LGTM! Comprehensive benchmark coverage across all eras.

The benchmarks properly measure both deserialization (New<Era>TransactionFromCbor) and serialization (Cbor()) performance for Shelley, Allegra, Mary, Alonzo, and Babbage eras. The consistent pattern makes the suite easy to maintain.
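
As an illustration of the per-era pair, a Shelley version might look like the sketch below; NewShelleyTransactionFromCbor and Cbor() come from the review context above, while the shelleyTxHex constant and the exact return types are assumptions:

func BenchmarkShelleyTxDeserializationExample(b *testing.B) {
	txCbor, err := hex.DecodeString(shelleyTxHex) // hypothetical known-good transaction hex
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := shelley.NewShelleyTransactionFromCbor(txCbor); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkShelleyTxSerializationExample(b *testing.B) {
	txCbor, err := hex.DecodeString(shelleyTxHex)
	if err != nil {
		b.Fatal(err)
	}
	tx, err := shelley.NewShelleyTransactionFromCbor(txCbor)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = tx.Cbor() // only the repeated encoding/retrieval is timed
	}
}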

Signed-off-by: Chris Gianelloni <wolf31o2@blinklabs.io>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 35b6e45 and eced64a.

📒 Files selected for processing (11)
  • cbor/decode_test.go (1 hunks)
  • cbor/encode_test.go (1 hunks)
  • ledger/allegra/block_test.go (1 hunks)
  • ledger/alonzo/block_test.go (1 hunks)
  • ledger/babbage/block_test.go (1 hunks)
  • ledger/byron/block_test.go (1 hunks)
  • ledger/common/address_test.go (2 hunks)
  • ledger/conway/block_test.go (1 hunks)
  • ledger/mary/block_test.go (1 hunks)
  • ledger/shelley/block_test.go (1 hunks)
  • ledger/tx_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • ledger/alonzo/block_test.go
  • ledger/allegra/block_test.go
  • ledger/conway/block_test.go
  • ledger/common/address_test.go
  • ledger/babbage/block_test.go
  • cbor/decode_test.go
  • cbor/encode_test.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-04T15:54:22.683Z
Learnt from: wolf31o2
Repo: blinklabs-io/gouroboros PR: 1093
File: ledger/mary/pparams.go:143-150
Timestamp: 2025-11-04T15:54:22.683Z
Learning: In the blinklabs-io/gouroboros repository, the design goal for CBOR round-trip tests is to achieve byte-identical encoding WITHOUT using stored CBOR (cbor.DecodeStoreCbor). Instead, the approach uses proper field types (pointers for optional fields) and relies on the cbor package's deterministic encoding (SortCoreDeterministic) to ensure reproducible output. The stored CBOR pattern should not be suggested as a solution for round-trip fidelity in this codebase.

Applied to files:

  • ledger/shelley/block_test.go
  • ledger/byron/block_test.go
  • ledger/mary/block_test.go
  • ledger/tx_test.go
🧬 Code graph analysis (3)
ledger/byron/block_test.go (1)
ledger/byron/byron.go (2)
  • ByronMainBlock (734-740)
  • ByronMainBlock (753-755)
ledger/mary/block_test.go (1)
ledger/mary/mary.go (2)
  • MaryBlock (49-56)
  • MaryBlock (80-82)
ledger/tx_test.go (2)
ledger/shelley/shelley.go (1)
  • NewShelleyTransactionFromCbor (750-756)
ledger/alonzo/alonzo.go (1)
  • NewAlonzoTransactionFromCbor (856-862)
🔇 Additional comments (7)
ledger/mary/block_test.go (2)

152-162: LGTM!

The deserialization benchmark follows the established pattern used across other era block tests, with proper timer reset and error handling.


164-175: LGTM!

The serialization benchmark correctly sets up the block once before the timer and repeatedly measures only the serialization operation.

ledger/tx_test.go (4)

18-18: LGTM!

The bytes import and transaction hex constants are properly added to support the new round-trip test and era-specific benchmarks.

Also applies to: 26-33


53-57: LGTM!

The Mary transaction test case correctly identifies the transaction type as 1 (Shelley-compatible).


80-111: LGTM! Past review comment has been addressed.

The round-trip test now correctly verifies byte-identical encoding using bytes.Equal at line 99, which aligns with the design goal of achieving byte-for-byte fidelity without stored CBOR.
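
The round-trip shape being described is roughly the following sketch; NewMaryTransactionFromCbor, the maryTxHex constant, and re-encoding via cbor.Encode are assumptions here, not a quote of ledger/tx_test.go:

func TestMaryTransactionRoundTripExample(t *testing.T) {
	txCbor, err := hex.DecodeString(maryTxHex) // hypothetical known-good Mary transaction hex
	if err != nil {
		t.Fatal(err)
	}
	tx, err := mary.NewMaryTransactionFromCbor(txCbor)
	if err != nil {
		t.Fatal(err)
	}
	// Re-encode from the decoded fields and require byte-identical output.
	reencoded, err := cbor.Encode(tx)
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(txCbor, reencoded) {
		t.Fatal("re-encoded CBOR does not match the original bytes")
	}
}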


186-299: LGTM!

All transaction benchmarks (Shelley, Allegra, Mary, Alonzo, Babbage) follow the established pattern with proper timer resets and error handling. The deserialization benchmarks measure repeated unmarshaling, while serialization benchmarks measure repeated encoding after a single setup.

ledger/byron/block_test.go (1)

185-196: The serialization benchmark does not test actual encoding.

Since ByronMainBlock embeds cbor.DecodeStoreCbor, the Cbor() method (inherited from DecodeStoreCbor at cbor/cbor.go:69-71) simply returns the stored CBOR bytes set during UnmarshalCBOR(). The benchmark measures memory access performance rather than serialization performance, making the name "BenchmarkByronBlockSerialization" misleading.

To benchmark actual encoding, the test should call a method that re-encodes the block (e.g., marshaling the individual fields), or remove the stored CBOR before measuring.
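
One possible shape for such a benchmark, sketched under the assumption that the decoded block exposes Header and Body fields that can be re-encoded through the generic cbor.Encode helper (whether that reproduces the wire format exactly is not verified here):

func BenchmarkByronBlockReencodeExample(b *testing.B) {
	blockCbor, err := hex.DecodeString(byronBlockHex)
	if err != nil {
		b.Fatal(err)
	}
	var block byron.ByronMainBlock
	if err := block.UnmarshalCBOR(blockCbor); err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Assumed field names: re-encode the decoded structures instead of
		// returning the CBOR bytes stored by DecodeStoreCbor.
		if _, err := cbor.Encode(block.Header); err != nil {
			b.Fatal(err)
		}
		if _, err := cbor.Encode(block.Body); err != nil {
			b.Fatal(err)
		}
	}
}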

Comment on lines +173 to +183
func BenchmarkByronBlockDeserialization(b *testing.B) {
	blockCbor, _ := hex.DecodeString(byronBlockHex)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var block byron.ByronMainBlock
		err := block.UnmarshalCBOR(blockCbor)
		if err != nil {
			b.Fatal(err)
		}
	}
}

⚠️ Potential issue | 🟠 Major

Handle the error from hex.DecodeString during setup.

The error from hex.DecodeString is ignored on line 174. If decoding fails, blockCbor will be nil or empty, causing the benchmark to run with invalid data and produce misleading results or panics.

Apply this diff to properly handle the setup error:

 func BenchmarkByronBlockDeserialization(b *testing.B) {
-	blockCbor, _ := hex.DecodeString(byronBlockHex)
+	blockCbor, err := hex.DecodeString(byronBlockHex)
+	if err != nil {
+		b.Fatal(err)
+	}
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
 		var block byron.ByronMainBlock
 		err := block.UnmarshalCBOR(blockCbor)
 		if err != nil {
 			b.Fatal(err)
 		}
 	}
 }
🤖 Prompt for AI Agents
In ledger/byron/block_test.go around lines 173 to 183, the error returned by
hex.DecodeString is currently ignored; change the setup to capture the error,
check it immediately after calling hex.DecodeString, and call b.Fatalf (or
b.Fatal) with the error if non-nil so the benchmark aborts on invalid test data;
ensure this check occurs before b.ResetTimer() so the benchmark only runs when
blockCbor is valid.

Comment on lines +185 to +196
func BenchmarkByronBlockSerialization(b *testing.B) {
	blockCbor, _ := hex.DecodeString(byronBlockHex)
	var block byron.ByronMainBlock
	err := block.UnmarshalCBOR(blockCbor)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = block.Cbor()
	}
}

⚠️ Potential issue | 🟠 Major

Handle the error from hex.DecodeString during setup.

The error from hex.DecodeString is ignored on line 186. If decoding fails, the subsequent UnmarshalCBOR call will fail or operate on invalid data, causing the benchmark to not run correctly.

Apply this diff to properly handle the setup error:

 func BenchmarkByronBlockSerialization(b *testing.B) {
-	blockCbor, _ := hex.DecodeString(byronBlockHex)
+	blockCbor, err := hex.DecodeString(byronBlockHex)
+	if err != nil {
+		b.Fatal(err)
+	}
 	var block byron.ByronMainBlock
 	err := block.UnmarshalCBOR(blockCbor)
 	if err != nil {
 		b.Fatal(err)
 	}
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
 		_ = block.Cbor()
 	}
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In ledger/byron/block_test.go around lines 185 to 196, the error returned by
hex.DecodeString is currently ignored; capture its returned error, check it
immediately after the call, and call b.Fatal or b.Fatalf with the error (and
optional context) to stop the benchmark setup on failure before calling
UnmarshalCBOR.

Comment on lines +143 to +153
func BenchmarkShelleyBlockDeserialization(b *testing.B) {
	blockCbor, _ := hex.DecodeString(shelleyBlockHex)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var block shelley.ShelleyBlock
		err := block.UnmarshalCBOR(blockCbor)
		if err != nil {
			b.Fatal(err)
		}
	}
}

⚠️ Potential issue | 🟠 Major

Handle the hex decode error in the benchmark.

Line 144 ignores the error from hex.DecodeString. If the hex string is malformed, blockCbor would be empty and the benchmark would measure invalid operations, producing meaningless results. This is inconsistent with the test functions in this file (lines 37-43, 95-98) that properly handle decode errors.

Apply this diff to properly handle the error:

 func BenchmarkShelleyBlockDeserialization(b *testing.B) {
-	blockCbor, _ := hex.DecodeString(shelleyBlockHex)
+	blockCbor, err := hex.DecodeString(shelleyBlockHex)
+	if err != nil {
+		b.Fatal(err)
+	}
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
 		var block shelley.ShelleyBlock
 		err := block.UnmarshalCBOR(blockCbor)
 		if err != nil {
 			b.Fatal(err)
 		}
 	}
 }
🤖 Prompt for AI Agents
In ledger/shelley/block_test.go around lines 143 to 153, the call to
hex.DecodeString(shelleyBlockHex) ignores its error which can leave blockCbor
empty and make the benchmark invalid; decode the hex into blockCbor and check
the returned error immediately, calling b.Fatalf or b.Fatal with the error if
non-nil (and do this before b.ResetTimer), then proceed with the loop so the
benchmark only runs when decoding succeeded.

Comment on lines +143 to +166
func BenchmarkShelleyBlockDeserialization(b *testing.B) {
	blockCbor, _ := hex.DecodeString(shelleyBlockHex)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var block shelley.ShelleyBlock
		err := block.UnmarshalCBOR(blockCbor)
		if err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkShelleyBlockSerialization(b *testing.B) {
	blockCbor, _ := hex.DecodeString(shelleyBlockHex)
	var block shelley.ShelleyBlock
	err := block.UnmarshalCBOR(blockCbor)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = block.Cbor()
	}
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Verify if other era benchmarks have the same error-handling issue.

Since the AI summary indicates these benchmarks follow the established pattern from other era benchmarks (Allegra, Byron, Mary, Alonzo, Babbage), it's worth checking if they also ignore hex.DecodeString errors. If so, this should be fixed consistently across all benchmarks.


🏁 Script executed:

#!/bin/bash
# Description: Check if other era benchmark files ignore hex.DecodeString errors

# Search for benchmark functions that ignore hex.DecodeString errors
rg -A 2 'func Benchmark.*(?:Serialization|Deserialization)' --type go -g 'ledger/*/block_test.go' | rg 'blockCbor.*:=.*hex\.DecodeString.*\n.*_'

Length of output: 191


🏁 Script executed:

#!/bin/bash
# Description: Check if other era benchmarks ignore hex.DecodeString errors

# Find all block_test.go files in ledger subdirectories and check for hex.DecodeString error handling
echo "=== Checking all era benchmarks for hex.DecodeString error handling ==="
for testfile in $(fd 'block_test\.go$' ledger/); do
  echo "--- File: $testfile ---"
  # Look for hex.DecodeString calls and show surrounding context
  rg -B 1 -A 1 'hex\.DecodeString' "$testfile" | head -20
done

Length of output: 4022


Fix hex.DecodeString error handling consistently across all era benchmark functions.

Verification confirms this error-handling issue exists in all benchmark functions across every era (Allegra, Alonzo, Babbage, Byron, Conway, Mary, and Shelley). All benchmark deserialization and serialization functions ignore the error returned by hex.DecodeString, while non-benchmark test functions in the same files properly handle these errors. Update all affected benchmark functions in:

  • ledger/allegra/block_test.go
  • ledger/alonzo/block_test.go
  • ledger/babbage/block_test.go
  • ledger/byron/block_test.go
  • ledger/conway/block_test.go
  • ledger/mary/block_test.go
  • ledger/shelley/block_test.go
🤖 Prompt for AI Agents
In ledger/shelley/block_test.go around lines 143-166 (and equivalently in
ledger/allegra/block_test.go, ledger/alonzo/block_test.go,
ledger/babbage/block_test.go, ledger/byron/block_test.go,
ledger/conway/block_test.go, ledger/mary/block_test.go), the call to
hex.DecodeString ignores the returned error in benchmark functions; change each
benchmark to check the error immediately after hex.DecodeString and call
b.Fatal(err) (or b.Fatalf with context) if non-nil so the benchmark fails fast
on invalid hex, and apply the same fix for both deserialization and
serialization benchmark functions in every listed era file.

Comment on lines +155 to +166
func BenchmarkShelleyBlockSerialization(b *testing.B) {
	blockCbor, _ := hex.DecodeString(shelleyBlockHex)
	var block shelley.ShelleyBlock
	err := block.UnmarshalCBOR(blockCbor)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = block.Cbor()
	}
}

⚠️ Potential issue | 🟠 Major

Handle the hex decode error in the benchmark.

Line 156 ignores the error from hex.DecodeString. If the hex string is malformed, the subsequent unmarshal would fail and the benchmark would exit early, but it's better to catch decode errors explicitly before attempting to unmarshal.

Apply this diff to properly handle the error:

 func BenchmarkShelleyBlockSerialization(b *testing.B) {
-	blockCbor, _ := hex.DecodeString(shelleyBlockHex)
+	blockCbor, err := hex.DecodeString(shelleyBlockHex)
+	if err != nil {
+		b.Fatal(err)
+	}
 	var block shelley.ShelleyBlock
-	err := block.UnmarshalCBOR(blockCbor)
+	err = block.UnmarshalCBOR(blockCbor)
 	if err != nil {
 		b.Fatal(err)
 	}
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
 		_ = block.Cbor()
 	}
 }
🤖 Prompt for AI Agents
In ledger/shelley/block_test.go around lines 155 to 166, the call to
hex.DecodeString ignores its error return which can mask malformed hex input;
change the code to capture the decode error (e.g. blockCbor, err :=
hex.DecodeString(shelleyBlockHex)) and call b.Fatal(err) (or b.Fatalf) if err !=
nil before attempting UnmarshalCBOR so the benchmark fails fast with a clear
decode error.

@wolf31o2
Member Author

These all came from the chain. If they don't decode, we have a problem. They're all known good.

@wolf31o2 wolf31o2 merged commit 7c3f5ff into main Nov 18, 2025
10 checks passed
@wolf31o2 wolf31o2 deleted the test/bench-more branch November 18, 2025 03:40