[WIP] add more broadcast logs to scroll-v5.7.11 #1075

Open · wants to merge 3 commits into develop

Changes from all commits
34 changes: 26 additions & 8 deletions core/tx_pool.go
@@ -783,13 +783,13 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e
// If the transaction is already known, discard it
hash := tx.Hash()
if pool.all.Get(hash) != nil {
log.Trace("Discarding already known transaction", "hash", hash)
log.Error("Discarding already known transaction", "hash", hash)
knownTxMeter.Mark(1)
Comment on lines +786 to 787

⚠️ Potential issue

Consider using a less severe log level for discarding known transactions.

The log level for discarding already known transactions has been changed from log.Trace to log.Error. While it's important to track discarded transactions, using log.Error for what could be a common occurrence might lead to several issues:

  1. Log pollution: This could flood the error logs, making it harder to identify actual critical errors.
  2. Performance impact: Frequent error logging might have a slight performance overhead.
  3. Misleading severity: Discarding a known transaction is not necessarily an error condition.

Consider using log.Debug or log.Info instead, which would provide visibility without the drawbacks of using the error level.

-		log.Error("Discarding already known transaction", "hash", hash)
+		log.Debug("Discarding already known transaction", "hash", hash)
return false, ErrAlreadyKnown
}

if pool.IsMiner() && rawdb.IsSkippedTransaction(pool.chain.Database(), hash) {
log.Trace("Discarding already known skipped transaction", "hash", hash)
log.Error("Discarding already known skipped transaction", "hash", hash)
knownSkippedTxMeter.Mark(1)
return false, ErrAlreadyKnown
Comment on lines +792 to 794

⚠️ Potential issue

Adjust log level for known skipped transactions and consider adding more context.

The addition of a log for discarding known skipped transactions is a good improvement for visibility. The miner-specific check (pool.IsMiner()) is a nice touch, ensuring this log only appears for mining nodes. However, there are a couple of points to consider:

  1. Log Level: Similar to the previous comment, using log.Error might be too severe for this case. Consider using log.Debug or log.Info to avoid potential log pollution.

  2. Context: It might be helpful to add more context about why these transactions are skipped. This could aid in debugging and understanding the system's behavior.

-		log.Error("Discarding already known skipped transaction", "hash", hash)
+		log.Debug("Discarding already known skipped transaction", "hash", hash, "reason", "previously skipped by miner")

Also, consider adding a comment explaining the purpose of this check and logging for future maintainers; a sketch follows below.

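As a minimal sketch of what that comment plus extra context could look like (the skip reasons stated here are assumptions about why Scroll miners skip transactions, not taken from this PR):

// Miners must not re-accept transactions they previously skipped (e.g.
// because they overflowed the prover's circuit capacity); re-adding them
// would only get them skipped again, so discard them early and count it.
if pool.IsMiner() && rawdb.IsSkippedTransaction(pool.chain.Database(), hash) {
    log.Debug("Discarding already known skipped transaction", "hash", hash, "reason", "previously skipped by miner")
    knownSkippedTxMeter.Mark(1)
    return false, ErrAlreadyKnown
}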

}
@@ -800,15 +800,15 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e

// If the transaction fails basic validation, discard it
if err := pool.validateTx(tx, isLocal); err != nil {
log.Trace("Discarding invalid transaction", "hash", hash, "err", err)
log.Error("Discarding invalid transaction", "hash", hash, "err", err)
invalidTxMeter.Mark(1)
return false, err
Comment on lines +803 to 805

🛠️ Refactor suggestion

Reconsider log level for invalid transactions and add rate limiting.

Changing the log level for invalid transactions from log.Trace to log.Error significantly increases visibility, which can be beneficial for detecting issues. However, this approach might have some drawbacks:

  1. Log Pollution: If there are many invalid transactions, this could flood the error logs.
  2. Performance: Frequent error logging might have a slight performance impact.

While invalid transactions are more concerning than known transactions, not all of them may warrant an error log. Consider the following improvements:

  1. Use log.Warn instead of log.Error for most cases.
  2. Implement a rate limiter for this log to prevent flooding.
  3. Consider categorizing errors and only use log.Error for severe cases.
-		log.Error("Discarding invalid transaction", "hash", hash, "err", err)
+		if isSevereError(err) {
+			log.Error("Discarding invalid transaction", "hash", hash, "err", err)
+		} else {
+			log.Warn("Discarding invalid transaction", "hash", hash, "err", err)
+		}
+		logInvalidTxCounter.Inc(1)

Also, consider adding a rate limiter to this logging call and a function isSevereError to categorize error severity; a sketch of both follows below.

Committable suggestion was skipped due to low confidence.
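A minimal sketch of the two pieces that suggestion assumes, an isSevereError classifier and a rate limiter. The error categories, the ten-per-second budget, and the helper names are illustrative, and it assumes the errors and golang.org/x/time/rate packages are imported:

// isSevereError reports whether a validation failure points at a real fault
// rather than routine pool churn; this classification is illustrative.
func isSevereError(err error) bool {
    return errors.Is(err, ErrInvalidSender) || errors.Is(err, ErrOversizedData)
}

// invalidTxLogLimiter caps invalid-tx log lines at roughly ten per second;
// invalidTxMeter still counts every discard, so no data is lost.
var invalidTxLogLimiter = rate.NewLimiter(rate.Limit(10), 10)

func logInvalidTx(hash common.Hash, err error) {
    if !invalidTxLogLimiter.Allow() {
        return
    }
    if isSevereError(err) {
        log.Error("Discarding invalid transaction", "hash", hash, "err", err)
    } else {
        log.Warn("Discarding invalid transaction", "hash", hash, "err", err)
    }
}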

}
// If the transaction pool is full, discard underpriced transactions
if uint64(pool.all.Slots()+numSlots(tx)) > pool.config.GlobalSlots+pool.config.GlobalQueue {
// If the new transaction is underpriced, don't accept it
if !isLocal && pool.priced.Underpriced(tx) {
log.Trace("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
log.Error("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
underpricedTxMeter.Mark(1)
return false, ErrUnderpriced
Comment on lines +811 to 813

🛠️ Refactor suggestion

Adjust log level for underpriced transactions and consider aggregating logs.

The changes to this log statement have both positive and potentially problematic aspects:

Positives:

  1. Including gasTipCap and gasFeeCap provides valuable context for understanding why transactions are considered underpriced.

Areas for improvement:

  1. Log Level: Using log.Error for underpriced transactions is likely too severe. This is a normal part of transaction pool management, especially during periods of gas price volatility.
  2. Potential for Log Flooding: In busy networks or during gas price spikes, this could generate an excessive number of log entries.

Suggestions:

  1. Change the log level to log.Debug or log.Info.
  2. Consider implementing an aggregated logging approach for underpriced transactions.
-			log.Error("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
+			log.Debug("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
+			if metrics.Enabled {
+				underpricedTxCounter.Inc(1)
+			}

Also, consider adding a periodic log that summarizes the number of underpriced transactions discarded in the last interval, rather than logging each one individually; one possible shape is sketched below.

Committable suggestion was skipped due to low confidence.
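One possible shape for that aggregation, sketched under the assumption that a package-level counter and the pool's existing event loop are available; the interval and names are illustrative:

// Counted at the discard site instead of logging every hash at error level.
var underpricedSince uint64

atomic.AddUint64(&underpricedSince, 1) // at the discard site

// Flushed periodically, e.g. from the pool's main loop (sync/atomic, time).
ticker := time.NewTicker(time.Minute)
defer ticker.Stop()
for range ticker.C {
    if n := atomic.SwapUint64(&underpricedSince, 0); n > 0 {
        log.Info("Discarded underpriced transactions", "count", n, "interval", time.Minute)
    }
}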

}
@@ -817,6 +817,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e
// do too many replacements between reorg-runs, so we cap the number of
// replacements to 25% of the slots
if pool.changesSinceReorg > int(pool.config.GlobalSlots/4) {
log.Error("Discarding transaction due to too many changes since reorg", "hash", hash)
throttleTxMeter.Mark(1)
return false, ErrTxPoolOverflow
Comment on lines +820 to 822

🛠️ Refactor suggestion

Adjust log level and add context for transactions discarded due to reorg frequency.

The change from log.Trace to log.Error for transactions discarded due to frequent reorgs is an improvement in visibility, but there are some considerations:

  1. Log Level: While this situation is more significant than regular transaction discards, log.Error might still be too severe. Consider using log.Warn instead.

  2. Context: The current message doesn't provide enough information about why this is happening. Adding more context would be beneficial for debugging and monitoring.

  3. Threshold: It would be helpful to log the current threshold and the actual count of changes since the last reorg.

-			log.Error("Discarding transaction due to too many changes since reorg", "hash", hash)
+			log.Warn("Discarding transaction due to high reorg frequency", "hash", hash, "changesSinceReorg", pool.changesSinceReorg, "threshold", int(pool.config.GlobalSlots/4))

Consider also adding a metric to track how often this situation occurs, as it might indicate network instability or other issues that require attention.


}
@@ -828,52 +829,67 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e

// Special case, we still can't make the room for the new remote one.
if !isLocal && !success {
log.Trace("Discarding overflown transaction", "hash", hash)
log.Error("Discarding overflown transaction", "hash", hash)
overflowedTxMeter.Mark(1)
return false, ErrTxPoolOverflow
Comment on lines +832 to 834

🛠️ Refactor suggestion

Refine logging for overflown transactions with appropriate level and additional context.

The change from log.Trace to log.Error for overflown transactions increases visibility, but there are opportunities for improvement:

  1. Log Level: While a full transaction pool is significant, it's not necessarily an error condition. Consider using log.Warn instead.

  2. Context: The current message doesn't provide information about the pool's state. Adding details about current capacity and limits would be beneficial for monitoring and debugging.

  3. Frequency: If this condition occurs often, it might lead to log pollution. Consider implementing a rate limiter or aggregating these logs.

-			log.Error("Discarding overflown transaction", "hash", hash)
+			log.Warn("Discarding transaction due to full pool", "hash", hash, "poolSize", pool.all.Count(), "capacity", pool.config.GlobalSlots+pool.config.GlobalQueue)
+			if metrics.Enabled {
+				overflownTxCounter.Inc(1)
+			}

Also, consider adding a periodic summary log that provides an overview of the pool's state and the number of overflown transactions in the last interval. This would provide valuable insights without the risk of log flooding.

Committable suggestion was skipped due to low confidence.
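Note that the overflownTxCounter referenced in the diff above is not defined anywhere in this PR; if that route is taken, it would need to be registered with go-ethereum's metrics package, for example (the metric name is an assumption):

// Registered once at package scope, alongside the pool's other meters.
var overflownTxCounter = metrics.NewRegisteredCounter("txpool/overflowed", nil)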

}
// Bump the counter of rejections-since-reorg
pool.changesSinceReorg += len(drop)
// Kick out the underpriced remote transactions.
for _, tx := range drop {
log.Trace("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
log.Error("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
underpricedTxMeter.Mark(1)
pool.removeTx(tx.Hash(), false)
}
Comment on lines +840 to 843

🛠️ Refactor suggestion

Harmonize logging for underpriced transactions and consider aggregation.

This log statement is very similar to the one we reviewed earlier for underpriced transactions. For consistency and improved logging practices, consider the following:

  1. Log Level: As with the previous underpriced transaction log, log.Error is likely too severe for this normal operation. Use log.Debug or log.Info instead.

  2. Consistency: Ensure that the logging for all underpriced transactions (both fresh and existing) uses the same level and format.

  3. Aggregation: Given that this could occur frequently during gas price fluctuations, consider aggregating these logs to prevent flooding.

  4. Context: The additional gasTipCap and gasFeeCap information is valuable. Consider adding the minimum required values for comparison.

-			log.Error("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
+			log.Debug("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap(), "minTip", pool.gasPrice, "minFeeCap", pool.priced.urgent.baseFee)
+			if metrics.Enabled {
+				underpricedTxCounter.Inc(1)
+			}

Additionally, implement a periodic summary log that provides an overview of underpriced transactions discarded in the last interval, including both fresh and existing ones. This would offer valuable insights without risking log saturation.

Committable suggestion was skipped due to low confidence.

}
// Try to replace an existing transaction in the pending pool
from, _ := types.Sender(pool.signer, tx) // already validated

log.Info("already validated", "hash", hash, "from", from, "to", tx.To())

Comment on lines +847 to +849

🛠️ Refactor suggestion

Reconsider the verbosity and security implications of new logging statements.

The addition of numerous log.Info statements throughout the code provides increased visibility into the transaction processing flow. While this can be beneficial for debugging and monitoring, there are several concerns to address:

  1. Verbosity: The high frequency of these logs, especially for common operations, could lead to significant log pollution and potential performance impacts.

  2. Log Levels: Many of these new logs might be more appropriate at the log.Debug level, reserving log.Info for more significant events.

  3. Security: Some logs contain detailed transaction information. Ensure that sensitive data is not being over-exposed, especially in production environments.

  4. Performance: Frequent logging can have performance implications, particularly in high-throughput scenarios.

  5. Consistency: Ensure that the logging style and information provided is consistent across similar events.

Suggestions:

  1. Reduce Verbosity: Consider moving most of these new logs to log.Debug level.
  2. Sensitive Information: Review each log to ensure it doesn't expose sensitive data unnecessarily.
  3. Aggregation: For high-frequency events, consider implementing aggregated logging (e.g., summarizing transaction processing stats periodically).
  4. Conditional Logging: Implement a mechanism to enable/disable verbose logging dynamically, allowing detailed logs when needed without impacting normal operation.

Example of reducing verbosity and adding conditional logging:

// Toggleable at runtime (sync/atomic) so detailed logs can be switched on
// without a restart.
var verboseLogging atomic.Bool

// logVerbose forwards structured key-value context to log.Debug only when
// verbose logging is enabled, matching the msg/ctx signature of the log package.
func logVerbose(msg string, ctx ...interface{}) {
    if verboseLogging.Load() {
        log.Debug(msg, ctx...)
    }
}

// Then replace log.Info calls with logVerbose:
logVerbose("already validated", "hash", hash, "from", from, "to", tx.To())

This approach allows you to enable detailed logging when needed without impacting normal operation.

Also applies to: 851-853, 856-858, 866-867, 872-872, 877-877, 884-884, 890-890, 892-894, 902-902

if list := pool.pending[from]; list != nil && list.Overlaps(tx) {

log.Info("already pending", "hash", hash, "from", from, "to", tx.To())

// Nonce already pending, check if required price bump is met
inserted, old := list.Add(tx, pool.currentState, pool.config.PriceBump, pool.chainconfig, pool.currentHead)

log.Info("already pending", "hash", hash, "inserted", inserted, "old", old.Hash().Hex())

if !inserted {
log.Error("already pending, ErrReplaceUnderpriced", "hash", hash)
pendingDiscardMeter.Mark(1)
return false, ErrReplaceUnderpriced
}
// New transaction is better, replace old one
if old != nil {
log.Info("already pending, new transaction is better, replace old one", "hash", hash, "old", old.Hash().Hex())
pool.all.Remove(old.Hash())
pool.calculateTxsLifecycle(types.Transactions{old}, time.Now())
pool.priced.Removed(1)
pendingReplaceMeter.Mark(1)
}
log.Info("already pending 1", "hash", hash)
pool.all.Add(tx, isLocal)
pool.priced.Put(tx, isLocal)
pool.journalTx(from, tx)
pool.queueTxEvent(tx)
log.Trace("Pooled new executable transaction", "hash", hash, "from", from, "to", tx.To())
log.Info("Pooled new executable transaction", "hash", hash, "from", from, "to", tx.To())

// Successful promotion, bump the heartbeat
pool.beats[from] = time.Now()
return old != nil, nil
}
// New transaction isn't replacing a pending one, push into queue
log.Info("new transaction isn't replacing a pending one, push into queue", "hash", hash, "from", from, "to", tx.To())
replaced, err = pool.enqueueTx(hash, tx, isLocal, true)
if err != nil {
return false, err
}
// Mark local addresses and journal local transactions
log.Info("mark local addresses and journal local transactions", "hash", hash, "from", from, "to", tx.To())
if local && !pool.locals.contains(from) {
log.Info("Setting new local account", "hash", hash, "from", from, "to", tx.To())
log.Info("Setting new local account", "address", from)
pool.locals.add(from)
pool.priced.Removed(pool.all.RemoteToLocals(pool.locals)) // Migrate the remotes if it's marked as local first time.
@@ -883,7 +899,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e
}
pool.journalTx(from, tx)

log.Trace("Pooled new future transaction", "hash", hash, "from", from, "to", tx.To())
log.Info("Pooled new future transaction", "hash", hash, "from", from, "to", tx.To())
return replaced, nil
}

@@ -1084,6 +1100,7 @@ func (pool *TxPool) addTxsLocked(txs []*types.Transaction, local bool) ([]error,
replaced, err := pool.add(tx, local)
errs[i] = err
if err == nil && !replaced {
log.Info("dirty.addTx", "hash", tx.Hash())
dirty.addTx(tx)
}
}
@@ -1262,6 +1279,7 @@ func (pool *TxPool) scheduleReorgLoop() {
} else {
dirtyAccounts.merge(req)
}
+			// TODO: print dirtyAccounts related in reorg
launchNextRun = true
pool.reorgDoneCh <- nextDone

4 changes: 4 additions & 0 deletions eth/fetcher/tx_fetcher.go
@@ -795,6 +795,10 @@ func (f *TxFetcher) scheduleFetches(timer *mclock.Timer, timeout chan struct{},
})

log.Debug("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes))
+	for _, hash := range hashes {
+		log.Info("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes), "hash", hash)
+	}

Comment on lines +798 to +801

🛠️ Refactor suggestion

Consider adjusting log levels and reducing log verbosity

The added logging statements provide valuable information for debugging the transaction retrieval process. However, there are a few considerations:

  1. The log.Info level might be too verbose for production environments, especially when inside a loop that could potentially generate a large number of log entries.
  2. Logging each transaction hash individually could impact performance if there are many transactions being processed.

Consider the following improvements:

  1. Use log.Debug instead of log.Info for detailed per-transaction logs:
-log.Info("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes), "hash", hash)
+log.Debug("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes), "hash", hash)
  2. Add a summary log at the Info level outside the loop:
if len(hashes) > 0 {
    log.Info("Scheduled transaction retrievals", "peer", peer, "announcements", len(f.announces[peer]), "retrievals", len(hashes))
}

These changes will maintain the detailed logging for debugging purposes while reducing the verbosity and potential performance impact in production environments.

peerAnnounceTxsLenGauge.Update(int64(len(f.announces[peer])))
peerRetrievalTxsLenGauge.Update(int64(len(hashes)))

6 changes: 6 additions & 0 deletions eth/handler.go
@@ -519,12 +519,18 @@ func (h *handler) BroadcastTransactions(txs types.Transactions) {
directCount += len(hashes)
peer.AsyncSendTransactions(hashes)
log.Debug("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes))
+		for _, hash := range hashes {
+			log.Info("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes), "hash", hash)
+		}
Comment on lines +522 to +524

⚠️ Potential issue

Consider optimizing the new logging statements.

While the additional logging provides valuable information for debugging, there are a few concerns:

  1. Using log.Info for each transaction might be too verbose. Consider using log.Debug instead.
  2. Logging inside loops could impact performance, especially with large transaction sets.
  3. Some information is duplicated from the existing debug logs.

Consider the following improvements:

  1. Change log.Info to log.Debug to reduce verbosity in production.
  2. Move the logging outside the loops and aggregate the information:
 for peer, hashes := range txset {
 	directPeers++
 	directCount += len(hashes)
 	peer.AsyncSendTransactions(hashes)
 	log.Debug("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes))
-	for _, hash := range hashes {
-		log.Info("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes), "hash", hash)
-	}
 }
+log.Debug("Transactions broadcast details", "directPeers", directPeers, "directCount", directCount)

 for peer, hashes := range annos {
 	annoPeers++
 	annoCount += len(hashes)
 	peer.AsyncSendPooledTransactionHashes(hashes)
 	log.Debug("Transactions being announced to", "peer", peer.String(), "len", len(hashes))
-	for _, hash := range hashes {
-		log.Info("Transactions being announced to", "peer", peer.String(), "len", len(hashes), "hash", hash)
-	}
 }
+log.Debug("Transactions announcement details", "annoPeers", annoPeers, "annoCount", annoCount)

This approach will provide the necessary information for debugging while minimizing the performance impact and log verbosity.

Also applies to: 531-533

}
for peer, hashes := range annos {
annoPeers++
annoCount += len(hashes)
peer.AsyncSendPooledTransactionHashes(hashes)
log.Debug("Transactions being announced to", "peer", peer.String(), "len", len(hashes))
+		for _, hash := range hashes {
+			log.Info("Transactions being announced to", "peer", peer.String(), "len", len(hashes), "hash", hash)
+		}
}
log.Debug("Transaction broadcast", "txs", len(txs),
"announce packs", annoPeers, "announced hashes", annoCount,
10 changes: 8 additions & 2 deletions eth/protocols/eth/broadcast.go
@@ -106,9 +106,12 @@ func (p *Peer) broadcastTransactions() {
done = make(chan struct{})
go func() {
log.Debug("Sending transactions", "count", len(txs))
+				for _, tx := range txs {
+					log.Info("Sending transactions", "count", len(txs), "tx", tx.Hash().Hex())
+				}
broadcastSendTxsLenGauge.Update(int64(len(txs)))
if err := p.SendTransactions(txs); err != nil {
log.Debug("Sending transactions", "count", len(txs), "err", err)
log.Error("Sending transactions", "count", len(txs), "err", err)
broadcastSendTxsFailMeter.Mark(1)
fail <- err
return
@@ -181,9 +184,12 @@ func (p *Peer) announceTransactions() {
done = make(chan struct{})
go func() {
log.Debug("Sending transaction announcements", "count", len(pending))
+				for _, tx := range pending {
+					log.Info("Sending transaction announcements", "count", len(pending), "tx", tx.Hex())
+				}
broadcastAnnoTxsLenGauge.Update(int64(len(pending)))
if err := p.sendPooledTransactionHashes(pending); err != nil {
log.Debug("Sending transaction announcements", "count", len(pending), "err", err)
log.Error("Sending transaction announcements", "count", len(pending), "err", err)
broadcastAnnoTxsFailMeter.Mark(1)
fail <- err
return
21 changes: 15 additions & 6 deletions eth/protocols/eth/handlers.go
@@ -338,12 +338,15 @@ func handleNewPooledTransactionHashes(backend Backend, msg Decoder, peer *Peer)
}
ann := new(NewPooledTransactionHashesPacket)
if err := msg.Decode(ann); err != nil {
log.Debug("Failed to decode `NewPooledTransactionHashesPacket`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `NewPooledTransactionHashesPacket`", "peer", peer.String(), "err", err)
newPooledTxHashesFailMeter.Mark(1)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
// Schedule all the unknown hashes for retrieval
log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann))
+	for _, hash := range *ann {
+		log.Info("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
+	}
Comment on lines +341 to +349

⚠️ Potential issue

Consider the performance impact of verbose logging

The changes improve error visibility by upgrading the log level for decoding errors from Debug to Error, which is good. However, the new Info level log for each transaction hash (line 348) might be too verbose and could potentially impact performance, especially when dealing with a large number of transactions.

Consider one of the following options:

  1. Remove the loop that logs each hash individually.
  2. Change the log level from Info to Debug for individual hash logging.
  3. Add a condition to only log individual hashes when their count is below a certain threshold.

Example implementation for option 3:

 log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann))
-for _, hash := range *ann {
-    log.Info("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
+if len(*ann) <= 10 {
+    for _, hash := range *ann {
+        log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
+    }
 }

This change will limit the logging of individual hashes to cases where there are 10 or fewer transactions, reducing the verbosity for larger batches.


newPooledTxHashesLenGauge.Update(int64(len(*ann)))
for _, hash := range *ann {
peer.markTransaction(hash)
@@ -355,12 +358,15 @@ func handleGetPooledTransactions66(backend Backend, msg Decoder, peer *Peer) err
// Decode the pooled transactions retrieval message
var query GetPooledTransactionsPacket66
if err := msg.Decode(&query); err != nil {
log.Debug("Failed to decode `GetPooledTransactionsPacket66`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `GetPooledTransactionsPacket66`", "peer", peer.String(), "err", err)
getPooledTxsFailMeter.Mark(1)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
hashes, txs := answerGetPooledTransactions(backend, query.GetPooledTransactionsPacket, peer)
log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes))
+	for _, hash := range hashes {
+		log.Info("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
+	}
Comment on lines +361 to +369

⚠️ Potential issue

Consistent logging strategy needed

Similar to the changes in handleNewPooledTransactionHashes, this function has improved error visibility by upgrading the log level for decoding errors. However, it also introduces potentially verbose logging for individual transaction hashes.

To maintain consistency and prevent performance issues, consider applying the same logging strategy as suggested for handleNewPooledTransactionHashes. For example:

 log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes))
-for _, hash := range hashes {
-    log.Info("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
+if len(hashes) <= 10 {
+    for _, hash := range hashes {
+        log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
+    }
 }

This change will limit the logging of individual hashes to cases where there are 10 or fewer transactions, reducing the verbosity for larger batches while maintaining consistency with the previous function.


getPooledTxsQueryLenGauge.Update(int64(len(query.GetPooledTransactionsPacket)))
getPooledTxsRetrievedLenGauge.Update(int64(len(hashes)))
return peer.ReplyPooledTransactionsRLP(query.RequestId, hashes, txs)
@@ -403,16 +409,19 @@ func handleTransactions(backend Backend, msg Decoder, peer *Peer) error {
var txs TransactionsPacket
if err := msg.Decode(&txs); err != nil {
handleTxsFailMeter.Mark(1)
log.Debug("Failed to decode `TransactionsPacket`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `TransactionsPacket`", "peer", peer.String(), "err", err)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
log.Debug("handleTransactions", "peer", peer.String(), "len(txs)", len(txs))
+	for _, tx := range txs {
+		log.Info("handleTransactions", "peer", peer.String(), "len(txs)", len(txs), "tx", tx.Hash().Hex())
+	}
Comment on lines +412 to +418

⚠️ Potential issue

Improve logging consistency and nil transaction handling

The changes in this function follow a similar pattern to the previous ones, with improved error visibility and potentially verbose transaction logging. Additionally, there's a new check for nil transactions.

  1. Verbose logging:
    The Info level log for each transaction (lines 416-418) might impact performance for large transaction batches.

  2. Nil transaction handling:
    The new error log for nil transactions (line 424) is a good addition for improved error reporting.

Consider the following changes:

  1. Adjust the transaction logging to be consistent with previous functions:
 log.Debug("handleTransactions", "peer", peer.String(), "len(txs)", len(txs))
-for _, tx := range txs {
-    log.Info("handleTransactions", "peer", peer.String(), "len(txs)", len(txs), "tx", tx.Hash().Hex())
+if len(txs) <= 10 {
+    for _, tx := range txs {
+        log.Debug("handleTransactions", "peer", peer.String(), "len(txs)", len(txs), "tx", tx.Hash().Hex())
+    }
 }
  2. Consider adding a metric for nil transactions:
 if tx == nil {
     handleTxsNilMeter.Mark(1)
     log.Error("handleTransactions: transaction is nil", "peer", peer.String(), "i", i)
+    nilTxCounter.Inc(1)
     return fmt.Errorf("%w: transaction %d is nil", errDecode, i)
 }

These changes will maintain consistency with previous functions and provide additional metrics for monitoring nil transactions.

Also applies to: 424-425

handleTxsLenGauge.Update(int64(len(txs)))
for i, tx := range txs {
// Validate and mark the remote transaction
if tx == nil {
handleTxsNilMeter.Mark(1)
log.Debug("handleTransactions: transaction is nil", "peer", peer.String(), "i", i)
log.Error("handleTransactions: transaction is nil", "peer", peer.String(), "i", i)
return fmt.Errorf("%w: transaction %d is nil", errDecode, i)
}
peer.markTransaction(tx.Hash())
@@ -429,7 +438,7 @@ func handlePooledTransactions66(backend Backend, msg Decoder, peer *Peer) error
var txs PooledTransactionsPacket66
if err := msg.Decode(&txs); err != nil {
pooledTxs66FailMeter.Mark(1)
log.Debug("Failed to decode `PooledTransactionsPacket66`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `PooledTransactionsPacket66`", "peer", peer.String(), "err", err)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
log.Debug("handlePooledTransactions66", "peer", peer.String(), "len(txs)", len(txs.PooledTransactionsPacket))
@@ -438,7 +447,7 @@ func handlePooledTransactions66(backend Backend, msg Decoder, peer *Peer) error
// Validate and mark the remote transaction
if tx == nil {
pooledTxs66NilMeter.Mark(1)
log.Debug("handlePooledTransactions: transaction is nil", "peer", peer.String(), "i", i)
log.Error("handlePooledTransactions: transaction is nil", "peer", peer.String(), "i", i)
return fmt.Errorf("%w: transaction %d is nil", errDecode, i)
}
peer.markTransaction(tx.Hash())
4 changes: 4 additions & 0 deletions eth/protocols/eth/peer.go
@@ -426,6 +426,10 @@ func (p *Peer) RequestTxs(hashes []common.Hash) error {
id := rand.Uint64()

log.Debug("Requesting transactions", "RequestId", id, "Peer.id", p.id, "count", len(hashes))
+	for _, hash := range hashes {
+		log.Info("Requesting transaction", "RequestId", id, "Peer.id", p.id, "count", len(hashes), "hash", hash)
+	}

peerRequestTxsCntGauge.Update(int64(len(hashes)))

requestTracker.Track(p.id, p.version, GetPooledTransactionsMsg, PooledTransactionsMsg, id)