
Rust 1.80.0 lints #6183

Merged 1 commit on Jul 25, 2024.
2 changes: 1 addition & 1 deletion beacon_node/beacon_chain/src/beacon_chain.rs
```diff
@@ -1450,7 +1450,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
     /// Returns the `BeaconState` the current slot (viz., `self.slot()`).
     ///
     /// - A reference to the head state (note: this keeps a read lock on the head, try to use
-    /// sparingly).
+    ///   sparingly).
     /// - The head state, but with skipped slots (for states later than the head).
     ///
     /// Returns `None` when there is an error skipping to a future state or the slot clock cannot
```
2 changes: 2 additions & 0 deletions beacon_node/beacon_chain/src/block_verification.rs
```diff
@@ -300,7 +300,9 @@ pub enum BlockError<E: EthSpec> {
     /// 1. The block proposer is faulty
     /// 2. We received the blob over rpc and it is invalid (inconsistent w.r.t the block).
     /// 3. It is an internal error
+    ///
    /// For all these cases, we cannot penalize the peer that gave us the block.
+    ///
    /// TODO: We may need to penalize the peer that gave us a potentially invalid rpc blob.
    /// https://github.com/sigp/lighthouse/issues/4546
    AvailabilityCheck(AvailabilityCheckError),
```
4 changes: 2 additions & 2 deletions beacon_node/beacon_chain/src/test_utils.rs
```diff
@@ -2511,9 +2511,9 @@ where
     /// Creates two forks:
     ///
     /// - The "honest" fork: created by the `honest_validators` who have built `honest_fork_blocks`
-    /// on the head
+    ///   on the head
     /// - The "faulty" fork: created by the `faulty_validators` who skipped a slot and
-    /// then built `faulty_fork_blocks`.
+    ///   then built `faulty_fork_blocks`.
     ///
     /// Returns `(honest_head, faulty_head)`, the roots of the blocks at the top of each chain.
     pub async fn generate_two_forks_by_skipping_a_block(
```
2 changes: 1 addition & 1 deletion beacon_node/eth1/src/block_cache.rs
```diff
@@ -135,7 +135,7 @@ impl BlockCache {
     ///
     /// - If the cache is not empty and `item.block.block_number - 1` is not already in `self`.
     /// - If `item.block.block_number` is in `self`, but is not identical to the supplied
-    /// `Eth1Snapshot`.
+    ///   `Eth1Snapshot`.
     /// - If `item.block.timestamp` is prior to the parent.
     pub fn insert_root_or_child(&mut self, block: Eth1Block) -> Result<(), Error> {
         let expected_block_number = self
```
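The contiguity rule in the doc comment above can be sketched as a pure function. This is illustrative only: `can_insert` and the bare `u64` block numbers are hypothetical simplifications, not the eth1 `BlockCache` API.

```rust
// Sketch of insert_root_or_child's precondition: a block may enter the
// cache only if the cache is empty, or the block extends the highest
// cached block by exactly one. (Hypothetical helper, not Lighthouse code.)
fn can_insert(highest_in_cache: Option<u64>, new_block_number: u64) -> bool {
    match highest_in_cache {
        // An empty cache accepts any starting block ("root").
        None => true,
        // Otherwise the block must be the direct child of the cache head.
        Some(highest) => new_block_number == highest + 1,
    }
}
```

The real method additionally rejects duplicates that differ from the stored snapshot and blocks whose timestamp precedes their parent's.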
2 changes: 1 addition & 1 deletion beacon_node/genesis/src/eth1_genesis_service.rs
```diff
@@ -352,7 +352,7 @@ impl Eth1GenesisService {
     ///
     /// - `Ok(genesis_state)`: if all went well.
     /// - `Err(e)`: if the given `eth1_block` was not a viable block to trigger genesis or there was
-    /// an internal error.
+    ///   an internal error.
     fn genesis_from_eth1_block<E: EthSpec>(
         &self,
         eth1_block: Eth1Block,
```
1 change: 1 addition & 0 deletions beacon_node/lighthouse_network/gossipsub/Cargo.toml
```diff
@@ -11,6 +11,7 @@ categories = ["network-programming", "asynchronous"]

 [features]
 wasm-bindgen = ["getrandom/js"]
+rsa = []

 [dependencies]
 async-channel = { workspace = true }
```

Member Author commented on `rsa = []`:

> Wasn't sure if this feature is ever supposed to be used, but adding it like this fixes the lint
16 changes: 8 additions & 8 deletions beacon_node/lighthouse_network/gossipsub/src/lib.rs
```diff
@@ -43,16 +43,16 @@
 //! implementations, due to undefined elements in the current specification.
 //!
 //! - **Topics** - In gossipsub, topics configurable by the `hash_topics` configuration parameter.
-//! Topics are of type [`TopicHash`]. The current go implementation uses raw utf-8 strings, and this
-//! is default configuration in rust-libp2p. Topics can be hashed (SHA256 hashed then base64
-//! encoded) by setting the `hash_topics` configuration parameter to true.
+//!   Topics are of type [`TopicHash`]. The current go implementation uses raw utf-8 strings, and this
+//!   is default configuration in rust-libp2p. Topics can be hashed (SHA256 hashed then base64
+//!   encoded) by setting the `hash_topics` configuration parameter to true.
 //!
 //! - **Sequence Numbers** - A message on the gossipsub network is identified by the source
-//! [`PeerId`](libp2p_identity::PeerId) and a nonce (sequence number) of the message. The sequence numbers in
-//! this implementation are sent as raw bytes across the wire. They are 64-bit big-endian unsigned
-//! integers. When messages are signed, they are monotonically increasing integers starting from a
-//! random value and wrapping around u64::MAX. When messages are unsigned, they are chosen at random.
-//! NOTE: These numbers are sequential in the current go implementation.
+//!   [`PeerId`](libp2p_identity::PeerId) and a nonce (sequence number) of the message. The sequence numbers in
+//!   this implementation are sent as raw bytes across the wire. They are 64-bit big-endian unsigned
+//!   integers. When messages are signed, they are monotonically increasing integers starting from a
+//!   random value and wrapping around u64::MAX. When messages are unsigned, they are chosen at random.
+//!   NOTE: These numbers are sequential in the current go implementation.
 //!
 //! # Peer Discovery
 //!
```
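The sequence-number passage in that doc comment lends itself to a tiny sketch. The helper names below are hypothetical, but the wire format they illustrate (a raw 64-bit big-endian unsigned integer) is the one the comment describes:

```rust
// Gossipsub-style sequence numbers: 64-bit unsigned integers sent as
// raw big-endian bytes on the wire. (Illustrative helpers, not the
// gossipsub crate's API.)
fn encode_seqno(seqno: u64) -> [u8; 8] {
    seqno.to_be_bytes()
}

fn decode_seqno(wire: [u8; 8]) -> u64 {
    u64::from_be_bytes(wire)
}
```

For signed messages the counter increases monotonically and wraps at `u64::MAX`, i.e. `u64::MAX.wrapping_add(1) == 0`.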
6 changes: 3 additions & 3 deletions beacon_node/lighthouse_network/src/peer_manager/mod.rs
```diff
@@ -918,9 +918,9 @@ impl<E: EthSpec> PeerManager<E> {
     ///   number should be set low as an absolute lower bound to maintain peers on the sync
     ///   committees.
     /// - Do not prune trusted peers. NOTE: This means if a user has more trusted peers than the
-    /// excess peer limit, all of the following logic is subverted as we will not prune any peers.
-    /// Also, the more trusted peers a user has, the less room Lighthouse has to efficiently manage
-    /// its peers across the subnets.
+    ///   excess peer limit, all of the following logic is subverted as we will not prune any peers.
+    ///   Also, the more trusted peers a user has, the less room Lighthouse has to efficiently manage
+    ///   its peers across the subnets.
     ///
     /// Prune peers in the following order:
     /// 1. Remove worst scoring peers
```
1 change: 1 addition & 0 deletions beacon_node/network/src/sync/block_lookups/mod.rs
```diff
@@ -214,6 +214,7 @@ impl<T: BeaconChainTypes> BlockLookups<T> {
     /// Check if this new lookup extends a bad chain:
     /// - Extending `child_block_root_trigger` would exceed the max depth
     /// - `block_root_to_search` is a failed chain
+    ///
    /// Returns true if the lookup is created or already exists
    pub fn search_parent_of_child(
        &mut self,
```
8 changes: 4 additions & 4 deletions beacon_node/network/src/sync/manager.rs
```diff
@@ -448,12 +448,12 @@ impl<T: BeaconChainTypes> SyncManager<T> {
     ///
     /// The logic for which sync should be running is as follows:
     /// - If there is a range-sync running (or required) pause any backfill and let range-sync
-    /// complete.
+    ///   complete.
     /// - If there is no current range sync, check for any requirement to backfill and either
-    /// start/resume a backfill sync if required. The global state will be BackFillSync if a
-    /// backfill sync is running.
+    ///   start/resume a backfill sync if required. The global state will be BackFillSync if a
+    ///   backfill sync is running.
     /// - If there is no range sync and no required backfill and we have synced up to the currently
-    /// known peers, we consider ourselves synced.
+    ///   known peers, we consider ourselves synced.
     fn update_sync_state(&mut self) {
         let new_state: SyncState = match self.range_sync.state() {
             Err(e) => {
```
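The three-way priority in that doc comment can be sketched as a pure function. All names here are illustrative; Lighthouse's real `update_sync_state` consults `range_sync.state()` and the backfill machinery rather than plain booleans:

```rust
// Illustrative priority order for the global sync state:
// range sync first, then backfill, then synced.
#[derive(Debug, PartialEq, Eq)]
enum SketchSyncState {
    RangeSyncing,
    BackFillSyncing,
    Synced,
    Stalled,
}

fn decide_sync_state(
    range_sync_active: bool,
    backfill_required: bool,
    synced_to_known_peers: bool,
) -> SketchSyncState {
    if range_sync_active {
        // A running (or required) range sync pauses any backfill.
        SketchSyncState::RangeSyncing
    } else if backfill_required {
        // No range sync: start or resume the backfill sync.
        SketchSyncState::BackFillSyncing
    } else if synced_to_known_peers {
        SketchSyncState::Synced
    } else {
        // Simplification: the real state machine has more states here.
        SketchSyncState::Stalled
    }
}
```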
2 changes: 1 addition & 1 deletion beacon_node/network/src/sync/range_sync/range.rs
```diff
@@ -22,7 +22,7 @@
 //! - Only one finalized chain can sync at a time
 //! - The finalized chain with the largest peer pool takes priority.
 //! - As one finalized chain completes, others are checked to see if we they can be continued,
-//! otherwise they are removed.
+//!   otherwise they are removed.
 //!
 //! ## Head Chain Sync
 //!
```
2 changes: 1 addition & 1 deletion beacon_node/operation_pool/src/max_cover.rs
```diff
@@ -7,7 +7,7 @@ use itertools::Itertools;
 /// * `item`: something that implements this trait
 /// * `element`: something contained in a set, and covered by the covering set of an item
 /// * `object`: something extracted from an item in order to comprise a solution
-/// See: https://en.wikipedia.org/wiki/Maximum_coverage_problem
+///   See: https://en.wikipedia.org/wiki/Maximum_coverage_problem
 pub trait MaxCover: Clone {
     /// The result type, of which we would eventually like a collection of maximal quality.
     type Object: Clone;
```
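As background for the `MaxCover` trait, here is a minimal greedy approximation of the maximum-coverage problem the linked article describes. This is a sketch of the classic algorithm, not the operation pool's implementation:

```rust
use std::collections::HashSet;

// Greedy maximum coverage: repeatedly take the item covering the most
// still-uncovered elements, up to `limit` items. The greedy strategy
// achieves a (1 - 1/e) approximation ratio for this NP-hard problem.
fn greedy_max_cover(mut items: Vec<HashSet<u32>>, limit: usize) -> HashSet<u32> {
    let mut covered: HashSet<u32> = HashSet::new();
    for _ in 0..limit {
        // Index and marginal gain of the item with the largest coverage.
        let best = items
            .iter()
            .enumerate()
            .max_by_key(|(_, set)| set.difference(&covered).count())
            .map(|(i, set)| (i, set.difference(&covered).count()));
        match best {
            Some((i, gain)) if gain > 0 => {
                covered.extend(items.swap_remove(i));
            }
            _ => break, // nothing left adds coverage
        }
    }
    covered
}
```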
4 changes: 2 additions & 2 deletions common/lighthouse_metrics/src/lib.rs
```diff
@@ -4,9 +4,9 @@
 //! [Prometheus docs](https://prometheus.io/docs/concepts/metric_types/)):
 //!
 //! - `Histogram`: used with `start_timer(..)` and `stop_timer(..)` to record durations (e.g.,
-//! block processing time).
+//!   block processing time).
 //! - `IncCounter`: used to represent an ideally ever-growing, never-shrinking integer (e.g.,
-//! number of block processing requests).
+//!   number of block processing requests).
 //! - `IntGauge`: used to represent an varying integer (e.g., number of attestations per block).
 //!
 //! ## Important
```
2 changes: 1 addition & 1 deletion common/logging/src/async_record.rs
```diff
@@ -175,7 +175,7 @@ impl Serialize for AsyncRecord {
         // Convoluted pattern to avoid binding `format_args!` to a temporary.
         // See: https://stackoverflow.com/questions/56304313/cannot-use-format-args-due-to-temporary-value-is-freed-at-the-end-of-this-state
         let mut f = |msg: std::fmt::Arguments| {
-            map_serializer.serialize_entry("msg", &msg.to_string())?;
+            map_serializer.serialize_entry("msg", msg.to_string())?;

             let record = Record::new(&rs, &msg, BorrowedKV(&(*kv)));
             self.logger_values
```
2 changes: 1 addition & 1 deletion common/validator_dir/src/lib.rs
```diff
@@ -1,7 +1,7 @@
 //! Provides:
 //!
 //! - `ValidatorDir`: manages a directory containing validator keypairs, deposit info and other
-//! things.
+//!   things.
 //!
 //! This crate is intended to be used by the account manager to create validators and the validator
 //! client to load those validators.
```
2 changes: 1 addition & 1 deletion consensus/proto_array/src/proto_array.rs
```diff
@@ -149,7 +149,7 @@ impl ProtoArray {
     /// - Update the node's weight with the corresponding delta.
     /// - Back-propagate each node's delta to its parents delta.
     /// - Compare the current node with the parents best-child, updating it if the current node
-    /// should become the best child.
+    ///   should become the best child.
     /// - If required, update the parents best-descendant with the current node or its best-descendant.
     #[allow(clippy::too_many_arguments)]
     pub fn apply_score_changes<E: EthSpec>(
```
2 changes: 1 addition & 1 deletion consensus/proto_array/src/proto_array_fork_choice.rs
```diff
@@ -896,7 +896,7 @@ impl ProtoArrayForkChoice {
 ///
 /// - If a value in `indices` is greater to or equal to `indices.len()`.
 /// - If some `Hash256` in `votes` is not a key in `indices` (except for `Hash256::zero()`, this is
-/// always valid).
+///   always valid).
 fn compute_deltas(
     indices: &HashMap<Hash256, usize>,
     votes: &mut ElasticList<VoteTracker>,
```
```diff
@@ -121,7 +121,7 @@ where
     /// are valid.
     ///
     /// * : _Does not verify any signatures in `block.body.deposits`. A block is still valid if it
-    /// contains invalid signatures on deposits._
+    ///   contains invalid signatures on deposits._
     ///
     /// See `Self::verify` for more detail.
     pub fn verify_entire_block<Payload: AbstractExecPayload<E>>(
```
4 changes: 2 additions & 2 deletions consensus/swap_or_not_shuffle/src/lib.rs
```diff
@@ -7,9 +7,9 @@
 //! There are two functions exported by this crate:
 //!
 //! - `compute_shuffled_index`: given a single index, computes the index resulting from a shuffle.
-//! Runs in less time than it takes to run `shuffle_list`.
+//!   Runs in less time than it takes to run `shuffle_list`.
 //! - `shuffle_list`: shuffles an entire list in-place. Runs in less time than it takes to run
-//! `compute_shuffled_index` on each index.
+//!   `compute_shuffled_index` on each index.
 //!
 //! In general, use `compute_shuffled_index` to calculate the shuffling of a small subset of a much
 //! larger list (~250x larger is a good guide, but solid figures yet to be calculated).
```
2 changes: 1 addition & 1 deletion consensus/types/src/shuffling_id.rs
```diff
@@ -11,7 +11,7 @@ use std::hash::Hash;
 ///
 /// - The epoch for which the shuffling should be effective.
 /// - A block root, where this is the root at the *last* slot of the penultimate epoch. I.e., the
-/// final block which contributed a randao reveal to the seed for the shuffling.
+///   final block which contributed a randao reveal to the seed for the shuffling.
 ///
 /// The struct stores exactly that 2-tuple.
 #[derive(Debug, PartialEq, Eq, Clone, Hash, Serialize, Deserialize, Encode, Decode)]
```
2 changes: 1 addition & 1 deletion lighthouse/tests/beacon_node.rs
```diff
@@ -2253,7 +2253,7 @@ fn slasher_broadcast_flag_false() {
     });
 }

-#[cfg(all(feature = "lmdb"))]
+#[cfg(all(feature = "slasher-lmdb"))]
 #[test]
 fn slasher_backend_override_to_default() {
     // Hard to test this flag because all but one backend is disabled by default and the backend
```
16 changes: 8 additions & 8 deletions slasher/src/database.rs
```diff
@@ -409,16 +409,16 @@ impl<E: EthSpec> SlasherDB<E> {
         for target_epoch in (start_epoch..max_target.as_u64()).map(Epoch::new) {
             txn.put(
                 &self.databases.attesters_db,
-                &AttesterKey::new(validator_index, target_epoch, &self.config),
+                AttesterKey::new(validator_index, target_epoch, &self.config),
                 CompactAttesterRecord::null().as_bytes(),
             )?;
         }
     }

     txn.put(
         &self.databases.attesters_max_targets_db,
-        &CurrentEpochKey::new(validator_index),
-        &max_target.as_ssz_bytes(),
+        CurrentEpochKey::new(validator_index),
+        max_target.as_ssz_bytes(),
     )?;
     Ok(())
 }
@@ -444,8 +444,8 @@
     ) -> Result<(), Error> {
         txn.put(
             &self.databases.current_epochs_db,
-            &CurrentEpochKey::new(validator_index),
-            &current_epoch.as_ssz_bytes(),
+            CurrentEpochKey::new(validator_index),
+            current_epoch.as_ssz_bytes(),
         )?;
         Ok(())
     }
@@ -621,7 +621,7 @@ impl<E: EthSpec> SlasherDB<E> {

         txn.put(
             &self.databases.attesters_db,
-            &AttesterKey::new(validator_index, target_epoch, &self.config),
+            AttesterKey::new(validator_index, target_epoch, &self.config),
             indexed_attestation_id,
         )?;
@@ -699,8 +699,8 @@ impl<E: EthSpec> SlasherDB<E> {
         } else {
             txn.put(
                 &self.databases.proposers_db,
-                &ProposerKey::new(proposer_index, slot),
-                &block_header.as_ssz_bytes(),
+                ProposerKey::new(proposer_index, slot),
+                block_header.as_ssz_bytes(),
             )?;
             Ok(ProposerSlashingStatus::NotSlashable)
         }
```
4 changes: 2 additions & 2 deletions validator_client/src/http_api/api_secret.rs
```diff
@@ -15,12 +15,12 @@ pub const PK_LEN: usize = 33;
 /// Provides convenience functions to ultimately provide:
 ///
 /// - Verification of proof-of-knowledge of the public key in `self` for incoming HTTP requests,
-/// via the `Authorization` header.
+///   via the `Authorization` header.
 ///
 /// The aforementioned scheme was first defined here:
 ///
 /// https://github.com/sigp/lighthouse/issues/1269#issuecomment-649879855
-/// 
+///
 /// This scheme has since been tweaked to remove VC response signing and secp256k1 key generation.
 /// https://github.com/sigp/lighthouse/issues/5423
 pub struct ApiSecret {
```
2 changes: 1 addition & 1 deletion validator_client/src/validator_store.rs
```diff
@@ -502,7 +502,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
     /// Translate the per validator `builder_proposals`, `builder_boost_factor` and
     /// `prefer_builder_proposals` to a boost factor, if available.
     /// - If `prefer_builder_proposals` is true, set boost factor to `u64::MAX` to indicate a
-    /// preference for builder payloads.
+    ///   preference for builder payloads.
     /// - If `builder_boost_factor` is a value other than None, return its value as the boost factor.
     /// - If `builder_proposals` is set to false, set boost factor to 0 to indicate a preference for
     /// local payloads.
```
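The precedence rules in that doc comment reduce to a small pure function. This is a sketch under the assumption of standalone inputs; the real `ValidatorStore` method reads per-validator definitions and falls through to global flags:

```rust
// Illustrative translation of the builder-preference settings into a
// single boost factor, mirroring the precedence described above.
fn builder_boost_factor(
    prefer_builder_proposals: bool,
    builder_boost_factor: Option<u64>,
    builder_proposals: Option<bool>,
) -> Option<u64> {
    if prefer_builder_proposals {
        // Strongest possible preference for builder payloads.
        return Some(u64::MAX);
    }
    if builder_boost_factor.is_some() {
        // An explicit factor wins over the boolean flag.
        return builder_boost_factor;
    }
    if builder_proposals == Some(false) {
        // Builder proposals explicitly disabled: prefer local payloads.
        return Some(0);
    }
    None // no preference expressed; fall back to defaults
}
```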
8 changes: 4 additions & 4 deletions watch/src/updater/handler.rs
```diff
@@ -112,14 +112,14 @@ impl<E: EthSpec> UpdateHandler<E> {

     /// Performs a head update with the following steps:
     /// 1. Pull the latest header from the beacon node and the latest canonical slot from the
-    /// database.
+    ///    database.
     /// 2. Loop back through the beacon node and database to find the first matching slot -> root
-    /// pair.
+    ///    pair.
     /// 3. Go back `MAX_EXPECTED_REORG_LENGTH` slots through the database ensuring it is
-    /// consistent with the beacon node. If a re-org occurs beyond this range, we cannot recover.
+    ///    consistent with the beacon node. If a re-org occurs beyond this range, we cannot recover.
     /// 4. Remove any invalid slots from the database.
     /// 5. Sync all blocks between the first valid block of the database and the head of the beacon
-    /// chain.
+    ///    chain.
     ///
     /// In the event there are no slots present in the database, it will sync from the head block
     /// block back to the first slot of the epoch.
```
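The five steps above amount to rewinding until node and database agree, then resyncing forward. Step 2 is the interesting part; a heavily simplified sketch over in-memory maps (all names hypothetical, not the watch crate's API):

```rust
use std::collections::HashMap;

// Find the highest slot at which the database agrees with the beacon
// node, walking back from `head_slot` (step 2 of the head update
// described above). Slots missing from either side do not match.
fn first_matching_slot(
    node: &HashMap<u64, [u8; 32]>, // slot -> block root per the beacon node
    db: &HashMap<u64, [u8; 32]>,   // slot -> block root per the database
    head_slot: u64,
) -> Option<u64> {
    (0..=head_slot)
        .rev()
        .find(|slot| match (node.get(slot), db.get(slot)) {
            (Some(a), Some(b)) => a == b,
            _ => false,
        })
}
```

Everything above the returned slot would then be removed from the database (step 4) and refilled from the node (step 5).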