Fix clippy warnings #500

Open · wants to merge 3 commits into master

Changes from 1 commit
Fix rest of clippy warnings
i1i1 committed Sep 23, 2022
commit 4660ab0c0a0a8a0dd4832737dcef6430cbf4a18d
11 changes: 11 additions & 0 deletions backend/common/src/assign_id.rs
@@ -31,6 +31,17 @@ pub struct AssignId<Id, Details> {
_id_type: std::marker::PhantomData<Id>,
}

impl<Id, Details> Default for AssignId<Id, Details>
where
Details: Eq + Hash,
Id: From<usize> + Copy,
usize: From<Id>,
{
fn default() -> Self {
Self::new()
}
}
Comment on lines +34 to +43
Collaborator
I guess #[derive(Default)] didn't work? I'd prefer that if possible :)

Member
BiMap doesn't impl Default
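
For context, a minimal (non-generic) sketch of the problem, with a hypothetical `Inner` type standing in for `BiMap`: `#[derive(Default)]` requires every field to implement `Default`, so the usual workaround is a manual impl that delegates to `new()`.

```rust
// Hypothetical stand-in for bimap::BiMap, which (per the comment above) has no Default impl.
struct Inner;

// #[derive(Default)]   // would not compile: the trait bound `Inner: Default` is not satisfied
struct AssignId {
    inner: Inner,
    next_id: usize,
}

impl AssignId {
    fn new() -> Self {
        Self { inner: Inner, next_id: 0 }
    }
}

// The manual impl sidesteps the missing bound by delegating to `new()`.
impl Default for AssignId {
    fn default() -> Self {
        Self::new()
    }
}

fn main() {
    let _map = AssignId::default();
}
```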


impl<Id, Details> AssignId<Id, Details>
where
Details: Eq + Hash,
2 changes: 2 additions & 0 deletions backend/common/src/dense_map.rs
@@ -23,6 +23,7 @@
/// Items are keyed by an Id, which can be any type you wish, but
/// must be convertible to/from a `usize`. This promotes using a
/// custom Id type to talk about items in the map.
#[derive(Default)]
pub struct DenseMap<Id, T> {
/// List of retired indexes that can be re-used
retired: Vec<usize>,
@@ -108,6 +109,7 @@ where
.filter_map(|(id, item)| Some((id.into(), item.as_mut()?)))
}

#[allow(clippy::should_implement_trait)]
pub fn into_iter(self) -> impl Iterator<Item = (Id, T)> {
self.items
.into_iter()
1 change: 1 addition & 0 deletions backend/common/src/multi_map_unique.rs
@@ -19,6 +19,7 @@ use std::hash::Hash;
/// A map where each key can contain multiple values. We enforce that a value
/// only ever belongs to one key at a time (the latest key it was inserted
/// against).
#[derive(Default)]
pub struct MultiMapUnique<K, V> {
value_to_key: HashMap<V, K>,
key_to_values: HashMap<K, HashSet<V>>,
2 changes: 1 addition & 1 deletion backend/common/src/node_message.rs
@@ -134,7 +134,7 @@ mod tests {
// know whether things can (de)serialize to bincode or not at runtime without failing unless
// we test the different types we want to (de)serialize ourselves. We just need to test each
// type, not each variant.
fn bincode_can_serialize_and_deserialize<'de, T>(item: T)
fn bincode_can_serialize_and_deserialize<T>(item: T)
where
T: Serialize + serde::de::DeserializeOwned,
{
8 changes: 7 additions & 1 deletion backend/common/src/rolling_total.rs
@@ -27,11 +27,17 @@ pub struct RollingTotalBuilder<Time: TimeSource = SystemTimeSource> {
time_source: Time,
}

impl Default for RollingTotalBuilder {
fn default() -> Self {
Self::new()
}
}

impl RollingTotalBuilder {
/// Build a [`RollingTotal`] struct. By default,
/// the window_size is 10s, the granularity is 1s,
/// and system time is used.
pub fn new() -> RollingTotalBuilder<SystemTimeSource> {
pub fn new() -> Self {
Self {
window_size_multiple: 10,
granularity: Duration::from_secs(1),
4 changes: 4 additions & 0 deletions backend/common/src/ws_client/receiver.rs
@@ -68,4 +68,8 @@ impl RecvMessage {
RecvMessage::Text(s) => s.len(),
}
}

pub fn is_empty(&self) -> bool {
self.len() == 0
}
}
8 changes: 4 additions & 4 deletions backend/telemetry_core/src/aggregator/inner_loop.rs
@@ -55,13 +55,13 @@ pub enum FromShardWebsocket {
Add {
local_id: ShardNodeId,
ip: std::net::IpAddr,
node: common::node_types::NodeDetails,
node: Box<common::node_types::NodeDetails>,
Member
std::mem::size_of::<NodeDetails>() == 111

I think that is small enough to avoid boxing this one.

Contributor Author
Yeah, but the enum variant is around 369 bytes:

warning: large size difference between variants
  --> telemetry_core/src/aggregator/inner_loop.rs:48:1
   |
48 |   / pub enum FromShardWebsocket {
49 |   |     /// When the socket is opened, it'll send this first
50 |   |     /// so that we have a way to communicate back to it.
51 |   |     Initialize {
...    |
55 | / |     Add {
56 | | |         local_id: ShardNodeId,
57 | | |         ip: std::net::IpAddr,
58 | | |         node: common::node_types::NodeDetails,
59 | | |         genesis_hash: common::node_types::BlockHash,
60 | | |     },
   | |_|_____- the largest variant contains at least 369 bytes
61 |   |     /// Update/pass through details about a node.
62 | / |     Update {
63 | | |         local_id: ShardNodeId,
64 | | |         payload: Box<node_message::Payload>,
65 | | |     },
   | |_|_____- the second-largest variant contains at least 16 bytes
...    |
69 |   |     Disconnected,
70 |   | }
   |   |_^ the entire enum is at least 376 bytes
   |
   = note: `#[warn(clippy::large_enum_variant)]` on by default
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#large_enum_variant
help: consider boxing the large fields to reduce the total size of the enum
   |
58 |         node: Box<common::node_types::NodeDetails>,
   |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

genesis_hash: common::node_types::BlockHash,
},
/// Update/pass through details about a node.
Update {
local_id: ShardNodeId,
payload: node_message::Payload,
payload: Box<node_message::Payload>,
Member
std::mem::size_of::<Payload>() == 344. I'm still in favor of not boxing this, but I'm not sure where to set the threshold without benchmarking it ^^.

Clippy seems to use 200 bytes as the default: https://rust-lang.github.io/rust-clippy/master/#large_enum_variant
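
For illustration only, a self-contained sketch (with a hypothetical `BigDetails` type, not the real telemetry payloads) of what the lint measures: the enum is as large as its largest variant, so boxing the big field shrinks every value of the enum.

```rust
use std::mem::size_of;

// Hypothetical large payload, roughly the size clippy reported for the real types.
struct BigDetails {
    data: [u8; 344],
}

enum Unboxed {
    Add(BigDetails), // every Unboxed value reserves space for this variant
    Remove(u64),
}

enum Boxed {
    Add(Box<BigDetails>), // only a pointer is stored inline
    Remove(u64),
}

fn main() {
    // The unboxed enum is at least as big as its largest variant (~352 bytes here),
    // while the boxed version is around pointer + discriminant size (~16 bytes).
    assert!(size_of::<Unboxed>() >= size_of::<BigDetails>());
    assert!(size_of::<Boxed>() < size_of::<Unboxed>());
    println!("unboxed: {} bytes", size_of::<Unboxed>());
    println!("boxed:   {} bytes", size_of::<Boxed>());
}
```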

},
/// Tell the aggregator that a node has been removed when it disconnects.
Remove { local_id: ShardNodeId },
@@ -327,7 +327,7 @@ impl InnerLoop {
} => {
// Conditionally modify the node's details to include the IP address.
node.ip = self.expose_node_ips.then_some(ip.to_string().into());
match self.node_state.add_node(genesis_hash, node) {
match self.node_state.add_node(genesis_hash, *node) {
state::AddNodeResult::ChainOnDenyList => {
if let Some(shard_conn) = self.shard_channels.get_mut(&shard_conn_id) {
let _ = shard_conn.send(ToShardWebsocket::Mute {
@@ -411,7 +411,7 @@ impl InnerLoop {

let mut feed_message_serializer = FeedMessageSerializer::new();
self.node_state
.update_node(node_id, payload, &mut feed_message_serializer);
.update_node(node_id, *payload, &mut feed_message_serializer);

if let Some(chain) = self.node_state.get_chain_by_node_id(node_id) {
let genesis_hash = chain.genesis_hash();
1 change: 1 addition & 0 deletions backend/telemetry_core/src/aggregator/mod.rs
@@ -14,6 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

#[allow(clippy::module_inception)]
mod aggregator;
mod aggregator_set;
mod inner_loop;
7 changes: 5 additions & 2 deletions backend/telemetry_core/src/main.rs
@@ -279,12 +279,15 @@ where
genesis_hash,
} => FromShardWebsocket::Add {
ip,
node,
node: Box::new(node),
genesis_hash,
local_id,
},
internal_messages::FromShardAggregator::UpdateNode { payload, local_id } => {
FromShardWebsocket::Update { local_id, payload }
FromShardWebsocket::Update {
local_id,
payload: Box::new(payload),
}
}
internal_messages::FromShardAggregator::RemoveNode { local_id } => {
FromShardWebsocket::Remove { local_id }
49 changes: 27 additions & 22 deletions backend/telemetry_core/src/state/chain.rs
@@ -19,6 +19,7 @@ use common::node_types::BlockHash;
use common::node_types::{Block, Timestamp};
use common::{id_type, time, DenseMap, MostSeen, NumStats};
use once_cell::sync::Lazy;
use std::cmp::Ordering;
use std::collections::HashSet;
use std::str::FromStr;
use std::time::{Duration, Instant};
@@ -252,30 +253,34 @@ impl Chain {
};

if node.update_block(*block) {
if block.height > self.best.height {
self.best = *block;
log::debug!(
"[{}] [nodes={}] new best block={}/{:?}",
self.labels.best(),
nodes_len,
self.best.height,
self.best.hash,
);
if let Some(timestamp) = self.timestamp {
self.block_times.push(now.saturating_sub(timestamp));
self.average_block_time = Some(self.block_times.average());
match block.height.cmp(&self.best.height) {
Member
I'm not a fan of this change; I think the old code was easier to read.

which clippy lint is this?
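
(Presumably `clippy::comparison_chain` — it fires when the same two values are compared with `>` and `==` in an `if`/`else if` chain and suggests a single `match` on `Ord::cmp`. A minimal reproduction, under that assumption:)

```rust
use std::cmp::Ordering;

fn classify(height: u64, best: u64) -> &'static str {
    // The original if/else-if shape that the lint flags:
    //
    //     if height > best { "new best" } else if height == best { "same height" } else { "behind" }
    //
    // and the suggested rewrite, comparing once via cmp():
    match height.cmp(&best) {
        Ordering::Greater => "new best",
        Ordering::Equal => "same height",
        Ordering::Less => "behind",
    }
}

fn main() {
    assert_eq!(classify(11, 10), "new best");
    assert_eq!(classify(10, 10), "same height");
    assert_eq!(classify(9, 10), "behind");
}
```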

Ordering::Greater => {
self.best = *block;
log::debug!(
"[{}] [nodes={}] new best block={}/{:?}",
self.labels.best(),
nodes_len,
self.best.height,
self.best.hash,
);
if let Some(timestamp) = self.timestamp {
self.block_times.push(now.saturating_sub(timestamp));
self.average_block_time = Some(self.block_times.average());
}
self.timestamp = Some(now);
feed.push(feed_message::BestBlock(
self.best.height,
now,
self.average_block_time,
));
propagation_time = Some(0);
}
self.timestamp = Some(now);
feed.push(feed_message::BestBlock(
self.best.height,
now,
self.average_block_time,
));
propagation_time = Some(0);
} else if block.height == self.best.height {
if let Some(timestamp) = self.timestamp {
propagation_time = Some(now.saturating_sub(timestamp));
Ordering::Equal => {
if let Some(timestamp) = self.timestamp {
propagation_time = Some(now.saturating_sub(timestamp))
}
}
Ordering::Less => (),
}

if let Some(details) = node.update_details(now, propagation_time) {
2 changes: 1 addition & 1 deletion backend/telemetry_core/src/state/counter.rs
@@ -40,7 +40,7 @@ where
K: Sized + std::hash::Hash + Eq,
{
/// Either adds or removes a single occurence of a given `key`.
pub fn modify<'a, Q>(&mut self, key: Option<&'a Q>, op: CounterValue)
pub fn modify<Q>(&mut self, key: Option<&'_ Q>, op: CounterValue)
where
Q: ?Sized + std::hash::Hash + Eq,
K: std::borrow::Borrow<Q>,
1 change: 1 addition & 0 deletions backend/telemetry_core/src/state/mod.rs
@@ -19,6 +19,7 @@ mod chain_stats;
mod counter;
mod node;

#[allow(clippy::module_inception)]
mod state;

pub use node::Node;
4 changes: 2 additions & 2 deletions backend/telemetry_core/src/state/state.rs
@@ -322,7 +322,7 @@ mod test {
assert_eq!(&*add_node_result.old_chain_label, "");
assert_eq!(add_node_result.new_chain_label, "Chain One");
assert_eq!(add_node_result.chain_node_count, 1);
assert_eq!(add_node_result.has_chain_label_changed, true);
assert!(add_node_result.has_chain_label_changed);

let add_result = state.add_node(chain1_genesis, node("A", "Chain One"));

@@ -336,7 +336,7 @@
assert_eq!(&*add_node_result.old_chain_label, "Chain One");
assert_eq!(add_node_result.new_chain_label, "Chain One");
assert_eq!(add_node_result.chain_node_count, 2);
assert_eq!(add_node_result.has_chain_label_changed, false);
assert!(!add_node_result.has_chain_label_changed);
}

#[test]
8 changes: 3 additions & 5 deletions backend/telemetry_core/tests/e2e_tests.rs
@@ -592,14 +592,12 @@ async fn e2e_node_banned_if_it_sends_too_much_data() {
node_tx.is_closed()
}

assert_eq!(
try_send_data(1000, 10, 1000).await,
false,
assert!(
!try_send_data(1000, 10, 1000).await,
"shouldn't be closed; we didn't exceed 10x threshold"
);
assert_eq!(
assert!(
try_send_data(999, 10, 1000).await,
true,
"should be closed; we sent just over 10x the block threshold"
);
}
1 change: 0 additions & 1 deletion backend/telemetry_core/tests/soak_tests.rs
@@ -83,7 +83,6 @@ async fn run_soak_test(opts: SoakTestOpts) {
ServerOpts {
release_mode: true,
log_output: opts.log_output,
..Default::default()
},
CoreOpts {
worker_threads: opts.core_worker_threads,
17 changes: 12 additions & 5 deletions backend/telemetry_shard/src/aggregator.rs
@@ -65,13 +65,13 @@ pub enum FromWebsocket {
Add {
message_id: node_message::NodeMessageId,
ip: std::net::IpAddr,
node: common::node_types::NodeDetails,
node: Box<common::node_types::NodeDetails>,
genesis_hash: BlockHash,
},
/// Update/pass through details about a node.
Update {
message_id: node_message::NodeMessageId,
payload: node_message::Payload,
payload: Box<node_message::Payload>,
},
/// remove a node with the given message ID
Remove {
@@ -118,7 +118,11 @@ impl Aggregator {
Message::Disconnected => ToAggregator::DisconnectedFromTelemetryCore,
Message::Data(data) => ToAggregator::FromTelemetryCore(data),
};
if let Err(_) = tx_to_aggregator2.send_async(msg_to_aggregator).await {
if tx_to_aggregator2
.send_async(msg_to_aggregator)
.await
.is_err()
{
// This will close the ws channels, which themselves log messages.
break;
}
@@ -217,7 +221,7 @@ impl Aggregator {
let _ = tx_to_telemetry_core
.send_async(FromShardAggregator::AddNode {
ip,
node,
node: *node,
genesis_hash,
local_id,
})
@@ -248,7 +252,10 @@

// Send the message to the telemetry core with this local ID:
let _ = tx_to_telemetry_core
.send_async(FromShardAggregator::UpdateNode { local_id, payload })
.send_async(FromShardAggregator::UpdateNode {
local_id,
payload: *payload,
})
.await;
}
ToAggregator::FromWebsocket(conn_id, FromWebsocket::Remove { message_id }) => {
2 changes: 1 addition & 1 deletion backend/telemetry_shard/src/connection.rs
@@ -50,7 +50,7 @@ where
// Throw away any pending messages from the incoming channel so that it
// doesn't get filled up and begin blocking while we're looping and waiting
// for a reconnection.
while let Ok(_) = rx_in.try_recv() {}
while rx_in.try_recv().is_ok() {}

// Try to connect. If connection established, we serialize and forward messages
// to/from the core. If the external channels break, we end for good. If the internal
4 changes: 3 additions & 1 deletion backend/telemetry_shard/src/main.rs
@@ -205,6 +205,7 @@ async fn start_server(opts: Opts) -> anyhow::Result<()> {
}

/// This takes care of handling messages from an established socket connection.
#[allow(clippy::too_many_arguments)]
async fn handle_node_websocket_connection<S>(
real_addr: IpAddr,
ws_send: http_utils::WsSender,
@@ -359,14 +360,15 @@ where
let _ = tx_to_aggregator.send(FromWebsocket::Add {
message_id,
ip: real_addr,
node: info.node,
node: Box::new(info.node),
genesis_hash: info.genesis_hash,
}).await;
}
// Anything that's not an "Add" is an Update. The aggregator will ignore
// updates against a message_id that hasn't first been Added, above.
else if let Some(last_seen) = allowed_message_ids.get_mut(&message_id) {
niklasad1 marked this conversation as resolved.
*last_seen = Instant::now();
let payload = Box::new(payload);
if let Err(e) = tx_to_aggregator.send(FromWebsocket::Update { message_id, payload } ).await {
log::error!("Failed to send node message to aggregator: {}", e);
continue;
1 change: 1 addition & 0 deletions backend/test_utils/src/server/mod.rs
@@ -14,6 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

#[allow(clippy::module_inception)]
mod server;
mod utils;

8 changes: 4 additions & 4 deletions backend/test_utils/src/server/server.rs
@@ -502,10 +502,10 @@ impl Command {
}
}

impl Into<TokioCommand> for Command {
fn into(self) -> TokioCommand {
let mut cmd = TokioCommand::new(self.command);
cmd.args(self.args);
impl From<Command> for TokioCommand {
fn from(Command { command, args }: Command) -> Self {
let mut cmd = Self::new(command);
cmd.args(args);
cmd
}
}