
Impl Clone and PartialEq for ChannelMonitor #2331

Closed
19 changes: 19 additions & 0 deletions lightning/src/chain/chainmonitor.rs
@@ -556,6 +556,25 @@ where C::Target: chain::Filter,
}
}

impl<ChannelSigner: WriteableEcdsaChannelSigner, C: Deref, T: Deref, F: Deref, L: Deref, P: Deref> ChainMonitor<ChannelSigner, C, T, F, L, P>
where C::Target: chain::Filter,
T::Target: BroadcasterInterface,
F::Target: FeeEstimator,
L::Target: Logger,
P::Target: Persist<ChannelSigner>,
ChannelSigner: Clone
{
/// Gets all the funding outpoints with each [`ChannelMonitor`] being monitored.
///
/// Note that [`ChannelMonitor`]s are not removed when a channel is closed as they are always
/// monitoring for on-chain state resolutions.
pub fn get_monitors(&self) -> HashMap<OutPoint, ChannelMonitor<ChannelSigner>> {
self.monitors.read().unwrap().iter().map(|(outpoint, holder)| {
Collaborator:
Hmm, what's the expected use-case here? I'm not convinced a full clone of each monitor in a HashMap is the right API - if a user wants to look up a monitor by the funding txo, they should use the existing get_monitor fn. If they're looking to serialize a copy of each monitor this will result in unnecessary clones and a hashmap just to iterate over it. Instead, we could either return an impl Iterator or a vec of the funding outpoints?

Contributor Author:
There's already a `list_monitors` that gives a `Vec<OutPoint>`, and `get_monitor(outpoint)`, but these require taking a bunch of locks; this replaces them so we can get all the monitors under one lock. The idea is to be able to get all of our monitors so we can create a backup without having to iterate over each one ourselves.

Doing an iterator might be better than the hashmap here

Collaborator:
Hmm, I think the repeated mutexes is probably faster than cloning the full set of monitors, at least if they're not contested :). Plus it avoids potential heap fragmentation. I'd definitely prefer an iterator.

Contributor Author (@benthecarman, Jun 7, 2023):
I don't think this can be an iterator because of the `RwLock`. Maybe I'm doing something wrong, but I always get a `returns a value referencing data owned by the current function` error that seems to only be solved by doing an allocation.

Collaborator:

Mmm, right, we can't do a straight iterator, we can get the list of keys and build an iterator over those, but we'd have to do map lookups for each key as a part of the next iteration (or in some iterator deref, ala LockedChannelMonitor, which could be adapted to hold a reference to the RwLockReadGuard). Honestly given monitors are pretty big that may well be more efficient, at least for your use-case of just iterating them all and serializing them.

(*outpoint, holder.monitor.clone())
}).collect()
}
}
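The guard-holding alternative discussed in the thread above can be sketched outside LDK's types. This is a minimal stand-alone model, with hypothetical `Monitors` and `LockedMonitors` types standing in for `ChainMonitor` and a `LockedChannelMonitor`-style wrapper: returning an iterator directly would borrow from a read guard dropped at function exit (the error quoted above), but returning a wrapper that owns the `RwLockReadGuard` lets callers iterate without cloning each monitor.

```rust
use std::collections::HashMap;
use std::sync::{RwLock, RwLockReadGuard};

// Hypothetical stand-ins for OutPoint and ChannelMonitor.
type Key = u32;

#[derive(Clone, Debug, PartialEq)]
struct Monitor { update_id: u64 }

struct Monitors { map: RwLock<HashMap<Key, Monitor>> }

// Holds the read lock for as long as it is borrowed, so iteration needs
// neither per-key lock acquisitions nor a full clone into a HashMap.
struct LockedMonitors<'a> { guard: RwLockReadGuard<'a, HashMap<Key, Monitor>> }

impl<'a> LockedMonitors<'a> {
    fn iter(&self) -> impl Iterator<Item = (&Key, &Monitor)> + '_ {
        self.guard.iter()
    }
}

impl Monitors {
    // Returning `impl Iterator` directly from here would reference the guard
    // owned by this function; returning the guard wrapper sidesteps that.
    fn lock_all(&self) -> LockedMonitors<'_> {
        LockedMonitors { guard: self.map.read().unwrap() }
    }
}

fn main() {
    let monitors = Monitors { map: RwLock::new(HashMap::new()) };
    monitors.map.write().unwrap().insert(1, Monitor { update_id: 7 });
    monitors.map.write().unwrap().insert(2, Monitor { update_id: 3 });
    let locked = monitors.lock_all();
    let total: u64 = locked.iter().map(|(_, m)| m.update_id).sum();
    assert_eq!(total, 10);
    println!("sum of update_ids: {}", total);
}
```

For the backup use-case, a caller can serialize each monitor while holding the single read lock, rather than paying for a clone of every monitor up front.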

impl<ChannelSigner: WriteableEcdsaChannelSigner, C: Deref, T: Deref, F: Deref, L: Deref, P: Deref>
chain::Listen for ChainMonitor<ChannelSigner, C, T, F, L, P>
where
@@ -284,7 +284,7 @@ impl HolderSignedTx {

/// We use this to track static counterparty commitment transaction data and to generate any
/// justice or 2nd-stage preimage/timeout transactions.
#[derive(PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
struct CounterpartyCommitmentParameters {
counterparty_delayed_payment_base_key: PublicKey,
counterparty_htlc_base_key: PublicKey,
@@ -338,7 +338,7 @@ impl Readable for CounterpartyCommitmentParameters {
/// observed, as well as the transaction causing it.
///
/// Used to determine when the on-chain event can be considered safe from a chain reorganization.
#[derive(PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
struct OnchainEventEntry {
txid: Txid,
height: u32,
@@ -381,7 +381,7 @@ type CommitmentTxCounterpartyOutputInfo = Option<(u32, u64)>;

/// Upon discovering of some classes of onchain tx by ChannelMonitor, we may have to take actions on it
/// once they mature to enough confirmations (ANTI_REORG_DELAY)
#[derive(PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
enum OnchainEvent {
/// An outbound HTLC failing after a transaction is confirmed. Used
/// * when an outbound HTLC output is spent by us after the HTLC timed out
@@ -652,7 +652,7 @@ pub enum Balance {
}

/// An HTLC which has been irrevocably resolved on-chain, and has reached ANTI_REORG_DELAY.
#[derive(PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
struct IrrevocablyResolvedHTLC {
commitment_tx_output_idx: Option<u32>,
/// The txid of the transaction which resolved the HTLC, this may be a commitment (if the HTLC
@@ -725,7 +725,14 @@ pub struct ChannelMonitor<Signer: WriteableEcdsaChannelSigner> {
inner: Mutex<ChannelMonitorImpl<Signer>>,
}

#[derive(PartialEq)]
impl<Signer: WriteableEcdsaChannelSigner> Clone for ChannelMonitor<Signer> where Signer: Clone {
fn clone(&self) -> Self {
let inner = self.inner.lock().unwrap().clone();
ChannelMonitor { inner: Mutex::new(inner) }
}
}
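For context on the manual impl just above: `Mutex<T>` is not itself `Clone`, so `#[derive(Clone)]` cannot be used on a struct that wraps one, and cloning has to lock the inner state and rewrap it in a fresh `Mutex`. A minimal stand-alone sketch of the same pattern, using hypothetical `Wrapper`/`Inner` types rather than LDK's:

```rust
use std::sync::Mutex;

#[derive(Clone, Debug, PartialEq)]
struct Inner { latest_update_id: u64 }

struct Wrapper { inner: Mutex<Inner> }

// Mutex<T> is not Clone, so the impl must lock, clone the protected
// state, and wrap the copy in a brand-new Mutex.
impl Clone for Wrapper {
    fn clone(&self) -> Self {
        let inner = self.inner.lock().unwrap().clone();
        Wrapper { inner: Mutex::new(inner) }
    }
}

fn main() {
    let w = Wrapper { inner: Mutex::new(Inner { latest_update_id: 42 }) };
    let copy = w.clone();
    // The clone is an independent value with its own lock.
    assert_eq!(copy.inner.lock().unwrap().latest_update_id, 42);
}
```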

#[derive(Clone, PartialEq)]
pub(crate) struct ChannelMonitorImpl<Signer: WriteableEcdsaChannelSigner> {
latest_update_id: u64,
commitment_transaction_number_obscure_factor: u64,
@@ -879,6 +886,18 @@ impl<Signer: WriteableEcdsaChannelSigner> PartialEq for ChannelMonitor<Signer> w
}
}

impl<Signer: WriteableEcdsaChannelSigner> PartialOrd for ChannelMonitor<Signer> where Signer: PartialEq {
fn partial_cmp(&self, other: &Self) -> Option<cmp::Ordering> {
Collaborator (@TheBlueMatt, Jun 4, 2023):
This is a super strange API to me - given Ord is generally used to insert into a set, I'd expect the Ord implementation to basically just be over the funding txos, not over the update_id and only defined for a specific monitor.

Instead, we could have a specific method that has a name that makes clear what the behavior here is.

Contributor Author:

yeah I can see this being a bastardization of Ord, will change to a function instead

let inner = self.inner.lock().unwrap();
let other_inner = other.inner.lock().unwrap();
if inner.get_funding_txo() != other_inner.get_funding_txo() {
return None;
}

inner.latest_update_id.partial_cmp(&other_inner.latest_update_id)
}
}
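The named method suggested in the review thread might look like the following stand-alone sketch (hypothetical types and a hypothetical method name, not necessarily what the PR ultimately adopted): the comparison returns `None` for monitors of different channels and otherwise orders by `latest_update_id`, with the intent made explicit by the name rather than an `Ord`/`PartialOrd` impl.

```rust
use std::cmp::Ordering;
use std::sync::Mutex;

// Hypothetical stand-ins for the monitor's funding outpoint and update id.
struct Inner { funding_txo: u32, latest_update_id: u64 }

struct Monitor { inner: Mutex<Inner> }

impl Monitor {
    // Named replacement for the PartialOrd impl: only monitors for the same
    // channel are comparable; ordering is by latest_update_id.
    fn compare_update_id(&self, other: &Monitor) -> Option<Ordering> {
        let a = self.inner.lock().unwrap();
        let b = other.inner.lock().unwrap();
        if a.funding_txo != b.funding_txo {
            return None;
        }
        Some(a.latest_update_id.cmp(&b.latest_update_id))
    }
}

fn main() {
    let m1 = Monitor { inner: Mutex::new(Inner { funding_txo: 1, latest_update_id: 5 }) };
    let m2 = Monitor { inner: Mutex::new(Inner { funding_txo: 1, latest_update_id: 9 }) };
    let m3 = Monitor { inner: Mutex::new(Inner { funding_txo: 2, latest_update_id: 9 }) };
    assert_eq!(m1.compare_update_id(&m2), Some(Ordering::Less));
    // Monitors for different channels are not comparable.
    assert_eq!(m1.compare_update_id(&m3), None);
}
```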

impl<Signer: WriteableEcdsaChannelSigner> Writeable for ChannelMonitor<Signer> {
fn write<W: Writer>(&self, writer: &mut W) -> Result<(), Error> {
self.inner.lock().unwrap().write(writer)
7 changes: 5 additions & 2 deletions lightning/src/chain/onchaintx.rs
@@ -56,7 +56,7 @@ const MAX_ALLOC_SIZE: usize = 64*1024;
/// transaction causing it.
///
/// Used to determine when the on-chain event can be considered safe from a chain reorganization.
#[derive(PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
struct OnchainEventEntry {
txid: Txid,
height: u32,
@@ -76,7 +76,7 @@ impl OnchainEventEntry {

/// Events for claims the [`OnchainTxHandler`] has generated. Once the events are considered safe
/// from a chain reorg, the [`OnchainTxHandler`] will act accordingly.
#[derive(PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
enum OnchainEvent {
/// A pending request has been claimed by a transaction spending the exact same set of outpoints
/// as the request. This claim can either be ours or from the counterparty. Once the claiming
@@ -179,6 +179,7 @@ impl Writeable for Option<Vec<Option<(usize, Signature)>>> {

#[cfg(anchors)]
/// The claim commonly referred to as the pre-signed second-stage HTLC transaction.
#[derive(Clone, PartialEq, Eq)]
pub(crate) struct ExternalHTLCClaim {
pub(crate) commitment_txid: Txid,
pub(crate) per_commitment_number: u64,
@@ -190,6 +191,7 @@ pub(crate) struct ExternalHTLCClaim {
// Represents the different types of claims for which events are yielded externally to satisfy said
// claims.
#[cfg(anchors)]
#[derive(Clone, PartialEq, Eq)]
pub(crate) enum ClaimEvent {
/// Event yielded to signal that the commitment transaction fee must be bumped to claim any
/// encumbered funds and proceed to HTLC resolution, if any HTLCs exist.
@@ -223,6 +225,7 @@ type PackageID = [u8; 32];

/// OnchainTxHandler receives claiming requests, aggregates them if it's sound, broadcast and
/// do RBF bumping if possible.
#[derive(Clone)]
pub struct OnchainTxHandler<ChannelSigner: WriteableEcdsaChannelSigner> {
destination_script: Script,
holder_commitment: HolderCommitmentTransaction,