[0.0.123] Don't attempt to resume channels if we already exchanged funding #3034

Merged
6 changes: 6 additions & 0 deletions lightning/src/ln/channel.rs
@@ -7436,6 +7436,12 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 		Ok(self.get_open_channel(chain_hash))
 	}
 
+	/// Returns true if we can resume the channel by sending the [`msgs::OpenChannel`] again.
+	pub fn is_resumable(&self) -> bool {
+		!self.context.have_received_message() &&
+			self.context.cur_holder_commitment_transaction_number == INITIAL_COMMITMENT_NUMBER
+	}
+

Review comment (Contributor):

Is this method necessary? It seems like have_received_message() should cover it; I don't see how we could be beyond the initial commitment number if we've only just sent our init 🤔 I'm ok with it because it makes the callsite more readable, though.

Reply (Collaborator, author):

Yea, I think they're redundant; I just figured we should do the thing that matches the two panics below exactly rather than trying to avoid the extra check. I also like having the callsites be more readable, so went with a new method :).

 	pub fn get_open_channel(&self, chain_hash: ChainHash) -> msgs::OpenChannel {
 		if !self.context.is_outbound() {
 			panic!("Tried to open a channel for an inbound channel?");
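For context on the "two panics below" mentioned in the thread: a minimal sketch of how `get_open_channel` plausibly continues past the truncated hunk (reconstructed, not part of this diff; panic message text is paraphrased and the message-building body is elided):

		}
		if self.context.have_received_message() {
			// First panic `is_resumable()` mirrors: replaying `open_channel` is only
			// sound if the peer never responded to the original one.
			panic!("Cannot generate an open_channel after we've moved forward");
		}
		if self.context.cur_holder_commitment_transaction_number != INITIAL_COMMITMENT_NUMBER {
			// Second panic: the channel must not have advanced past the initial
			// commitment number.
			panic!("Tried to send an open_channel for a channel that has already advanced");
		}
		// ... construct and return the msgs::OpenChannel ...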
13 changes: 8 additions & 5 deletions lightning/src/ln/channelmanager.rs
@@ -9943,11 +9943,14 @@ where
 				}
 				&mut chan.context
 			},
-			// We retain UnfundedOutboundV1 channel for some time in case
-			// peer unexpectedly disconnects, and intends to reconnect again.
-			ChannelPhase::UnfundedOutboundV1(_) => {
-				return true;
-			},
+			// If we get disconnected and haven't yet committed to a funding
+			// transaction, we can replay the `open_channel` on reconnection, so don't
+			// bother dropping the channel here. However, if we already committed to
+			// the funding transaction we don't yet support replaying the funding
+			// handshake (and bailing if the peer rejects it), so we force-close in
+			// that case.
+			ChannelPhase::UnfundedOutboundV1(chan) if chan.is_resumable() => return true,
+			ChannelPhase::UnfundedOutboundV1(chan) => &mut chan.context,
 			// Unfunded inbound channels will always be removed.
 			ChannelPhase::UnfundedInboundV1(chan) => {
 				&mut chan.context
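The new arms lean on Rust's match guards to split `ChannelPhase::UnfundedOutboundV1` between a retain path and a force-close path. A standalone sketch of the pattern, with hypothetical stand-in types rather than LDK's real ones:

// Standalone sketch of the match-guard dispatch used above. `Phase` and
// `Unfunded` are hypothetical stand-ins, not LDK's real `ChannelPhase`.
struct Unfunded { resumable: bool }

enum Phase {
	UnfundedOutbound(Unfunded),
	Funded,
}

fn retain_on_disconnect(phase: &Phase) -> bool {
	match phase {
		// The guard splits one variant across two arms: resumable channels are
		// kept so `open_channel` can be replayed on reconnect...
		Phase::UnfundedOutbound(chan) if chan.resumable => true,
		// ...while the same variant without the guard falls through to the
		// close path (returning the context, in the real code).
		Phase::UnfundedOutbound(_) => false,
		Phase::Funded => false,
	}
}

fn main() {
	assert!(retain_on_disconnect(&Phase::UnfundedOutbound(Unfunded { resumable: true })));
	assert!(!retain_on_disconnect(&Phase::UnfundedOutbound(Unfunded { resumable: false })));
	assert!(!retain_on_disconnect(&Phase::Funded));
}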
40 changes: 36 additions & 4 deletions lightning/src/ln/functional_tests.rs
@@ -61,6 +61,38 @@ use crate::ln::chan_utils::CommitmentTransaction;
 
 use super::channel::UNFUNDED_CHANNEL_AGE_LIMIT_TICKS;
 
+#[test]
+fn test_channel_resumption_fail_post_funding() {
+	// If we fail to exchange funding with a peer prior to it disconnecting, we'll resume the
+	// channel open on reconnect. However, if we do exchange funding, we do not currently
+	// support replaying it, so here we test that the channel closes.
+	let chanmon_cfgs = create_chanmon_cfgs(2);
+	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+	let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+	let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+	nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 0, 42, None, None).unwrap();
+	let open_chan = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
+	nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan);
+	let accept_chan = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
+	nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &accept_chan);
+
+	let (temp_chan_id, tx, funding_output) =
+		create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 1_000_000, 42);
+	let new_chan_id = ChannelId::v1_from_funding_outpoint(funding_output);
+	nodes[0].node.funding_transaction_generated(&temp_chan_id, &nodes[1].node.get_our_node_id(), tx).unwrap();
+
+	nodes[0].node.peer_disconnected(&nodes[1].node.get_our_node_id());
+	check_closed_events(&nodes[0], &[ExpectedCloseEvent::from_id_reason(new_chan_id, true, ClosureReason::DisconnectedPeer)]);
+
+	// After ddf75afd16 we'd panic on reconnection if we exchanged funding info, so test that
+	// explicitly here.
+	nodes[0].node.peer_connected(&nodes[1].node.get_our_node_id(), &msgs::Init {
+		features: nodes[1].node.init_features(), networks: None, remote_network_address: None
+	}, true).unwrap();
+	assert_eq!(nodes[0].node.get_and_clear_pending_msg_events(), Vec::new());
+}
+
 #[test]
 fn test_insane_channel_opens() {
 	// Stand up a network of 2 nodes
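By contrast, a disconnect before `funding_transaction_generated` should leave the channel resumable. A hypothetical companion check, not part of this PR, sketching that path with the same test helpers (assuming, per the channelmanager comment above, that the retained channel replays `open_channel` on reconnect):

	// Hypothetical companion check (not in this PR): disconnect *before* providing
	// the funding transaction, i.e. right after handle_accept_channel above.
	nodes[0].node.peer_disconnected(&nodes[1].node.get_our_node_id());
	nodes[0].node.peer_connected(&nodes[1].node.get_our_node_id(), &msgs::Init {
		features: nodes[1].node.init_features(), networks: None, remote_network_address: None
	}, true).unwrap();
	// The retained UnfundedOutboundV1 channel replays its `open_channel` rather
	// than generating a ChannelClosed event.
	let _reopen = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());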
@@ -3734,10 +3766,10 @@ fn test_peer_disconnected_before_funding_broadcasted() {
 		nodes[0].node.timer_tick_occurred();
 	}
 
-	// Ensure that the channel is closed with `ClosureReason::HolderForceClosed`
-	// when the peers are disconnected and do not reconnect before the funding
-	// transaction is broadcasted.
-	check_closed_event!(&nodes[0], 2, ClosureReason::HolderForceClosed, true
+	// Ensure that the channel is closed with `ClosureReason::DisconnectedPeer` and a
+	// `DiscardFunding` event when the peers are disconnected and do not reconnect before the
+	// funding transaction is broadcasted.
+	check_closed_event!(&nodes[0], 2, ClosureReason::DisconnectedPeer, true
 		, [nodes[1].node.get_our_node_id()], 1000000);
 	check_closed_event!(&nodes[1], 1, ClosureReason::DisconnectedPeer, false
 		, [nodes[0].node.get_our_node_id()], 1000000);