Async recipient-side of static invoice server #3618

27 changes: 26 additions & 1 deletion fuzz/src/onion_message.rs
@@ -15,7 +15,8 @@ use lightning::ln::peer_handler::IgnoringMessageHandler;
use lightning::ln::script::ShutdownScript;
use lightning::offers::invoice::UnsignedBolt12Invoice;
use lightning::onion_message::async_payments::{
AsyncPaymentsMessageHandler, HeldHtlcAvailable, ReleaseHeldHtlc,
AsyncPaymentsMessageHandler, HeldHtlcAvailable, OfferPaths, OfferPathsRequest, ReleaseHeldHtlc,
ServeStaticInvoice, StaticInvoicePersisted,
};
use lightning::onion_message::messenger::{
CustomOnionMessageHandler, Destination, MessageRouter, MessageSendInstructions,
@@ -124,6 +125,30 @@ impl OffersMessageHandler for TestOffersMessageHandler {
struct TestAsyncPaymentsMessageHandler {}

impl AsyncPaymentsMessageHandler for TestAsyncPaymentsMessageHandler {
fn handle_offer_paths_request(
&self, _message: OfferPathsRequest, _context: AsyncPaymentsContext,
responder: Option<Responder>,
) -> Option<(OfferPaths, ResponseInstruction)> {
let responder = match responder {
Some(resp) => resp,
None => return None,
};
Some((OfferPaths { paths: Vec::new(), paths_absolute_expiry: None }, responder.respond()))
}
fn handle_offer_paths(
&self, _message: OfferPaths, _context: AsyncPaymentsContext, _responder: Option<Responder>,
) -> Option<(ServeStaticInvoice, ResponseInstruction)> {
None
}
fn handle_serve_static_invoice(
&self, _message: ServeStaticInvoice, _context: AsyncPaymentsContext,
_responder: Option<Responder>,
) {
}
fn handle_static_invoice_persisted(
&self, _message: StaticInvoicePersisted, _context: AsyncPaymentsContext,
) {
}
fn handle_held_htlc_available(
&self, _message: HeldHtlcAvailable, _context: AsyncPaymentsContext,
responder: Option<Responder>,
42 changes: 42 additions & 0 deletions lightning/src/blinded_path/message.rs
@@ -23,6 +23,7 @@ use crate::ln::channelmanager::PaymentId;
use crate::ln::msgs::DecodeError;
use crate::ln::onion_utils;
use crate::offers::nonce::Nonce;
use crate::offers::offer::OfferId;
use crate::onion_message::packet::ControlTlvs;
use crate::routing::gossip::{NodeId, ReadOnlyNetworkGraph};
use crate::sign::{EntropySource, NodeSigner, Recipient};
@@ -404,6 +405,40 @@ pub enum OffersContext {
/// [`AsyncPaymentsMessage`]: crate::onion_message::async_payments::AsyncPaymentsMessage
#[derive(Clone, Debug)]
pub enum AsyncPaymentsContext {
/// Context used by a reply path to an [`OfferPathsRequest`], provided back to us as an async
/// recipient in corresponding [`OfferPaths`] messages from the static invoice server.
///
/// [`OfferPathsRequest`]: crate::onion_message::async_payments::OfferPathsRequest
/// [`OfferPaths`]: crate::onion_message::async_payments::OfferPaths
OfferPaths {
/// The time, as a duration since the Unix epoch, at which this path expires and after which
/// messages sent over it should be ignored.
///
/// This avoids the situation where the [`OfferPaths`] message is very delayed and thus
/// outdated.
///
/// [`OfferPaths`]: crate::onion_message::async_payments::OfferPaths
path_absolute_expiry: core::time::Duration,
},
/// Context used by a reply path to a [`ServeStaticInvoice`] message, provided back to us in
/// corresponding [`StaticInvoicePersisted`] messages.
///
/// [`ServeStaticInvoice`]: crate::onion_message::async_payments::ServeStaticInvoice
/// [`StaticInvoicePersisted`]: crate::onion_message::async_payments::StaticInvoicePersisted
StaticInvoicePersisted {
/// The id of the offer in the cache corresponding to the [`StaticInvoice`] that has been
/// persisted. This invoice is now ready to be provided by the static invoice server in response
/// to [`InvoiceRequest`]s, so the corresponding offer can be marked as ready to receive
/// payments.
///
/// [`StaticInvoice`]: crate::offers::static_invoice::StaticInvoice
/// [`InvoiceRequest`]: crate::offers::invoice_request::InvoiceRequest
offer_id: OfferId,
/// The time, as a duration since the Unix epoch, at which this path expires and after which
/// messages sent over it should be ignored. If we receive confirmation of an invoice over this
/// path after its expiry, it may be outdated and a new invoice update should be sent instead.
path_absolute_expiry: core::time::Duration,
Collaborator:

Rather than setting an expiry time, should we not just set the "creation time"? That way we can use it for invoice_confirmed_persisted_at rather than the time we receive the message. If we're worried about messages getting delayed, we need to know whether the message was delayed on its way to the LSP or on the way back. What we really care about, AFAICT, is "when should we update the invoice currently persisted", which means we want invoice_confirmed_persisted_at set appropriately, since that's what we use. Thus we really want to set it based on a field in the context here; and of course if we get a reply back with a context that's older than what we have persisted, we should re-persist, because we have no idea what's persisted (or maybe just use the older value, since that will drive a faster re-persist anyway).

},
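A rough sketch of the alternative floated in the comment above, assuming the context carried a creation time instead of an expiry; the field name invoice_created_at is illustrative and not part of this PR:

StaticInvoicePersisted {
	/// The id of the offer in the cache corresponding to the persisted [`StaticInvoice`].
	offer_id: OfferId,
	/// The time at which we enqueued the corresponding [`ServeStaticInvoice`] message. On receipt
	/// of the confirmation, this value (rather than the time the reply is received) would be
	/// recorded as `invoice_confirmed_persisted_at`; a value older than what is already stored
	/// would still trigger a re-persist, since we then can't be sure which invoice the server holds.
	invoice_created_at: core::time::Duration,
},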
/// Context contained within the reply [`BlindedMessagePath`] we put in outbound
/// [`HeldHtlcAvailable`] messages, provided back to us in corresponding [`ReleaseHeldHtlc`]
/// messages.
@@ -486,6 +521,13 @@ impl_writeable_tlv_based_enum!(AsyncPaymentsContext,
(2, hmac, required),
(4, path_absolute_expiry, required),
},
(2, OfferPaths) => {
(0, path_absolute_expiry, required),
},
(3, StaticInvoicePersisted) => {
(0, offer_id, required),
(2, path_absolute_expiry, required),
},
);

/// Contains a simple nonce for use in a blinded path's context.
143 changes: 140 additions & 3 deletions lightning/src/ln/channelmanager.rs
@@ -86,6 +86,7 @@ use crate::ln::outbound_payment::{
StaleExpiration,
};
use crate::ln::types::ChannelId;
use crate::offers::async_receive_offer_cache::AsyncReceiveOfferCache;
use crate::offers::flow::OffersMessageFlow;
use crate::offers::invoice::{
Bolt12Invoice, DerivedSigningPubkey, InvoiceBuilder, DEFAULT_RELATIVE_EXPIRY,
@@ -98,7 +99,8 @@ use crate::offers::parse::Bolt12SemanticError;
use crate::offers::refund::Refund;
use crate::offers::signer;
use crate::onion_message::async_payments::{
AsyncPaymentsMessage, AsyncPaymentsMessageHandler, HeldHtlcAvailable, ReleaseHeldHtlc,
AsyncPaymentsMessage, AsyncPaymentsMessageHandler, HeldHtlcAvailable, OfferPaths,
OfferPathsRequest, ReleaseHeldHtlc, ServeStaticInvoice, StaticInvoicePersisted,
};
use crate::onion_message::dns_resolution::HumanReadableName;
use crate::onion_message::messenger::{
@@ -5246,6 +5248,30 @@ where
)
}

#[cfg(async_payments)]
fn check_refresh_async_receive_offer_cache(&self, timer_tick_occurred: bool) {
let peers = self.get_peers_for_blinded_path();
let channels = self.list_usable_channels();
let entropy = &*self.entropy_source;
let router = &*self.router;
let refresh_res = self.flow.check_refresh_async_receive_offer_cache(
peers,
channels,
entropy,
router,
timer_tick_occurred,
);
match refresh_res {
Err(()) => {
log_error!(
self.logger,
"Failed to create blinded paths when requesting async receive offer paths"
);
},
Ok(()) => {},
}
}

#[cfg(async_payments)]
fn initiate_async_payment(
&self, invoice: &StaticInvoice, payment_id: PaymentId,
@@ -7240,6 +7266,9 @@ where
duration_since_epoch, &self.pending_events
);

#[cfg(async_payments)]
self.check_refresh_async_receive_offer_cache(true);

// Technically we don't need to do this here, but if we have holding cell entries in a
// channel that need freeing, it's better to do that here and block a background task
// than block the message queueing pipeline.
@@ -10999,9 +11028,29 @@ where
#[cfg(c_bindings)]
create_refund_builder!(self, RefundMaybeWithDerivedMetadataBuilder);

/// Retrieve an [`Offer`] for receiving async payments as an often-offline recipient. Will only
/// return an offer if [`Self::set_paths_to_static_invoice_server`] was called and we succeeded in
/// interactively building a [`StaticInvoice`] with the static invoice server.
///
/// Useful for posting offers to receive payments later, such as posting an offer on a website.
#[cfg(async_payments)]
Contributor:

Definitely looks like too many async payments cfg directives to me.

pub fn get_async_receive_offer(&self) -> Result<Offer, ()> {
let (offer, needs_persist) = self.flow.get_async_receive_offer()?;
if needs_persist {
// We need to re-persist the cache if a fresh offer was just marked as used, to ensure we
// continue to keep this offer's invoice updated and don't replace the offer with a new one
// from the server.
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
}
Ok(offer)
}

/// Create an offer for receiving async payments as an often-offline recipient.
///
/// Because we may be offline when the payer attempts to request an invoice, you MUST:
/// Instead of using this method, it is preferable to call
Collaborator:

Should we just drop these methods now? In general we probably don't expect anyone to call them, and if they're doing something super custom they can call the equivalent methods on the Flow with some pretty trivial boilerplate.

/// [`Self::set_paths_to_static_invoice_server`] and retrieve the automatically built offer via
/// [`Self::get_async_receive_offer`].
///
/// If you want to build the [`StaticInvoice`] manually using this method instead, you MUST:
/// 1. Provide at least 1 [`BlindedMessagePath`] terminating at an always-online node that will
/// serve the [`StaticInvoice`] created from this offer on our behalf.
/// 2. Use [`Self::create_static_invoice_builder`] to create a [`StaticInvoice`] from this
@@ -11018,6 +11067,10 @@ where
/// Creates a [`StaticInvoiceBuilder`] from the corresponding [`Offer`] and [`Nonce`] that were
/// created via [`Self::create_async_receive_offer_builder`]. If `relative_expiry` is unset, the
/// invoice's expiry will default to [`STATIC_INVOICE_DEFAULT_RELATIVE_EXPIRY`].
///
/// Instead of using this method to manually build the invoice, it is preferable to call
/// [`Self::set_paths_to_static_invoice_server`] and retrieve the automatically built offer via
/// [`Self::get_async_receive_offer`].
#[cfg(async_payments)]
pub fn create_static_invoice_builder<'a>(
&self, offer: &'a Offer, offer_nonce: Nonce, relative_expiry: Option<Duration>,
@@ -11053,6 +11106,22 @@ where
)
}

/// Sets the [`BlindedMessagePath`]s that we will use as an async recipient to interactively build
/// [`Offer`]s with a static invoice server, so the server can serve [`StaticInvoice`]s to payers
/// on our behalf when we're offline.
///
/// This method only needs to be called once when the server first takes on the recipient as a
/// client, or when the paths change, e.g. if the paths are set to expire at a particular time.
#[cfg(async_payments)]
pub fn set_paths_to_static_invoice_server(
&self, paths_to_static_invoice_server: Vec<BlindedMessagePath>,
) -> Result<(), ()> {
self.flow.set_paths_to_static_invoice_server(paths_to_static_invoice_server)?;

let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
Ok(())
}
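Taken together with Self::get_async_receive_offer above, a minimal recipient-side usage sketch of these two new methods might look like the following. Here channel_manager and paths_from_server are assumed to be supplied by the application and the async_payments cfg is assumed to be enabled; this is an illustration, not code from the PR.

// One-time setup: configure the blinded paths received out-of-band from the static invoice
// server. Only needs to be repeated if the server hands out new paths, e.g. because the old
// ones are set to expire.
channel_manager.set_paths_to_static_invoice_server(paths_from_server)?;

// Later, once an offer has been interactively built and its invoice persisted by the server,
// retrieve it and publish it wherever payers will find it (e.g. on a website).
match channel_manager.get_async_receive_offer() {
	Ok(offer) => println!("Async receive offer: {}", offer),
	Err(()) => println!("No async receive offer is ready yet"),
}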

/// Pays for an [`Offer`] using the given parameters by creating an [`InvoiceRequest`] and
/// enqueuing it to be sent via an onion message. [`ChannelManager`] will pay the actual
/// [`Bolt12Invoice`] once it is received.
@@ -11960,6 +12029,13 @@ where
return NotifyOption::SkipPersistHandleEvents;
//TODO: Also re-broadcast announcement_signatures
});

// While we usually refresh the AsyncReceiveOfferCache on a timer, we also want to start
// interactively building offers as soon as we can after startup. We can't start building offers
// until we have some peer connection(s) to send onion messages over, so as a minor optimization
Collaborator:

Is this true? If we're sending outbound messages, don't we DC (directly connect) to the blinded path intro point, i.e. once we hit send, the message gets queued until we have a connection (and a connection is initiated immediately if required)? Presumably we should do our initial request in set_paths_to_static_invoice_server rather than on peer-connect (also because it might happen after the first peer-connect). That should let us remove the request_attempts tracking in the cache, too.

// refresh the cache when a peer connects.
#[cfg(async_payments)]
self.check_refresh_async_receive_offer_cache(false);
res
}
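A sketch of the suggestion in the comment above — an alternative, not what this PR currently does: trigger the initial offer-paths request when the server paths are configured, relying on the onion messenger to buffer the message and initiate a connection to the blinded path's introduction node if one is needed. Both methods called here are defined elsewhere in this diff.

#[cfg(async_payments)]
pub fn set_paths_to_static_invoice_server(
	&self, paths_to_static_invoice_server: Vec<BlindedMessagePath>,
) -> Result<(), ()> {
	self.flow.set_paths_to_static_invoice_server(paths_to_static_invoice_server)?;

	// Immediately attempt to start interactively building an offer with the server, rather
	// than waiting for the next peer connection or timer tick.
	self.check_refresh_async_receive_offer_cache(false);

	let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
	Ok(())
}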

@@ -13374,6 +13450,64 @@ where
MR::Target: MessageRouter,
L::Target: Logger,
{
fn handle_offer_paths_request(
&self, _message: OfferPathsRequest, _context: AsyncPaymentsContext,
_responder: Option<Responder>,
) -> Option<(OfferPaths, ResponseInstruction)> {
None
}

fn handle_offer_paths(
&self, _message: OfferPaths, _context: AsyncPaymentsContext, _responder: Option<Responder>,
) -> Option<(ServeStaticInvoice, ResponseInstruction)> {
#[cfg(async_payments)]
{
let responder = match _responder {
Some(responder) => responder,
None => return None,
};
let (serve_static_invoice, reply_context) = match self.flow.handle_offer_paths(
_message,
_context,
responder.clone(),
self.get_peers_for_blinded_path(),
self.list_usable_channels(),
&*self.entropy_source,
&*self.router,
) {
Some((msg, ctx)) => (msg, ctx),
None => return None,
};

// We cached a new pending offer, so persist the cache.
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
Collaborator:

nit: This reads like we're trying to prevent persistence during responder.respond_with_reply_path, which I don't think is intentional:

Suggested change (replace the first line with the second):
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
PersistenceNotifierGuard::notify_on_drop(self);


let response_instructions = responder.respond_with_reply_path(reply_context);
return Some((serve_static_invoice, response_instructions));
}

#[cfg(not(async_payments))]
return None;
}

fn handle_serve_static_invoice(
&self, _message: ServeStaticInvoice, _context: AsyncPaymentsContext,
_responder: Option<Responder>,
) {
}

fn handle_static_invoice_persisted(
&self, _message: StaticInvoicePersisted, _context: AsyncPaymentsContext,
) {
#[cfg(async_payments)]
{
let should_persist = self.flow.handle_static_invoice_persisted(_context);
if should_persist {
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
}
}
}

fn handle_held_htlc_available(
&self, _message: HeldHtlcAvailable, _context: AsyncPaymentsContext,
_responder: Option<Responder>,
@@ -14239,6 +14373,7 @@ where
(15, self.inbound_payment_id_secret, required),
(17, in_flight_monitor_updates, option),
(19, peer_storage_dir, optional_vec),
(21, self.flow.writeable_async_receive_offer_cache(), required),
});

Ok(())
@@ -14818,6 +14953,7 @@ where
let mut decode_update_add_htlcs: Option<HashMap<u64, Vec<msgs::UpdateAddHTLC>>> = None;
let mut inbound_payment_id_secret = None;
let mut peer_storage_dir: Option<Vec<(PublicKey, Vec<u8>)>> = None;
let mut async_receive_offer_cache: AsyncReceiveOfferCache = AsyncReceiveOfferCache::new();
read_tlv_fields!(reader, {
(1, pending_outbound_payments_no_retry, option),
(2, pending_intercepted_htlcs, option),
@@ -14835,6 +14971,7 @@ where
(15, inbound_payment_id_secret, option),
(17, in_flight_monitor_updates, option),
(19, peer_storage_dir, optional_vec),
(21, async_receive_offer_cache, (default_value, async_receive_offer_cache)),
Contributor:

This cache seems far removed from ChannelManager's core responsibilities. Can't it be persisted elsewhere / separately?

I also thought that, to fix the force closures, we wanted to move away from ChannelManager persistence completely and reconstruct everything from the monitors.

Contributor Author:

Yeah, it crossed my mind that this might come up when we address removing ChannelManager persistence :( Will start an offline discussion for this next week; not sure what the best solution is. We could have a separate method to retrieve the cache for persistence. I don't think this should be a blocker since it's an optional field that can easily be removed, but it would be nice to have a path forward.

Contributor:

Agreed that it can be removed easily, and also that it would be nice to have an idea of where to take this eventually.

});
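As a sketch of the "persist it elsewhere / separately" idea from the thread above: the cache is already Writeable (see writeable_async_receive_offer_cache in the serialization code), so, assuming some accessor exposed it to the application, it could be written under its own key instead of inside the ChannelManager blob. This assumes the sync KVStore interface, and the namespace and key names are made up for illustration.

use lightning::io;
use lightning::util::persist::KVStore;
use lightning::util::ser::Writeable;

/// Hypothetical helper: persist the async receive offer cache separately from the
/// ChannelManager, under its own key.
fn persist_offer_cache<K: KVStore, C: Writeable>(kv: &K, cache: &C) -> Result<(), io::Error> {
	// `Writeable::encode` serializes to bytes; the namespaces and key are illustrative only.
	kv.write("async_payments", "", "offer_cache", &cache.encode())
}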
let mut decode_update_add_htlcs = decode_update_add_htlcs.unwrap_or_else(|| new_hash_map());
let peer_storage_dir: Vec<(PublicKey, Vec<u8>)> = peer_storage_dir.unwrap_or_else(Vec::new);
@@ -15521,7 +15658,7 @@ where
chain_hash, best_block, our_network_pubkey,
highest_seen_timestamp, expanded_inbound_key,
secp_ctx.clone(), args.message_router
);
).with_async_payments_offers_cache(async_receive_offer_cache);

let channel_manager = ChannelManager {
chain_hash,
2 changes: 1 addition & 1 deletion lightning/src/ln/inbound_payment.rs
@@ -214,7 +214,7 @@ pub fn create_from_hash(
}

#[cfg(async_payments)]
pub(super) fn create_for_spontaneous_payment(
pub(crate) fn create_for_spontaneous_payment(
keys: &ExpandedKey, min_value_msat: Option<u64>, invoice_expiry_delta_secs: u32,
current_time: u64, min_final_cltv_expiry_delta: Option<u16>,
) -> Result<PaymentSecret, ()> {
23 changes: 22 additions & 1 deletion lightning/src/ln/peer_handler.rs
@@ -31,7 +31,8 @@ use crate::ln::types::ChannelId;
use crate::ln::wire;
use crate::ln::wire::{Encode, Type};
use crate::onion_message::async_payments::{
AsyncPaymentsMessageHandler, HeldHtlcAvailable, ReleaseHeldHtlc,
AsyncPaymentsMessageHandler, HeldHtlcAvailable, OfferPaths, OfferPathsRequest, ReleaseHeldHtlc,
ServeStaticInvoice, StaticInvoicePersisted,
};
use crate::onion_message::dns_resolution::{
DNSResolverMessage, DNSResolverMessageHandler, DNSSECProof, DNSSECQuery,
@@ -212,6 +213,26 @@ impl OffersMessageHandler for IgnoringMessageHandler {
}
}
impl AsyncPaymentsMessageHandler for IgnoringMessageHandler {
fn handle_offer_paths_request(
&self, _message: OfferPathsRequest, _context: AsyncPaymentsContext,
_responder: Option<Responder>,
) -> Option<(OfferPaths, ResponseInstruction)> {
None
}
fn handle_offer_paths(
&self, _message: OfferPaths, _context: AsyncPaymentsContext, _responder: Option<Responder>,
) -> Option<(ServeStaticInvoice, ResponseInstruction)> {
None
}
fn handle_serve_static_invoice(
&self, _message: ServeStaticInvoice, _context: AsyncPaymentsContext,
_responder: Option<Responder>,
) {
}
fn handle_static_invoice_persisted(
&self, _message: StaticInvoicePersisted, _context: AsyncPaymentsContext,
) {
}
fn handle_held_htlc_available(
&self, _message: HeldHtlcAvailable, _context: AsyncPaymentsContext,
_responder: Option<Responder>,