Opening up this issue to seek design feedback on the implementation of bolts/1280. We would like to implement the full mitigation in LDK and run it in “read only” mode:
- Set the accountable signal on HTLCs
- Log the action that the mitigation suggests (forward/fail)
- Take no action on HTLC forwards, so traffic is unaffected
TL;DR
The mitigation can be implemented in a standalone module that implements the trait described below to receive information from `ChannelManager`. By default it will log its forwarding decisions and reputation scores, and we will surface a config option to disable the mitigation.
```rust
pub trait ResourceManager {
	fn add_channel(
		&self, channel_type: &ChannelTypeFeatures, channel_id: u64,
		max_htlc_value_in_flight_msat: u64, max_accepted_htlcs: u16,
	) -> Result<(), ()>;

	fn remove_channel(&self, channel_id: u64) -> Result<(), ()>;

	fn add_htlc(
		&self, incoming_channel_id: u64, incoming_amount_msat: u64, incoming_cltv_expiry: u32,
		outgoing_channel_id: u64, outgoing_amount_msat: u64, incoming_accountable: bool,
		htlc_id: u64, height_added: u32, added_at: u64,
	) -> Result<ForwardingOutcome, ()>;

	fn resolve_htlc(
		&self, incoming_channel_id: u64, htlc_id: u64, settled: bool, resolved_at: u64,
	) -> Result<(), ()>;
}
```
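To illustrate the “read only” mode, here is a hedged sketch of how a logging-only implementation might behave: the suggested outcome is logged but never enforced, so traffic is unaffected. `ForwardingOutcome`’s variants, the struct name, and the toy policy below are all illustrative assumptions, not LDK’s actual types or logic.

```rust
// Hypothetical read-only sketch: log what the mitigation would do, take no action.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ForwardingOutcome {
    Forward,
    Fail,
}

pub struct LoggingResourceManager;

impl LoggingResourceManager {
    // Simplified stand-in for the trait's add_htlc: decide, log, return.
    pub fn add_htlc(
        &self, incoming_channel_id: u64, outgoing_channel_id: u64,
        incoming_accountable: bool,
    ) -> ForwardingOutcome {
        // Toy policy for illustration only; a real implementation would
        // consult reputation scores and bucket occupancy.
        let outcome = if incoming_accountable {
            ForwardingOutcome::Forward
        } else {
            ForwardingOutcome::Fail
        };
        println!(
            "htlc {} -> {}: mitigation suggests {:?} (not enforced)",
            incoming_channel_id, outgoing_channel_id, outcome
        );
        outcome
    }
}
```

In read-only mode the caller forwards the HTLC regardless of the returned outcome; flipping the config option would make the outcome binding.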
Approaches that rely on events or interception were also considered, but ultimately decided against because it’s unlikely (and inadvisable) that people will want to “bring their own” DoS protection.
Implementation Considerations
There are a few design decisions that we’d like to highlight in this work - open to feedback here!
- Single lock on add/remove: our current approach has a single high-level lock that is acquired on HTLC and channel add/remove(s). This seems acceptable because lightning is primarily IO-bound, LDK currently has serial processing in `process_forward_htlcs`, and the operations in the resource manager are lightweight relative to the crypto required to process HTLCs. Happy to pursue a similar locking structure in the resource manager if there are bottleneck concerns!
- O(n) HTLC storage: our proposed design maintains its own view of the current set of in-flight HTLCs, relying on `ChannelManager` to report updates to our state. This information is already available in `Channel`’s pending/outbound_htlcs, but duplicating it only adds ~100 bytes per HTLC, which is considered worthwhile for the sake of a well-abstracted, modular implementation.
- O(n^2) general bucket memory usage and hashing: a portion of each channel’s liquidity and slots is assigned to a “general” bucket which is available regardless of a node’s reputation status. We mimic the behavior of addrman in core:
  - Each outgoing channel is assigned a set of slots/liquidity on the incoming channel that it is allowed to utilize in the general bucket of resources.
  - These slots are decided by salting and hashing the incoming/outgoing channel pair.

  Our design performs this assignment just in time, computing slots for an incoming/outgoing pair only when an HTLC is forwarded. The hashing involved is negligible compared to the cost an adversary would incur to exploit it, which would require onion construction and committing funds to send a payment.
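As a concrete illustration of the just-in-time slot assignment, here is a hedged sketch: a per-node salt and the channel pair are hashed, and the digest selects which of the incoming channel’s general slots the pair may use. The function name, salt handling, hasher choice, and slot counts are all assumptions for illustration, not LDK’s implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical just-in-time slot assignment for the general bucket: hash the
// salted (incoming, outgoing) channel pair to pick slot indices. Computed
// lazily, only when an HTLC is actually forwarded over this pair.
fn general_bucket_slots(
    salt: u64, incoming_channel_id: u64, outgoing_channel_id: u64,
    total_slots: u64, slots_per_pair: u64,
) -> Vec<u64> {
    (0..slots_per_pair)
        .map(|i| {
            let mut hasher = DefaultHasher::new();
            // Salting prevents an adversary from grinding channel ids to
            // collide with a victim's slot assignment.
            (salt, incoming_channel_id, outgoing_channel_id, i).hash(&mut hasher);
            hasher.finish() % total_slots
        })
        .collect()
}
```

Because the assignment is a pure function of the salt and channel pair, it never needs to be stored: the same slots are recomputed on each forward, which is what keeps the O(n^2) pairing out of memory.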
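For the first two considerations, the single-lock, O(n) bookkeeping could look roughly like the sketch below: one coarse `Mutex` guards a map of small per-HTLC records keyed by `(incoming_channel_id, htlc_id)`. All names and the record’s exact fields are hypothetical; the real record would carry the parameters passed to `add_htlc`.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Small fixed-size record per in-flight HTLC (~100 bytes including map
// overhead), mirroring a subset of the add_htlc parameters above.
struct InFlightHtlc {
    incoming_amount_msat: u64,
    incoming_cltv_expiry: u32,
    outgoing_channel_id: u64,
    outgoing_amount_msat: u64,
    incoming_accountable: bool,
    height_added: u32,
    added_at: u64,
}

#[derive(Default)]
pub struct ResourceManagerImpl {
    // Single high-level lock acquired on every add/resolve; the critical
    // section is cheap map bookkeeping, so contention should stay low
    // relative to the crypto cost of processing the HTLC itself.
    htlcs: Mutex<HashMap<(u64, u64), InFlightHtlc>>,
}

impl ResourceManagerImpl {
    pub fn add_htlc(&self, incoming_channel_id: u64, htlc_id: u64, htlc: InFlightHtlc) {
        self.htlcs.lock().unwrap().insert((incoming_channel_id, htlc_id), htlc);
    }

    // Returns false if the HTLC was unknown, e.g. reported twice.
    pub fn resolve_htlc(&self, incoming_channel_id: u64, htlc_id: u64) -> bool {
        self.htlcs.lock().unwrap().remove(&(incoming_channel_id, htlc_id)).is_some()
    }

    pub fn in_flight_count(&self) -> usize {
        self.htlcs.lock().unwrap().len()
    }
}
```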