Correct and update confirmation target constant definitions #3608

Merged
30 changes: 11 additions & 19 deletions lightning/src/chain/channelmonitor.rs
@@ -223,16 +223,20 @@ impl_writeable_tlv_based!(HTLCUpdate, {
/// transaction.
pub(crate) const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

/// When we go to force-close a channel because an HTLC is expiring, we should ensure that the
/// HTLC(s) expiring are not considered pinnable, allowing us to aggregate them with other HTLC(s)
/// expiring at the same time.
const _: () = assert!(CLTV_CLAIM_BUFFER > COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE);
/// When we go to force-close a channel because an HTLC is expiring, by the time we've gotten the
/// commitment transaction confirmed, we should ensure that the HTLC(s) expiring are not considered
/// pinnable, allowing us to aggregate them with other HTLC(s) expiring at the same time.
const _: () = assert!(MAX_BLOCKS_FOR_CONF > COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE);

/// The upper bound on how many blocks we think it can take for us to get a transaction confirmed.
pub(crate) const MAX_BLOCKS_FOR_CONF: u32 = 18;

/// If an HTLC expires within this many blocks, force-close the channel to broadcast the
/// HTLC-Success transaction.
Comment on lines 234 to 235
Contributor:

"HTLC-Success transaction" phrasing seems to suggest this const is only used in the context of channels with inbound HTLC(s) where we have the preimage. But it seems to be used for inbounds where we don't have the preimage as well, and/or other contexts? I wonder if this could be clarified?

Collaborator (author):

Not directly? It's only used directly as !htlc_outbound && htlc.cltv_expiry <= height + CLTV_CLAIM_BUFFER && self.payment_preimages.contains_key(&htlc.payment_hash), and indirectly to calculate HTLC_FAIL_BACK_BUFFER, which is the same concept.
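
For reference, here is a minimal standalone sketch of the two uses described above. The direct check is paraphrased from the diff further down in this file; the HTLC_FAIL_BACK_BUFFER derivation and the LATENCY_GRACE_PERIOD_BLOCKS value are assumed from the surrounding codebase and are not shown in this diff.

```rust
// Illustrative sketch only, not code from this PR.
const MAX_BLOCKS_FOR_CONF: u32 = 18;
const CLTV_CLAIM_BUFFER: u32 = MAX_BLOCKS_FOR_CONF * 2;
// Assumed value; LATENCY_GRACE_PERIOD_BLOCKS is defined elsewhere in channelmonitor.rs.
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;
// Indirect use: HTLC_FAIL_BACK_BUFFER is assumed to be derived from CLTV_CLAIM_BUFFER.
const HTLC_FAIL_BACK_BUFFER: u32 = CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS;

// Direct use, paraphrased from the check shown in the diff below: go on chain for an
// inbound HTLC only when we know the preimage and its expiry is within the buffer.
fn should_claim_inbound_onchain(cltv_expiry: u32, height: u32, have_preimage: bool) -> bool {
    have_preimage && cltv_expiry <= height + CLTV_CLAIM_BUFFER
}

fn main() {
    assert_eq!(HTLC_FAIL_BACK_BUFFER, 39);
    assert!(should_claim_inbound_onchain(1_000, 970, true)); // 1_000 <= 970 + 36
    assert!(!should_claim_inbound_onchain(1_000, 960, true)); // 1_000 > 960 + 36
}
```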

/// In other words, this is an upper bound on how many blocks we think it can take us to get a
/// transaction confirmed (and we use it in a few more, equivalent, places).
pub(crate) const CLTV_CLAIM_BUFFER: u32 = 18;
///
/// This is two times [`MAX_BLOCKS_FOR_CONF`] as we need to first get the commitment transaction
/// confirmed, then get an HTLC transaction confirmed.
pub(crate) const CLTV_CLAIM_BUFFER: u32 = MAX_BLOCKS_FOR_CONF * 2;
/// Number of blocks by which point we expect our counterparty to have seen new blocks on the
/// network and done a full update_fail_htlc/commitment_signed dance (+ we've updated all our
/// copies of ChannelMonitors, including watchtowers). We could enforce the contract by failing
@@ -4511,18 +4515,6 @@ impl<Signer: EcdsaChannelSigner> ChannelMonitorImpl<Signer> {
// chain when our counterparty is waiting for expiration to off-chain fail an HTLC
// we give ourselves a few blocks of headroom after expiration before going
// on-chain for an expired HTLC.
// Note that, to avoid a potential attack whereby a node delays claiming an HTLC
// from us until we've reached the point where we go on-chain with the
// corresponding inbound HTLC, we must ensure that outbound HTLCs go on chain at
// least CLTV_CLAIM_BUFFER blocks prior to the inbound HTLC.
// aka outbound_cltv + LATENCY_GRACE_PERIOD_BLOCKS == height - CLTV_CLAIM_BUFFER
// inbound_cltv == height + CLTV_CLAIM_BUFFER
// outbound_cltv + LATENCY_GRACE_PERIOD_BLOCKS + CLTV_CLAIM_BUFFER <= inbound_cltv - CLTV_CLAIM_BUFFER
// LATENCY_GRACE_PERIOD_BLOCKS + 2*CLTV_CLAIM_BUFFER <= inbound_cltv - outbound_cltv
// CLTV_EXPIRY_DELTA <= inbound_cltv - outbound_cltv (by check in ChannelManager::decode_update_add_htlc_onion)
// LATENCY_GRACE_PERIOD_BLOCKS + 2*CLTV_CLAIM_BUFFER <= CLTV_EXPIRY_DELTA
// The final, above, condition is checked for statically in channelmanager
// with CHECK_CLTV_EXPIRY_SANITY_2.
let htlc_outbound = $holder_tx == htlc.offered;
if ( htlc_outbound && htlc.cltv_expiry + LATENCY_GRACE_PERIOD_BLOCKS <= height) ||
(!htlc_outbound && htlc.cltv_expiry <= height + CLTV_CLAIM_BUFFER && self.payment_preimages.contains_key(&htlc.payment_hash)) {
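Taken together, the new constants in this file encode a simple worst-case on-chain timeline: one MAX_BLOCKS_FOR_CONF window to confirm the commitment transaction and a second to confirm the HTLC claim spending it. A small illustrative sketch follows (not code from the PR; the broadcast height is hypothetical):

```rust
// Worst-case on-chain claim timeline implied by the constants above.
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;
const MAX_BLOCKS_FOR_CONF: u32 = 18;
const CLTV_CLAIM_BUFFER: u32 = MAX_BLOCKS_FOR_CONF * 2;

// An HTLC that triggered the force-close stops being pinnable before the
// commitment transaction can be expected to confirm.
const _: () = assert!(MAX_BLOCKS_FOR_CONF > COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE);

fn main() {
    let broadcast_height = 800_000u32; // hypothetical force-close height
    let commitment_confirmed_by = broadcast_height + MAX_BLOCKS_FOR_CONF;
    let htlc_claim_confirmed_by = commitment_confirmed_by + MAX_BLOCKS_FOR_CONF;
    // The full two-transaction path fits exactly within CLTV_CLAIM_BUFFER blocks.
    assert_eq!(htlc_claim_confirmed_by - broadcast_height, CLTV_CLAIM_BUFFER);
}
```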
19 changes: 19 additions & 0 deletions lightning/src/ln/async_payments_tests.rs
@@ -540,6 +540,13 @@ fn async_receive_mpp() {
create_announced_chan_between_nodes(&nodes, 0, 2);
create_unannounced_chan_between_nodes_with_value(&nodes, 1, 3, 1_000_000, 0);
create_unannounced_chan_between_nodes_with_value(&nodes, 2, 3, 1_000_000, 0);

// Ensure all nodes start at the same height.
connect_blocks(&nodes[0], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[0].best_block_info().1);
connect_blocks(&nodes[1], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[1].best_block_info().1);
connect_blocks(&nodes[2], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[2].best_block_info().1);
connect_blocks(&nodes[3], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[3].best_block_info().1);

let (offer, static_invoice) = create_static_invoice(&nodes[1], &nodes[3], None, &secp_ctx);

// In other tests we hardcode the sender's random bytes so we can predict the keysend preimage to
@@ -622,6 +629,12 @@ fn amount_doesnt_match_invreq() {
create_unannounced_chan_between_nodes_with_value(&nodes, 1, 3, 1_000_000, 0);
create_unannounced_chan_between_nodes_with_value(&nodes, 2, 3, 1_000_000, 0);

// Ensure all nodes start at the same height.
connect_blocks(&nodes[0], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[0].best_block_info().1);
connect_blocks(&nodes[1], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[1].best_block_info().1);
connect_blocks(&nodes[2], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[2].best_block_info().1);
connect_blocks(&nodes[3], 4 * CHAN_CONFIRM_DEPTH + 1 - nodes[3].best_block_info().1);

let (offer, static_invoice) = create_static_invoice(&nodes[1], &nodes[3], None, &secp_ctx);

// Set the random bytes so we can predict the payment preimage and hash.
@@ -815,9 +828,15 @@ fn invalid_async_receive_with_retry<F1, F2>(
let node_chanmgrs =
create_node_chanmgrs(3, &node_cfgs, &[None, Some(allow_priv_chan_fwds_cfg), None]);
let nodes = create_network(3, &node_cfgs, &node_chanmgrs);

create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 0);
create_unannounced_chan_between_nodes_with_value(&nodes, 1, 2, 1_000_000, 0);

// Ensure all nodes start at the same height.
connect_blocks(&nodes[0], 2 * CHAN_CONFIRM_DEPTH + 1 - nodes[0].best_block_info().1);
connect_blocks(&nodes[1], 2 * CHAN_CONFIRM_DEPTH + 1 - nodes[1].best_block_info().1);
connect_blocks(&nodes[2], 2 * CHAN_CONFIRM_DEPTH + 1 - nodes[2].best_block_info().1);

let blinded_paths_to_always_online_node = nodes[1]
.message_router
.create_blinded_paths(
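The same "ensure all nodes start at the same height" pattern recurs throughout this file and in the blinded payment tests below. If it were ever factored out, a helper along these lines would do; this is a hypothetical sketch using the existing test utilities, and the function name and slice-of-nodes signature are assumptions, not part of this PR:

```rust
// Hypothetical helper (not part of this PR): bring every test node up to the same
// block height so CLTV/timeout reasoning is uniform across nodes. Assumes the
// existing `Node` harness, `connect_blocks`, and `best_block_info` test utilities.
fn sync_node_heights(nodes: &[Node<'_, '_, '_>], target_height: u32) {
    for node in nodes {
        let cur_height = node.best_block_info().1;
        assert!(target_height >= cur_height, "no node may already be past the target");
        connect_blocks(node, target_height - cur_height);
    }
}
```

With such a helper, the four calls above would collapse to something like sync_node_heights(&nodes, 4 * CHAN_CONFIRM_DEPTH + 1).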
20 changes: 20 additions & 0 deletions lightning/src/ln/blinded_payment_tests.rs
@@ -204,6 +204,12 @@ fn mpp_to_one_hop_blinded_path() {
let chan_upd_1_3 = create_announced_chan_between_nodes(&nodes, 1, 3).0.contents;
create_announced_chan_between_nodes(&nodes, 2, 3).0.contents;

// Ensure all nodes start at the same height.
connect_blocks(&nodes[0], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[0].best_block_info().1);
connect_blocks(&nodes[1], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[1].best_block_info().1);
connect_blocks(&nodes[2], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[2].best_block_info().1);
connect_blocks(&nodes[3], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[3].best_block_info().1);

let amt_msat = 15_000_000;
let (payment_preimage, payment_hash, payment_secret) = get_payment_preimage_hash(&nodes[3], Some(amt_msat), None);
let payee_tlvs = UnauthenticatedReceiveTlvs {
@@ -267,6 +273,14 @@ fn mpp_to_three_hop_blinded_paths() {
let chan_upd_3_5 = create_announced_chan_between_nodes(&nodes, 3, 5).0.contents;
let chan_upd_4_5 = create_announced_chan_between_nodes(&nodes, 4, 5).0.contents;

// Start every node on the same block height to make reasoning about timeouts easier
connect_blocks(&nodes[0], 6*CHAN_CONFIRM_DEPTH + 1 - nodes[0].best_block_info().1);
connect_blocks(&nodes[1], 6*CHAN_CONFIRM_DEPTH + 1 - nodes[1].best_block_info().1);
connect_blocks(&nodes[2], 6*CHAN_CONFIRM_DEPTH + 1 - nodes[2].best_block_info().1);
connect_blocks(&nodes[3], 6*CHAN_CONFIRM_DEPTH + 1 - nodes[3].best_block_info().1);
connect_blocks(&nodes[4], 6*CHAN_CONFIRM_DEPTH + 1 - nodes[4].best_block_info().1);
connect_blocks(&nodes[5], 6*CHAN_CONFIRM_DEPTH + 1 - nodes[5].best_block_info().1);

let amt_msat = 15_000_000;
let (payment_preimage, payment_hash, payment_secret) = get_payment_preimage_hash(&nodes[5], Some(amt_msat), None);
let route_params = {
@@ -1070,6 +1084,12 @@ fn blinded_path_retries() {
let chan_1_3 = create_announced_chan_between_nodes_with_value(&nodes, 1, 3, 1_000_000, 0);
let chan_2_3 = create_announced_chan_between_nodes_with_value(&nodes, 2, 3, 1_000_000, 0);

// Ensure all nodes start at the same height.
connect_blocks(&nodes[0], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[0].best_block_info().1);
connect_blocks(&nodes[1], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[1].best_block_info().1);
connect_blocks(&nodes[2], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[2].best_block_info().1);
connect_blocks(&nodes[3], 4*CHAN_CONFIRM_DEPTH + 1 - nodes[3].best_block_info().1);

let amt_msat = 5000;
let (_, payment_hash, payment_secret) = get_payment_preimage_hash(&nodes[3], Some(amt_msat), None);
let route_params = {
49 changes: 32 additions & 17 deletions lightning/src/ln/channelmanager.rs
@@ -40,7 +40,7 @@ use crate::blinded_path::payment::{AsyncBolt12OfferContext, BlindedPaymentPath,
use crate::chain;
use crate::chain::{Confirm, ChannelMonitorUpdateStatus, Watch, BestBlock};
use crate::chain::chaininterface::{BroadcasterInterface, ConfirmationTarget, FeeEstimator, LowerBoundedFeeEstimator};
use crate::chain::channelmonitor::{Balance, ChannelMonitor, ChannelMonitorUpdate, WithChannelMonitor, ChannelMonitorUpdateStep, HTLC_FAIL_BACK_BUFFER, CLTV_CLAIM_BUFFER, LATENCY_GRACE_PERIOD_BLOCKS, ANTI_REORG_DELAY, MonitorEvent};
use crate::chain::channelmonitor::{Balance, ChannelMonitor, ChannelMonitorUpdate, WithChannelMonitor, ChannelMonitorUpdateStep, HTLC_FAIL_BACK_BUFFER, MAX_BLOCKS_FOR_CONF, CLTV_CLAIM_BUFFER, LATENCY_GRACE_PERIOD_BLOCKS, ANTI_REORG_DELAY, MonitorEvent};
use crate::chain::transaction::{OutPoint, TransactionData};
use crate::events::{self, Event, EventHandler, EventsProvider, InboundChannelFunds, ClosureReason, HTLCDestination, PaymentFailureReason, ReplayEvent};
// Since this struct is returned in `list_channels` methods, expose it here in case users want to
@@ -2824,7 +2824,7 @@ pub const BREAKDOWN_TIMEOUT: u16 = 6 * 24;
pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;

/// The minimum number of blocks between an inbound HTLC's CLTV and the corresponding outbound
/// HTLC's CLTV. The current default represents roughly seven hours of blocks at six blocks/hour.
/// HTLC's CLTV. The current default represents roughly eight hours of blocks at six blocks/hour.
///
/// This can be increased (but not decreased) through [`ChannelConfig::cltv_expiry_delta`]
///
@@ -2833,7 +2833,7 @@ pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;
// i.e. the node we forwarded the payment on to should always have enough room to reliably time out
// the HTLC via a full update_fail_htlc/commitment_signed dance before we hit the
// CLTV_CLAIM_BUFFER point (we static assert that it's at least 3 blocks more).
pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6*7;
pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6*8;
// This should be long enough to allow a payment path drawn across multiple routing hops with substantial
// `cltv_expiry_delta`. Indeed, the length of those values is the reaction delay offered to a routing node
// in case of HTLC on-chain settlement. While appearing less competitive, a node operator could decide to
@@ -2850,19 +2850,34 @@ pub(super) const CLTV_FAR_FAR_AWAY: u32 = 14 * 24 * 6;
// a payment was being routed, so we add an extra block to be safe.
pub const MIN_FINAL_CLTV_EXPIRY_DELTA: u16 = HTLC_FAIL_BACK_BUFFER as u16 + 3;

// Check that our CLTV_EXPIRY is at least CLTV_CLAIM_BUFFER + ANTI_REORG_DELAY + LATENCY_GRACE_PERIOD_BLOCKS,
// ie that if the next-hop peer fails the HTLC within
// LATENCY_GRACE_PERIOD_BLOCKS then we'll still have CLTV_CLAIM_BUFFER left to timeout it onchain,
// then waiting ANTI_REORG_DELAY to be reorg-safe on the outbound HLTC and
// failing the corresponding htlc backward, and us now seeing the last block of ANTI_REORG_DELAY before
// LATENCY_GRACE_PERIOD_BLOCKS.
#[allow(dead_code)]
const CHECK_CLTV_EXPIRY_SANITY: u32 = MIN_CLTV_EXPIRY_DELTA as u32 - LATENCY_GRACE_PERIOD_BLOCKS - CLTV_CLAIM_BUFFER - ANTI_REORG_DELAY - LATENCY_GRACE_PERIOD_BLOCKS;
// Check that our MIN_CLTV_EXPIRY_DELTA gives us enough time to get everything on chain and locked
// in with enough time left to fail the corresponding HTLC back to our inbound edge before they
// force-close on us.
// In other words, if the next-hop peer fails HTLC LATENCY_GRACE_PERIOD_BLOCKS after our
// CLTV_CLAIM_BUFFER (because that's how many blocks we allow them after expiry), we'll still have
Contributor:

s/after expiry/before expiry?

Collaborator (author):

No, after. We force-close a channel several blocks after an HTLC expires.
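
To make the "after expiry" point concrete, here is a small illustrative sketch of the outbound-HTLC force-close trigger shown earlier in this PR's channelmonitor.rs diff; the LATENCY_GRACE_PERIOD_BLOCKS value is an assumption taken from the current codebase:

```rust
// Illustrative timing of the "after expiry" point above: an outbound HTLC only
// triggers a force-close once the grace period past its expiry has elapsed.
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3; // assumed value

fn force_close_due_to_outbound_expiry(cltv_expiry: u32, height: u32) -> bool {
    cltv_expiry + LATENCY_GRACE_PERIOD_BLOCKS <= height
}

fn main() {
    assert!(!force_close_due_to_outbound_expiry(1_000, 1_002)); // still within the grace period
    assert!(force_close_due_to_outbound_expiry(1_000, 1_003)); // 3 blocks past expiry: go on chain
}
```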

// 2*MAX_BLOCKS_FOR_CONF + ANTI_REORG_DELAY left to get two transactions on chain and the second
// fully locked in before the peer force-closes on us (LATENCY_GRACE_PERIOD_BLOCKS before the
Contributor:

suggestion:

Suggested change:
- // fully locked in before the peer force-closes on us (LATENCY_GRACE_PERIOD_BLOCKS before the
+ // fully locked in before the inbound peer force-closes on us (LATENCY_GRACE_PERIOD_BLOCKS before the

Collaborator (author):

That reads as kind of confusing to me? It makes it sound like the channel was inbound from the peer, rather than that the HTLC was relayed inbound from the peer.

// expiry, i.e. assuming the peer force-closes right at the expiry and we're behind by
// LATENCY_GRACE_PERIOD_BLOCKS).
const _CHECK_CLTV_EXPIRY_SANITY: () = assert!(
MIN_CLTV_EXPIRY_DELTA as u32 >= 2*LATENCY_GRACE_PERIOD_BLOCKS + 2*MAX_BLOCKS_FOR_CONF + ANTI_REORG_DELAY
);

// Check that our MIN_CLTV_EXPIRY_DELTA gives us enough time to get the HTLC preimage back to our
// counterparty if the outbound edge gives us the preimage only one block before we'd force-close
// the channel.
// ie they provide the preimage LATENCY_GRACE_PERIOD_BLOCKS - 1 after the HTLC expires, then we
// pass the preimage back, which takes LATENCY_GRACE_PERIOD_BLOCKS to complete, and we want to make
// sure this all happens at least N blocks before the inbound HTLC expires (where N is the
// counterparty's CLTV_CLAIM_BUFFER or equivalent).
const _ASSUMED_COUNTERPARTY_CLTV_CLAIM_BUFFER: u32 = 6 * 6;

// Check for ability of an attacker to make us fail on-chain by delaying an HTLC claim. See
// ChannelMonitor::should_broadcast_holder_commitment_txn for a description of why this is needed.
#[allow(dead_code)]
const CHECK_CLTV_EXPIRY_SANITY_2: u32 = MIN_CLTV_EXPIRY_DELTA as u32 - LATENCY_GRACE_PERIOD_BLOCKS - 2*CLTV_CLAIM_BUFFER;
const _CHECK_COUNTERPARTY_REALISTIC: () =
assert!(_ASSUMED_COUNTERPARTY_CLTV_CLAIM_BUFFER >= CLTV_CLAIM_BUFFER);

const _CHECK_CLTV_EXPIRY_OFFCHAIN: () = assert!(
MIN_CLTV_EXPIRY_DELTA as u32 >= 2*LATENCY_GRACE_PERIOD_BLOCKS - 1 + _ASSUMED_COUNTERPARTY_CLTV_CLAIM_BUFFER
);

/// The number of ticks of [`ChannelManager::timer_tick_occurred`] until expiry of incomplete MPPs
pub(crate) const MPP_TIMEOUT_TICKS: u8 = 3;
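
Plugging concrete values into the assertions above shows how tight the on-chain bound now is. The sketch below is standalone and illustrative only; LATENCY_GRACE_PERIOD_BLOCKS = 3 and ANTI_REORG_DELAY = 6 are assumed from the current codebase, since their definitions are not part of this diff.

```rust
// Standalone restatement of the CLTV sanity checks with concrete numbers.
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3; // assumed
const ANTI_REORG_DELAY: u32 = 6; // assumed
const MAX_BLOCKS_FOR_CONF: u32 = 18;
const CLTV_CLAIM_BUFFER: u32 = MAX_BLOCKS_FOR_CONF * 2; // 36
const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 8; // 48
const _ASSUMED_COUNTERPARTY_CLTV_CLAIM_BUFFER: u32 = 6 * 6; // 36

fn main() {
    // On-chain path: 2*3 + 2*18 + 6 = 48, so the new delta of 48 holds with zero slack
    // (the previous delta of 6*7 = 42 would not satisfy this inequality).
    assert!(MIN_CLTV_EXPIRY_DELTA as u32
        >= 2 * LATENCY_GRACE_PERIOD_BLOCKS + 2 * MAX_BLOCKS_FOR_CONF + ANTI_REORG_DELAY);
    // Off-chain path: 2*3 - 1 + 36 = 41 <= 48, leaving seven blocks of headroom.
    assert!(MIN_CLTV_EXPIRY_DELTA as u32
        >= 2 * LATENCY_GRACE_PERIOD_BLOCKS - 1 + _ASSUMED_COUNTERPARTY_CLTV_CLAIM_BUFFER);
    // The assumed counterparty claim buffer must be at least as large as our own.
    assert!(_ASSUMED_COUNTERPARTY_CLTV_CLAIM_BUFFER >= CLTV_CLAIM_BUFFER);
}
```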
@@ -15979,15 +15994,15 @@ mod tests {
let current_height: u32 = node[0].node.best_block.read().unwrap().height;
let result = create_recv_pending_htlc_info(msgs::InboundOnionPayload::Receive(msgs::InboundOnionReceivePayload {
sender_intended_htlc_amt_msat: 100,
cltv_expiry_height: 22,
cltv_expiry_height: TEST_FINAL_CLTV,
payment_metadata: None,
keysend_preimage: None,
payment_data: Some(msgs::FinalOnionHopData {
payment_secret: PaymentSecret([0; 32]),
total_msat: 100,
}),
custom_tlvs: Vec::new(),
}), [0; 32], PaymentHash([0; 32]), 100, 23, None, true, None, current_height);
}), [0; 32], PaymentHash([0; 32]), 100, TEST_FINAL_CLTV + 1, None, true, None, current_height);

// Should not return an error as this condition:
// https://github.com/lightning/bolts/blob/4dcc377209509b13cf89a4b91fde7d478f5b46d8/04-onion-routing.md?plain=1#L334