Define anchor channel reserve requirements #3487
Conversation
Force-pushed from a74fcb8 to 6e59174 (Compare)
Great work getting this started, this has been a major pain point when enabling anchor outputs.
// - 72 bytes for the signature
// - 1 byte for the public key length
// - 33 bytes for the public key
const P2WPKH_INPUT_WEIGHT: u64 = (36 + 4 + 1) * WITNESS_SCALE_FACTOR as u64 + (1 + 1 + 72 + 1 + 33);
We have this and some of the other estimates in a few places around the codebase; it'd be nice to just have one copy of each and reuse them.
Definitely agree, it would be great to unify these across the codebase. They are currently a bit spread out, and some seem to estimate a lower bound, even though they use a 73 byte signature length. Perhaps I can take a look at refactoring this as a separate PR?
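For illustration, a minimal sketch of what such a shared module of spend-weight constants might look like (layout and names are illustrative, not LDK's actual structure, and WITNESS_SCALE_FACTOR is inlined here rather than imported from the `bitcoin` crate):
// One shared definition per input type, so the estimates around the codebase can reuse them.
const WITNESS_SCALE_FACTOR: u64 = 4;

/// Non-witness bytes (36-byte outpoint + 4-byte nSequence + 1-byte empty script_sig length)
/// are weighted by the scale factor; witness bytes (item count, signature length, 72-byte
/// ECDSA signature, key length, 33-byte public key) count once.
pub const P2WPKH_INPUT_WEIGHT: u64 =
	(36 + 4 + 1) * WITNESS_SCALE_FACTOR + (1 + 1 + 72 + 1 + 33);

/// Same non-witness bytes, with a witness of item count, signature length, and a 64-byte
/// Schnorr signature for a key-path spend.
pub const P2TR_KEYPATH_INPUT_WEIGHT: u64 =
	(36 + 4 + 1) * WITNESS_SCALE_FACTOR + (1 + 1 + 64);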
	pub expected_accepted_htlcs: u16,
	/// Whether the wallet providing the anchor channel reserve uses Taproot P2TR outputs for its
	/// funds, or Segwit P2WPKH outputs otherwise.
	pub taproot_wallet: bool,
What about wallets that have both output types while they're still migrating to P2TR only?
I modified it so the `satisfaction_weight`s of the `Utxo`s are always taken into account, replacing the weight of one input in the multi-stage transactions. This parameter can then be considered the output type of newly created (change) outputs only.
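As a rough sketch of that adjustment (function and parameter names are hypothetical, not LDK's API): the estimate built with one default-type input has that input's weight swapped out for the actual `satisfaction_weight` of the selected `Utxo`.
// Hypothetical helper illustrating the substitution described above.
fn weight_with_actual_input(
	estimate_with_default_input: u64, default_input_weight: u64, utxo_satisfaction_weight: u64,
) -> u64 {
	estimate_with_default_input - default_input_weight + utxo_satisfaction_weight
}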
pub fn get_reserve_per_channel(context: &AnchorChannelReserveContext) -> Amount {
	let weight = Weight::from_wu(
		COMMITMENT_TRANSACTION_BASE_WEIGHT +
		// Reserves are calculated assuming each accepted HTLC is forwarded as the upper bound.
IIUC, the forwarded HTLC will be in a different channel though, so isn't this overestimating by 2x when looking at all channels in aggregate? Perhaps it's better to describe this as accounting for N outbound and N inbound HTLCs per channel?
The math throughout seems correct but the comments are a bit confusing.
Yeah, I think we're describing the same situation: there will be N inbound/accepted HTLCs per channel which will be forwarded to other channels. This will result in N outbound/offered HTLCs per channel in aggregate.
I cleaned up the comments a bit but let me know if there is a better phrasing. I mainly wanted to emphasize the relationship of the parameter with `ChannelHandshakeConfig::our_max_accepted_htlcs`.
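For concreteness, a hedged sketch of the per-channel upper bound being described: N accepted HTLCs, each assumed to be forwarded, contribute N HTLC-success and N HTLC-timeout transactions on top of the commitment transaction (names here are illustrative, not the PR's actual code):
// Illustrative only; the actual calculation lives in get_reserve_per_channel.
fn per_channel_weight_upper_bound(
	commitment_tx_base_weight: u64, htlc_success_tx_weight: u64, htlc_timeout_tx_weight: u64,
	expected_accepted_htlcs: u64,
) -> u64 {
	commitment_tx_base_weight
		+ (htlc_success_tx_weight + htlc_timeout_tx_weight) * expected_accepted_htlcs
}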
			anchor_channels_with_balance.insert(channel_id);
		}
	}
	// Count channels that are in the middle of negotiation as well.
What about those that have been negotiated but don't have a monitor yet?
Ah you're right. Added accounting for channels in between negotiation and signing as well.
	if channel_monitor.channel_type_features().supports_anchors_zero_fee_htlc_tx()
		&& !channel_monitor.get_claimable_balances().is_empty()
	{
		anchor_channels_with_balance.insert(channel_id);
We should only have one monitor per channel so the hash set seems redundant?
I assumed that with splicing, there would be multiple monitors per channel as the monitors seemed to be stored per `OutPoint`. Let me know if I'm wrong there though. It might still be useful to resolve channels between `ChannelManager` and `ChannelMonitor`s now.
Concept ACK.
I was just discussing with @TheBlueMatt the need to document best practices for UTXO reserves, and here is a PR that actually calculates some reasonable values. Amazing!
	// We require disjoint sets of UTXOs for the reserve of each channel,
	// as claims are only aggregated per channel currently.
	//
	// UTXOs larger than the required reserve are a singleton disjoint set.
	// A disjoint set of fractional UTXOs could overcontribute by any amount less than the
	// required reserve, approaching double the reserve.
	//
	// Note that for the fractional UTXOs, this is an approximation as we can't efficiently calculate
	// a worst-case coin selection as an NP-complete problem.
	num_whole_utxos + total_fractional_amount.to_sat() / reserve_per_channel.to_sat() / 2
This calculation assumes that all HTLCs for each channel are claimed in a single transaction, which is probably reasonable most of the time.
But do note that differing CLTVs currently prevent HTLC claim aggregation and can cause significantly more UTXOs to be needed in the worst case. A malicious counterparty can certainly force this worst case in an effort to exhaust UTXOs and steal funds.
If we assume each HTLC claim takes X blocks to confirm, we would need at least X UTXOs per channel to handle the worst case. Perhaps we should add an `expected_tx_confirmation_delay` parameter to the context, which indicates how long we expect transactions to be in the mempool before they confirm.
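Illustrative arithmetic for that worst case (a hypothetical helper, not part of this PR): with only `utxos` reserve UTXOs available to a channel, at most that many un-aggregatable claims can be in flight at once, so confirming `num_claims` claims takes roughly ceil(num_claims / utxos) confirmation rounds.
fn blocks_to_confirm_all_claims(num_claims: u64, utxos: u64, confirmation_delay_blocks: u64) -> u64 {
	// Each round keeps `utxos` claims pending until they confirm `confirmation_delay_blocks` later.
	let utxos = utxos.max(1);
	((num_claims + utxos - 1) / utxos) * confirmation_delay_blocks
}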
Definitely agree! The CLTV expiries limit the aggregation of HTLC transactions, and the number of UTXOs constrains the concurrency of the claim transactions. Combined with a confirmation time of more than 1 block, this can result in delays in getting everything on-chain.
I actually had this as a parameter during initial calculations, but ended up not including it for a number of reasons:
- Probabilistically/hopefully, not all counterparties will be malicious. UTXOs for channels that are not in the process of unilaterally closing can be used.
- The `ConfirmationTarget::UrgentOnchainSweep` fee rate estimate should be set relatively high, although we don't document an expected confirmation time outside of CLTV_CLAIM_BUFFER as an upper bound.
- Requiring multiple UTXOs per channel is more expensive for the user as they have to pay a fragmentation cost up-front. I did debate dynamically splitting UTXOs in the anchor output spend transaction when required at some point.
I also wanted to provide a recommendation that was as simple as possible for any users.
What do you think?
Perhaps we can include the parameter in the future when claim transactions can be aggregated across channels. At that point, we would only need X UTXOs in total.
I actually had this as a parameter during initial calculations, but ended up not including it for a number of reasons:
* Probabilistically/hopefully, not all counterparties will be malicious. UTXOs for channels that are not in the process of unilaterally closing can be used.
* The `ConfirmationTarget::UrgentOnchainSweep` fee rate estimate should be set relatively high, although we don't document an expected confirmation time outside of [CLTV_CLAIM_BUFFER](https://github.com/lightningdevkit/rust-lightning/blob/86308e19e0f4160d95a3a40767cc8a570765d317/lightning/src/chain/channelmonitor.rs#L233) as an upper bound.
Yeah, `UrgentOnchainSweep` needs better documentation. Or better yet, we should do #3527.
* Requiring multiple UTXOs per channel is more expensive for the user as they have to pay a fragmentation cost up-front. I did debate dynamically splitting UTXOs in the anchor output spend transaction when required at some point.
If we do that then I think we can end up violating the CPFP carve out, which increases the pinning attack surface.
I also wanted to provide a recommendation that was as simple as possible for any users.
What do you think?
Perhaps we can include the parameter in the future when claim transactions can be aggregated across channels. At that point, we would only need X UTXOs in total.
This seems reasonable, but I think we should at least document these decisions so the user is fully aware.
If we do that then I think we can end up violating the CPFP carve out, which increases the pinning attack surface.
In what way? I believe the only unconfirmed input to the anchor output spend transaction would be the anchor output, as LDK will only select confirmed UTXOs to provide funds for the relevant transactions. We would split larger confirmed UTXOs in the anchor output spend transaction depending on the number of in-flight HTLCs. The split UTXOs can then be used once confirmed; every non-anchor output on the commitment transaction should have a CSV lock anyway.
This seems reasonable, but I think we should at least document these decisions so the user is fully aware.
Added these assumptions in the documentation for `get_reserve_per_channel`.
at least X UTXOs per channel
I was thinking a bit more about the verification of the UTXO shape here for the future. The algorithm to find the value that can be provided with the required concurrency guarantees is interesting.
If there is a reserve R to get 20 transactions confirmed, and the confirmation time is 5 blocks, there are a variety of ways to provide sufficient UTXOs, e.g.:
- 5 UTXOs of value R / 5
- 20 UTXOs of value R / 20
- Anything in between
But not:
- 1 UTXO of value R and 5 UTXOs of value R / 20. This can only provide concurrency to get ~6 transactions confirmed without waiting.
It won't be too difficult to verify, but it might be complicated to explain to the users. Or we would just require uniform UTXOs.
If we do that then I think we can end up violating the CPFP carve out, which increases the pinning attack surface.
In what way? I believe the only unconfirmed input to the anchor output spend transaction would be the anchor output, as LDK will only select confirmed UTXOs to provide funds for the relevant transactions. We would split larger confirmed UTXOs in the anchor output spend transaction depending on the number of in-flight HTLCs. The split UTXOs can then be used once confirmed; every non-anchor output on the commitment transaction should have a CSV lock anyway.
Ah, I misunderstood what you were suggesting. We don't want to split UTXOs before the anchor spend (and therefore have another unconfirmed ancestor), but it should be OK to split them in the anchor spend itself. That's a neat idea that may be worth implementing in the future.
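A hypothetical sketch of that future idea, assuming a recent rust-bitcoin where TxOut::value is an Amount: split the change of the anchor output spend transaction (whose only unconfirmed input is the anchor output) into enough pieces to fee-bump the expected claims concurrently once it confirms. The splitting policy and names are illustrative.
use bitcoin::{Amount, ScriptBuf, TxOut};

fn split_change_outputs(
	change_value: Amount, desired_concurrency: u64, change_script: ScriptBuf,
) -> Vec<TxOut> {
	let parts = desired_concurrency.max(1);
	// Integer division; any remainder would go to fees or the last output in a real implementation.
	let per_output = Amount::from_sat(change_value.to_sat() / parts);
	(0..parts).map(|_| TxOut { value: per_output, script_pubkey: change_script.clone() }).collect()
}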
I was thinking a bit more about the verification of the UTXO shape here for the future. The algorithm to find the value that can be provided with the required concurrency guarantees is interesting.
...
It won't be too difficult to verify, but it might be complicated to explain to the users. Or we would just require uniform UTXOs.
It will be more documentation, but I don't think it will be too hard to explain the requirements. The user just needs to understand that the reserve needs to be partitioned into 5 disjoint sets and that the smallest of the sets is the limiting factor on number of channels.
It might make sense to suggest uniform UTXOs in the future `get_required_reserve` documentation, but I don't think we should impose a uniform requirement on the `get_supportable_anchor_channels` side.
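A rough heuristic sketch of that future verification (all names are hypothetical, and the greedy partition is only an approximation): deal the UTXO values into `concurrency` disjoint buckets and let the smallest bucket bound the number of channels.
fn supportable_channels_with_concurrency(
	mut utxo_values_sat: Vec<u64>, concurrency: usize, per_slot_reserve_sat: u64,
) -> u64 {
	if concurrency == 0 || per_slot_reserve_sat == 0 {
		return 0;
	}
	// Largest values first, dealt round-robin, keeps the buckets roughly balanced.
	utxo_values_sat.sort_unstable_by(|a, b| b.cmp(a));
	let mut buckets = vec![0u64; concurrency];
	for (i, value) in utxo_values_sat.into_iter().enumerate() {
		buckets[i % concurrency] += value;
	}
	// The smallest disjoint set is the limiting factor on the number of channels.
	buckets.iter().min().copied().unwrap_or(0) / per_slot_reserve_sat
}
In the example above with concurrency 5 and a per-slot reserve of R / 5, both 5 UTXOs of R / 5 and 20 UTXOs of R / 20 yield one supportable channel, while 1 UTXO of R plus 5 UTXOs of R / 20 does not.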
Force-pushed from 6e59174 to da80c28 (Compare)
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3487      +/-   ##
==========================================
- Coverage   88.55%   88.50%   -0.06%
==========================================
  Files         149      150       +1
  Lines      114944   115171     +227
  Branches   114944   115171     +227
==========================================
+ Hits       101794   101932     +138
- Misses      10663    10746      +83
- Partials     2487     2493       +6
☔ View full report in Codecov by Sentry.
Force-pushed from da80c28 to 21dc5e7 (Compare)
impl Default for AnchorChannelReserveContext {
	fn default() -> Self {
		AnchorChannelReserveContext {
			upper_bound_fee_rate: FeeRate::from_sat_per_kwu(50 * 250),
Can avoid the manual conversion to sat/kw.
-			upper_bound_fee_rate: FeeRate::from_sat_per_kwu(50 * 250),
+			upper_bound_fee_rate: FeeRate::from_sat_per_vb(50).unwrap(),
I mainly wanted to avoid runtime unwrapping in favor of compile-time verification here.
		htlc_success_transaction_weight(context) * (context.expected_accepted_htlcs as u64) +
			htlc_timeout_transaction_weight(context) * (context.expected_accepted_htlcs as u64),
	);
	context.upper_bound_fee_rate.fee_wu(weight).unwrap_or(Amount::MAX)
If overflow actually happens something's not right, and it's going to mess up most of the calculations anyway.
Maybe we should be returning errors throughout instead of silently swallowing them.
Exposed the overflow errors.
oops, missed this. But yea, not sure why we really need to expose overflows - it's the same as just saying "sorry, your numbers are too high, need 21M BTC". A user isn't going to be particularly confused by seeing "21M BTC", and more error types means more work for them to handle it and do something useful with it (which they almost certainly won't, they'll just do the `unwrap_or` themselves).
	let default_satisfaction_fee = context
		.upper_bound_fee_rate
		.fee_wu(Weight::from_wu(if context.taproot_wallet {
			P2TR_INPUT_WEIGHT
		} else {
			P2WPKH_INPUT_WEIGHT
		}))
		.unwrap_or(Amount::MAX);
	let reserve_per_channel = get_reserve_per_channel(context) - default_satisfaction_fee;
Maybe we should just remove the default wallet input weight from `anchor_output_spend_transaction_weight`, so we don't have to subtract it here.
Refactored this to take the weight of the initial inputs as an argument, and made a note that `get_reserve_per_channel` only includes fees to spend a single UTXO of the default type.
Force-pushed from a2604f7 to 278b847 (Compare)
Thanks! This is great, didn't review the constants yet.
		htlc_success_transaction_weight(context) * num_htlcs +
			htlc_timeout_transaction_weight(context) * num_htlcs,
	);
	context.upper_bound_fee_rate.fee_wu(weight).ok_or(AnchorChannelReserveError::FeeOverflow)
Is it really worth returning an error here? The overflow indicates we can only have 0 channels, which would be equivalent to a saturating multiply here, or just returning 21M BTC.
The errors were added in #3487 (comment), but I do agree that saturation is well-defined here. Reverted the commit that introduced the errors for now.
		// As an upper bound, it is assumed that each HTLC is resolved in a separate transaction.
		// However, they might be aggregated when possible depending on timelocks and expiries.
		htlc_success_transaction_weight(context) * num_htlcs +
			htlc_timeout_transaction_weight(context) * num_htlcs,
That's....much simpler than I was thinking this was gonna be.
	/// See [ChannelHandshakeConfig::our_max_accepted_htlcs] to restrict accepted in-flight HTLCs.
	///
	/// [ChannelHandshakeConfig::our_max_accepted_htlcs]: crate::util::config::ChannelHandshakeConfig::our_max_accepted_htlcs
	pub expected_accepted_htlcs: u16,
Should we just look this up for the user? We can pull the default value out of the `ChannelManager`'s default config, and we can (and probably should) look at existing channels for current values (as they may differ from channel to channel).
Also we may be setting different values from our peers, implying we may send more outbound HTLCs than accept inbound HTLCs (tho in that case we're just talking about outbound payments, maybe we don't care as much about claiming those quickly?)
There's also the post-trampoline question, where a single inbound HTLC might turn into multiple outbound HTLCs, but we can leave that for post-trampoline.
Should we just look this up for the user? We can pull the default value out of the ChannelManager's default config, and we can (and probably should) look at existing channels for current values (as they may differ from channel to channel).
The value of `our_max_accepted_htlcs` will be the maximum pending accepted HTLCs for each channel, but it might be an aggressive worst-case estimate in practice. Reserve estimates based on a value of 483, for example, will be pretty high. A user who configures a high value of `our_max_accepted_htlcs` will probably want to look at the number of pending accepted HTLCs in practice and combine it probabilistically with the number for malicious saturation, hence `expected_accepted_htlcs`.
I think we can do the look up by default and take an override though. My only hesitation was that it might be difficult to explain to the user what reserve is needed if we want to look at each existing and new channel individually, and if the reserve needed for each could be a different-sized UTXO that might change over time. The parameter does leave it more up to the user though. What do you think?
Also we may be setting different values from our peers, implying we may send more outbound HTLCs than accept inbound HTLCs (tho in that case we're just talking about outbound payments, maybe we don't care as much about claiming those quickly?)
Yeah I think that accepted HTLCs and corresponding forwards are the main ones we need to confirm quickly. Any resolution for an outbound payment would be fine I believe?
Yea, I think this is fair, but at the same time it's important to recognize that an attacker can flood HTLCs prior to close specifically to run over this limit. I suppose if we want to be particularly clear we should have a "percent of channels controlled by an attacker" knob, where all those channels get the max HTLC count and the rest of the channels don't count at all...
In the meantime, it'd be nice to be a bit more pessimistic with the docs here, at least a note that an attacker may flood us with HTLCs prior to closing a channel.
Added a note about attackers saturating channels with the maximum number of HTLCs, and how this can potentially be included in `expected_accepted_htlcs` as a weighted average for some percentage of channels.
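Illustrative only (the helper and its knob are hypothetical, not LDK API): one way such a weighted average could be computed, giving attacker-controlled channels the protocol maximum and the rest a typical load.
fn weighted_expected_accepted_htlcs(
	typical_accepted_htlcs: u16, max_accepted_htlcs: u16, attacker_channel_fraction: f64,
) -> u16 {
	let f = attacker_channel_fraction.clamp(0.0, 1.0);
	let expected = f * max_accepted_htlcs as f64 + (1.0 - f) * typical_accepted_htlcs as f64;
	// Round up so the reserve errs on the side of caution.
	expected.ceil() as u16
}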
@TheBlueMatt In a future with custom commitment transactions, this module will likely need an interface for the satisfaction weights of the anchor and HTLC outputs.
// - 1 byte for witness items count
// - 1 byte for the signature length
// - 64 bytes for the Schnorr signature
const P2TR_INPUT_WEIGHT: u64 = (36 + 4 + 1) * WITNESS_SCALE_FACTOR as u64 + (1 + 1 + 64);
nit: P2TR_KEYPATH_INPUT_WEIGHT
Ah thanks! Renamed.
Yea, good point. I assume we'll be able to use the existing signer interface to calculate that, but we can deal with it when we get there.
Force-pushed from 278b847 to d280ab9 (Compare)
LGTM pending squash.
I think the code will be quite useful as-is, and any further improvements can be done in future PRs.
/// - If each transaction takes N blocks to confirm, at least N UTXOs are needed to provide the
///   necessary concurrency.
-/// - If each transaction takes N blocks to confirm, at least N UTXOs are needed to provide the
-///   necessary concurrency.
+/// - If each transaction takes N blocks to confirm, at least N UTXOs per channel are needed to provide the
+///   necessary concurrency.
Added.
LGTM with one more nit on docs, agree please feel free to squash and let's land this!
Force-pushed from d280ab9 to 4e047b3 (Compare)
Thanks! Squashed.
Force-pushed from 4e047b3 to 7354ebe (Compare)
LGTM
Gonna go ahead and land this. @wpaulino can take another look post-merge if he wants.
This change defines anchor reserve requirements by calculating weights and fees for the transactions that need to be confirmed on-chain in the event of a unilateral closure. The calculation is given a set of parameters as input, including the expected fee rate and number of in-flight HTLCs.
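To make the resulting API shape concrete, a hedged usage sketch based on the names discussed in this PR; the exact module path, set of fields, defaults, and return type may differ in the merged code.
use bitcoin::{Amount, FeeRate};
// Module path assumed for illustration.
use lightning::util::anchor_channel_reserves::{
	get_reserve_per_channel, AnchorChannelReserveContext,
};

fn required_reserve(num_channels: u64) -> Amount {
	let context = AnchorChannelReserveContext {
		// Fee rate assumed to prevail when a unilateral close must confirm promptly.
		upper_bound_fee_rate: FeeRate::from_sat_per_kwu(50 * 250),
		// Expected in-flight accepted HTLCs per channel at close time.
		expected_accepted_htlcs: 10,
		// Newly created (change) outputs are P2WPKH rather than P2TR.
		taproot_wallet: false,
	};
	get_reserve_per_channel(&context) * num_channels
}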