
Create SpendableOutputs events no matter the chain::Confirm order #970


Merged
68 changes: 44 additions & 24 deletions lightning/src/chain/channelmonitor.rs
@@ -370,8 +370,8 @@ impl OnchainEventEntry {
conf_threshold
}

fn has_reached_confirmation_threshold(&self, height: u32) -> bool {
height >= self.confirmation_threshold()
fn has_reached_confirmation_threshold(&self, best_block: &BestBlock) -> bool {
best_block.height() >= self.confirmation_threshold()
}
}
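
To make the intent of this first hunk concrete, here is a standalone sketch with simplified stand-in types (not the real rust-lightning definitions; the 6-block margin is purely illustrative): by taking a `&BestBlock` instead of a caller-supplied height, the threshold check can only ever be made against the monitor's tracked tip.

```rust
// Illustrative only: simplified stand-ins for the real rust-lightning types.
struct BestBlock { height: u32 }
impl BestBlock { fn height(&self) -> u32 { self.height } }

struct OnchainEventEntry { height: u32 }
impl OnchainEventEntry {
    // Hypothetical ANTI_REORG_DELAY-style threshold: entry height plus a margin.
    fn confirmation_threshold(&self) -> u32 { self.height + 6 }
    // After this PR, the check takes the tracked best block, not a caller height.
    fn has_reached_confirmation_threshold(&self, best_block: &BestBlock) -> bool {
        best_block.height() >= self.confirmation_threshold()
    }
}

fn main() {
    let entry = OnchainEventEntry { height: 100 };
    let tip = BestBlock { height: 106 };
    assert!(entry.has_reached_confirmation_threshold(&tip));
}
```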

@@ -1331,7 +1331,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
macro_rules! claim_htlcs {
($commitment_number: expr, $txid: expr) => {
let htlc_claim_reqs = self.get_counterparty_htlc_output_claim_reqs($commitment_number, $txid, None);
self.onchain_tx_handler.update_claims_view(&Vec::new(), htlc_claim_reqs, self.best_block.height(), broadcaster, fee_estimator, logger);
self.onchain_tx_handler.update_claims_view(&Vec::new(), htlc_claim_reqs, self.best_block.height(), self.best_block.height(), broadcaster, fee_estimator, logger);
}
}
if let Some(txid) = self.current_counterparty_commitment_txid {
@@ -1353,11 +1353,14 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
// *we* sign a holder commitment transaction, not when e.g. a watchtower broadcasts one of our
// holder commitment transactions.
if self.broadcasted_holder_revokable_script.is_some() {
let (claim_reqs, _) = self.get_broadcasted_holder_claims(&self.current_holder_commitment_tx, 0);
self.onchain_tx_handler.update_claims_view(&Vec::new(), claim_reqs, self.best_block.height(), broadcaster, fee_estimator, logger);
// Assume that the broadcasted commitment transaction confirmed in the current best
// block. Even if not, its a reasonable metric for the bump criteria on the HTLC

I think this comment is a bit confusing: the bumping timer is only a function of the current height and the expiration timelock (see get_height_timer in package.rs), not of the parent transaction's confirmation. So leveraging the best known height as the comparison point to refresh timers is the best we can do?

Collaborator Author

get_broadcasted_holder_claims sets the PackageTemplate::soonest_conf_deadline to the height given here, so it is the "expiration timelock".


Yes, in this case the parent conf height is equal to PackageTemplate::soonest_conf_deadline, which is equal to the "expiration timelock", but my point is that rather than relying on the parent conf height, which has the side-effect of artificially increasing the fee-bump frequency and thus bleeding feerate a bit, we should rely on the per-holder-claim timelocks themselves to fill soonest_conf_deadline.

Though I don't think this comment needs further action for now, and it seems related to the change just after.

Collaborator Author

I'm still a bit confused here, but I'll leave it for a followup.

// transactions.
let (claim_reqs, _) = self.get_broadcasted_holder_claims(&self.current_holder_commitment_tx, self.best_block.height());
self.onchain_tx_handler.update_claims_view(&Vec::new(), claim_reqs, self.best_block.height(), self.best_block.height(), broadcaster, fee_estimator, logger);
if let Some(ref tx) = self.prev_holder_signed_commitment_tx {
let (claim_reqs, _) = self.get_broadcasted_holder_claims(&tx, 0);
self.onchain_tx_handler.update_claims_view(&Vec::new(), claim_reqs, self.best_block.height(), broadcaster, fee_estimator, logger);
let (claim_reqs, _) = self.get_broadcasted_holder_claims(&tx, self.best_block.height());
self.onchain_tx_handler.update_claims_view(&Vec::new(), claim_reqs, self.best_block.height(), self.best_block.height(), broadcaster, fee_estimator, logger);
}
}
}
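
As a rough illustration of the point debated in the thread above, here is a hedged sketch of a height-timer schedule; the function name, constants, and back-off are invented for illustration and are not the real get_height_timer in package.rs, which should be consulted for the actual logic.

```rust
// Hypothetical sketch: the bump timer is derived from the current height and
// the claim's expiration timelock, not from the parent transaction's
// confirmation height. The schedule below is made up for illustration.
fn illustrative_height_timer(current_height: u32, soonest_conf_deadline: u32) -> u32 {
    let blocks_remaining = soonest_conf_deadline.saturating_sub(current_height);
    if blocks_remaining <= 3 {
        current_height + 1                     // deadline imminent: try to bump every block
    } else {
        current_height + blocks_remaining / 2  // otherwise back off
    }
}

fn main() {
    // With soonest_conf_deadline set to the HTLC's cltv_expiry (as the new
    // build_package call does), the timer tightens as the deadline approaches.
    assert_eq!(illustrative_height_timer(100, 140), 120);
    assert_eq!(illustrative_height_timer(138, 140), 139);
}
```
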
@@ -1724,7 +1727,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
// Returns (1) `PackageTemplate`s that can be given to the OnChainTxHandler, so that the handler can
// broadcast transactions claiming holder HTLC commitment outputs and (2) a holder revokable
// script so we can detect whether a holder transaction has been seen on-chain.
fn get_broadcasted_holder_claims(&self, holder_tx: &HolderSignedTx, height: u32) -> (Vec<PackageTemplate>, Option<(Script, PublicKey, PublicKey)>) {
fn get_broadcasted_holder_claims(&self, holder_tx: &HolderSignedTx, conf_height: u32) -> (Vec<PackageTemplate>, Option<(Script, PublicKey, PublicKey)>) {
let mut claim_requests = Vec::with_capacity(holder_tx.htlc_outputs.len());

let redeemscript = chan_utils::get_revokeable_redeemscript(&holder_tx.revocation_key, self.on_holder_tx_csv, &holder_tx.delayed_payment_key);
@@ -1743,7 +1746,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
};
HolderHTLCOutput::build_accepted(payment_preimage, htlc.amount_msat)
};
let htlc_package = PackageTemplate::build_package(holder_tx.txid, transaction_output_index, PackageSolvingData::HolderHTLCOutput(htlc_output), height, false, height);
let htlc_package = PackageTemplate::build_package(holder_tx.txid, transaction_output_index, PackageSolvingData::HolderHTLCOutput(htlc_output), htlc.cltv_expiry, false, conf_height);
claim_requests.push(htlc_package);
}
}
@@ -1856,7 +1859,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
} else if htlc.0.cltv_expiry > self.best_block.height() + 1 {
// Don't broadcast HTLC-Timeout transactions immediately as they don't meet the
// current locktime requirements on-chain. We will broadcast them in
// `block_confirmed` when `would_broadcast_at_height` returns true.
// `block_confirmed` when `should_broadcast_holder_commitment_txn` returns true.
// Note that we add + 1 as transactions are broadcastable when they can be
// confirmed in the next block.
continue;
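
A minimal sketch of the broadcastability rule stated in that code comment, using a hypothetical helper rather than anything from the codebase:

```rust
// Illustrative helper: a transaction with an absolute locktime of `cltv_expiry`
// is worth broadcasting once it could confirm in the *next* block, i.e. once
// cltv_expiry <= best_height + 1. Otherwise it is deferred to block_confirmed.
fn can_broadcast_htlc_timeout(cltv_expiry: u32, best_height: u32) -> bool {
    cltv_expiry <= best_height + 1
}

fn main() {
    assert!(!can_broadcast_htlc_timeout(110, 105)); // too early, defer
    assert!(can_broadcast_htlc_timeout(106, 105));  // confirmable in the next block
}
```
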
@@ -1926,13 +1929,13 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {

if height > self.best_block.height() {
self.best_block = BestBlock::new(block_hash, height);
self.block_confirmed(height, vec![], vec![], vec![], broadcaster, fee_estimator, logger)
} else {
self.block_confirmed(height, vec![], vec![], vec![], &broadcaster, &fee_estimator, &logger)
} else if block_hash != self.best_block.block_hash() {
self.best_block = BestBlock::new(block_hash, height);
self.onchain_events_awaiting_threshold_conf.retain(|ref entry| entry.height <= height);
self.onchain_tx_handler.block_disconnected(height + 1, broadcaster, fee_estimator, logger);
Vec::new()
}
} else { Vec::new() }
}

fn transactions_confirmed<B: Deref, F: Deref, L: Deref>(
@@ -2004,33 +2007,49 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
self.is_paying_spendable_output(&tx, height, &logger);
}

self.block_confirmed(height, txn_matched, watch_outputs, claimable_outpoints, broadcaster, fee_estimator, logger)
if height > self.best_block.height() {

Maybe you can add the commit message here: "This avoids complexity in block_confirmed by never having a tracked height set which is higher than our current best confirmed chain, potentially avoiding some bugs in the rather-complicated code"?

Collaborator Author

I put it in the doc comment of block_confirmed instead.

self.best_block = BestBlock::new(block_hash, height);
Contributor

Should we log_trace here as we do in block_connected and best_block_updated? Regardless of which methods, should we log only when best_block is updated or (in particular for the other two) unconditionally when the methods are called?

Taking a step back, it feels like some of this logging may be redundant across other monitors and may not provide much value. Maybe the logging should be placed in ChainMonitor instead? Then again, logging when the best block is actually updated could be valuable.

Collaborator Author

Yea, I think there's a lot of room for improvement here, let's do it in a follow-up?

Contributor

SGTM

}

self.block_confirmed(height, txn_matched, watch_outputs, claimable_outpoints, &broadcaster, &fee_estimator, &logger)
}

/// Update state for new block(s)/transaction(s) confirmed. Note that the caller must update
/// `self.best_block` before calling if a new best blockchain tip is available. More
/// concretely, `self.best_block` must never be at a lower height than `conf_height`, avoiding
/// complexity especially in `OnchainTx::update_claims_view`.
///
/// `conf_height` should be set to the height at which any new transaction(s)/block(s) were
/// confirmed at, even if it is not the current best height.
fn block_confirmed<B: Deref, F: Deref, L: Deref>(
&mut self,
height: u32,
conf_height: u32,
txn_matched: Vec<&Transaction>,
mut watch_outputs: Vec<TransactionOutputs>,
mut claimable_outpoints: Vec<PackageTemplate>,
broadcaster: B,
fee_estimator: F,
logger: L,
broadcaster: &B,
fee_estimator: &F,
logger: &L,
) -> Vec<TransactionOutputs>
where
B::Target: BroadcasterInterface,
F::Target: FeeEstimator,
L::Target: Logger,
{
let should_broadcast = self.would_broadcast_at_height(height, &logger);
debug_assert!(self.best_block.height() >= conf_height);

let should_broadcast = self.should_broadcast_holder_commitment_txn(logger);
if should_broadcast {
let funding_outp = HolderFundingOutput::build(self.funding_redeemscript.clone());
let commitment_package = PackageTemplate::build_package(self.funding_info.0.txid.clone(), self.funding_info.0.index as u32, PackageSolvingData::HolderFundingOutput(funding_outp), height, false, height);
let commitment_package = PackageTemplate::build_package(self.funding_info.0.txid.clone(), self.funding_info.0.index as u32, PackageSolvingData::HolderFundingOutput(funding_outp), self.best_block.height(), false, self.best_block.height());
claimable_outpoints.push(commitment_package);
self.pending_monitor_events.push(MonitorEvent::CommitmentTxBroadcasted(self.funding_info.0));
let commitment_tx = self.onchain_tx_handler.get_fully_signed_holder_tx(&self.funding_redeemscript);
self.holder_tx_signed = true;
let (mut new_outpoints, _) = self.get_broadcasted_holder_claims(&self.current_holder_commitment_tx, height);
// Because we're broadcasting a commitment transaction, we should construct the package
// assuming it gets confirmed in the next block. Sadly, we have code which considers
// "not yet confirmed" things as discardable, so we cannot do that here.
let (mut new_outpoints, _) = self.get_broadcasted_holder_claims(&self.current_holder_commitment_tx, self.best_block.height());
let new_outputs = self.get_broadcasted_holder_watch_outputs(&self.current_holder_commitment_tx, &commitment_tx);
if !new_outputs.is_empty() {
watch_outputs.push((self.current_holder_commitment_tx.txid.clone(), new_outputs));
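
The caller contract spelled out in the new block_confirmed doc comment can be sketched with simplified, hypothetical types; the point is only that the tip is advanced before block_confirmed runs, so the debug_assert on conf_height always holds.

```rust
// Illustrative only: a toy monitor, not the real ChannelMonitorImpl.
struct Monitor { best_block_height: u32 }

impl Monitor {
    fn transactions_confirmed(&mut self, conf_height: u32) {
        // Advance the tracked tip first, as the diff above does...
        if conf_height > self.best_block_height {
            self.best_block_height = conf_height;
        }
        self.block_confirmed(conf_height);
    }

    fn block_confirmed(&mut self, conf_height: u32) {
        // ...so this invariant from the doc comment always holds.
        debug_assert!(self.best_block_height >= conf_height);
        // Claims are then built against the tip, while conf_height is passed
        // through to the onchain tx handler.
    }
}

fn main() {
    let mut m = Monitor { best_block_height: 105 };
    // Confirmations delivered for an older block after the tip already moved:
    m.transactions_confirmed(103);
    assert_eq!(m.best_block_height, 105);
}
```
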
@@ -2043,7 +2062,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
self.onchain_events_awaiting_threshold_conf.drain(..).collect::<Vec<_>>();
let mut onchain_events_reaching_threshold_conf = Vec::new();
for entry in onchain_events_awaiting_threshold_conf {
if entry.has_reached_confirmation_threshold(height) {
if entry.has_reached_confirmation_threshold(&self.best_block) {
onchain_events_reaching_threshold_conf.push(entry);
} else {
self.onchain_events_awaiting_threshold_conf.push(entry);
@@ -2098,7 +2117,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
}
}

self.onchain_tx_handler.update_claims_view(&txn_matched, claimable_outpoints, height, &&*broadcaster, &&*fee_estimator, &&*logger);
self.onchain_tx_handler.update_claims_view(&txn_matched, claimable_outpoints, conf_height, self.best_block.height(), broadcaster, fee_estimator, logger);

// Determine new outputs to watch by comparing against previously known outputs to watch,
// updating the latter in the process.
@@ -2200,7 +2219,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
false
}

fn would_broadcast_at_height<L: Deref>(&self, height: u32, logger: &L) -> bool where L::Target: Logger {
fn should_broadcast_holder_commitment_txn<L: Deref>(&self, logger: &L) -> bool where L::Target: Logger {
// We need to consider all HTLCs which are:
// * in any unrevoked counterparty commitment transaction, as they could broadcast said
// transactions and we'd end up in a race, or
@@ -2211,6 +2230,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
// to the source, and if we don't fail the channel we will have to ensure that the next
// updates that peer sends us are update_fails, failing the channel if not. It's probably
// easier to just fail the channel as this case should be rare enough anyway.
let height = self.best_block.height();
macro_rules! scan_commitment {
($htlcs: expr, $holder_tx: expr) => {
for ref htlc in $htlcs {
37 changes: 20 additions & 17 deletions lightning/src/chain/onchaintx.rs
@@ -343,15 +343,15 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
/// (CSV or CLTV following cases). In case of high-fee spikes, claim tx may stuck in the mempool, so you need to bump its feerate quickly using Replace-By-Fee or Child-Pay-For-Parent.
/// Panics if there are signing errors, because signing operations in reaction to on-chain events
/// are not expected to fail, and if they do, we may lose funds.
fn generate_claim_tx<F: Deref, L: Deref>(&mut self, height: u32, cached_request: &PackageTemplate, fee_estimator: &F, logger: &L) -> Option<(Option<u32>, u64, Transaction)>
fn generate_claim_tx<F: Deref, L: Deref>(&mut self, cur_height: u32, cached_request: &PackageTemplate, fee_estimator: &F, logger: &L) -> Option<(Option<u32>, u64, Transaction)>
where F::Target: FeeEstimator,
L::Target: Logger,
{
if cached_request.outpoints().len() == 0 { return None } // But don't prune pending claiming request yet, we may have to resurrect HTLCs

// Compute new height timer to decide when we need to regenerate a new bumped version of the claim tx (if we
// didn't receive confirmation of it before, or not enough reorg-safe depth on top of it).
let new_timer = Some(cached_request.get_height_timer(height));
let new_timer = Some(cached_request.get_height_timer(cur_height));
if cached_request.is_malleable() {
let predicted_weight = cached_request.package_weight(&self.destination_script);
if let Some((output_value, new_feerate)) = cached_request.compute_package_output(predicted_weight, fee_estimator, logger) {
@@ -377,12 +377,15 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
/// for this channel, provide new relevant on-chain transactions and/or new claim requests.
/// Formerly this was named `block_connected`, but it is now also used for claiming an HTLC output
/// if we receive a preimage after force-close.
pub(crate) fn update_claims_view<B: Deref, F: Deref, L: Deref>(&mut self, txn_matched: &[&Transaction], requests: Vec<PackageTemplate>, height: u32, broadcaster: &B, fee_estimator: &F, logger: &L)
/// `conf_height` represents the height at which the transactions in `txn_matched` were
/// confirmed. This does not need to equal the current blockchain tip height, which should be
/// provided via `cur_height`, however it must never be higher than `cur_height`.
pub(crate) fn update_claims_view<B: Deref, F: Deref, L: Deref>(&mut self, txn_matched: &[&Transaction], requests: Vec<PackageTemplate>, conf_height: u32, cur_height: u32, broadcaster: &B, fee_estimator: &F, logger: &L)
where B::Target: BroadcasterInterface,
F::Target: FeeEstimator,
L::Target: Logger,
{
log_debug!(logger, "Updating claims view at height {} with {} matched transactions and {} claim requests", height, txn_matched.len(), requests.len());
log_debug!(logger, "Updating claims view at height {} with {} matched transactions in block {} and {} claim requests", cur_height, txn_matched.len(), conf_height, requests.len());
let mut preprocessed_requests = Vec::with_capacity(requests.len());
let mut aggregated_request = None;

@@ -401,17 +404,17 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
continue;
}

if req.package_timelock() > height + 1 {
log_info!(logger, "Delaying claim of package until its timelock at {} (current height {}), the following outpoints are spent:", req.package_timelock(), height);
if req.package_timelock() > cur_height + 1 {
log_info!(logger, "Delaying claim of package until its timelock at {} (current height {}), the following outpoints are spent:", req.package_timelock(), cur_height);
for outpoint in req.outpoints() {
log_info!(logger, " Outpoint {}", outpoint);
}
self.locktimed_packages.entry(req.package_timelock()).or_insert(Vec::new()).push(req);
continue;
}

log_trace!(logger, "Test if outpoint can be aggregated with expiration {} against {}", req.timelock(), height + CLTV_SHARED_CLAIM_BUFFER);
if req.timelock() <= height + CLTV_SHARED_CLAIM_BUFFER || !req.aggregable() {
log_trace!(logger, "Test if outpoint can be aggregated with expiration {} against {}", req.timelock(), cur_height + CLTV_SHARED_CLAIM_BUFFER);
if req.timelock() <= cur_height + CLTV_SHARED_CLAIM_BUFFER || !req.aggregable() {
// Don't aggregate if outpoint package timelock is soon or marked as non-aggregable
preprocessed_requests.push(req);
} else if aggregated_request.is_none() {
@@ -425,8 +428,8 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
preprocessed_requests.push(req);
}

// Claim everything up to and including height + 1
let remaining_locked_packages = self.locktimed_packages.split_off(&(height + 2));
// Claim everything up to and including cur_height + 1
let remaining_locked_packages = self.locktimed_packages.split_off(&(cur_height + 2));
for (pop_height, mut entry) in self.locktimed_packages.iter_mut() {
log_trace!(logger, "Restoring delayed claim of package(s) at their timelock at {}.", pop_height);
preprocessed_requests.append(&mut entry);
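
The locktimed-package handling above can be mirrored with plain std collections; this sketch uses made-up package labels and heights, but the split_off(&(cur_height + 2)) boundary is the same: everything spendable by the next block is drained for processing, later locktimes stay parked.

```rust
use std::collections::BTreeMap;

fn main() {
    // locktime height -> packages parked until that height (labels are made up)
    let mut locktimed_packages: BTreeMap<u32, Vec<&str>> = BTreeMap::new();
    locktimed_packages.entry(101).or_insert_with(Vec::new).push("htlc-a");
    locktimed_packages.entry(105).or_insert_with(Vec::new).push("htlc-b");

    let cur_height = 100;
    // Keep everything locked beyond cur_height + 1; what remains behind is now
    // claimable and gets appended to the preprocessed requests.
    let remaining = locktimed_packages.split_off(&(cur_height + 2));
    let claimable: Vec<&str> = locktimed_packages.into_values().flatten().collect();

    assert_eq!(claimable, vec!["htlc-a"]);
    assert!(remaining.contains_key(&105));
}
```
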
@@ -436,13 +439,13 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
// Generate claim transactions and track them to bump if necessary at
// height timer expiration (i.e in how many blocks we're going to take action).
for mut req in preprocessed_requests {
if let Some((new_timer, new_feerate, tx)) = self.generate_claim_tx(height, &req, &*fee_estimator, &*logger) {
if let Some((new_timer, new_feerate, tx)) = self.generate_claim_tx(cur_height, &req, &*fee_estimator, &*logger) {
req.set_timer(new_timer);
req.set_feerate(new_feerate);
let txid = tx.txid();
for k in req.outpoints() {
log_info!(logger, "Registering claiming request for {}:{}", k.txid, k.vout);
self.claimable_outpoints.insert(k.clone(), (txid, height));
self.claimable_outpoints.insert(k.clone(), (txid, conf_height));
}
self.pending_claim_requests.insert(txid, req);
log_info!(logger, "Broadcasting onchain {}", log_tx!(tx));
Expand Down Expand Up @@ -476,7 +479,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
() => {
let entry = OnchainEventEntry {
txid: tx.txid(),
height,
height: conf_height,
event: OnchainEvent::Claim { claim_request: first_claim_txid_height.0.clone() }
};
if !self.onchain_events_awaiting_threshold_conf.contains(&entry) {
@@ -516,7 +519,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
for package in claimed_outputs_material.drain(..) {
let entry = OnchainEventEntry {
txid: tx.txid(),
height,
height: conf_height,
event: OnchainEvent::ContentiousOutpoint { package },
};
if !self.onchain_events_awaiting_threshold_conf.contains(&entry) {
@@ -529,7 +532,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
let onchain_events_awaiting_threshold_conf =
self.onchain_events_awaiting_threshold_conf.drain(..).collect::<Vec<_>>();
for entry in onchain_events_awaiting_threshold_conf {
if entry.has_reached_confirmation_threshold(height) {
if entry.has_reached_confirmation_threshold(cur_height) {
match entry.event {
OnchainEvent::Claim { claim_request } => {
// We may remove a whole set of claim outpoints here, as these one may have
@@ -555,7 +558,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
// Check if any pending claim request must be rescheduled
for (first_claim_txid, ref request) in self.pending_claim_requests.iter() {
if let Some(h) = request.timer() {
if height >= h {
if cur_height >= h {

Is this change opening up the possibility of a duplicated broadcast?

Say you first call best_block_updated and bump best_block to N.
Then you call transactions_confirmed and register a request at block N - 3, generating a first claim transaction.
Then you hit the timer returning T >= H and generate a second claim transaction.

I don't think so, because we call get_height_timer with cur_height in generate_claim_tx, though I would be glad to have this potential interaction more highlighted.

Collaborator Author

Hmmmm, should we swap the ordering of things in update_claims_view and first bump RBF transactions and only process the requests after that? I'm a bit hesitant to, given it will probably mean a ton of test changes.


See the above comment about height collision; I would favor this ordering swap to avoid re-broadcasting. I believe that post-anchor, that kind of internal API defaulting issue might force us to commit fee-bumping reserve for nothing?

Though it's not related to this PR, and we can keep thinking about it.

Collaborator Author

I'm a bit confused about that comment, but I'd think the solution is to track outputs which were on chain and spent and never re-spend them, instead of re-spending them after ANTI_REORG_DELAY.


Yeah, blacklisting confirmed utxos would work too, so I think it's more an issue of how duplicate block processing interferes with the update_claims_view() list of calls.

Collaborator Author

Oh, hmm, I guess I just misread your above comment. Let me try the ordering swap in a new PR? I don't really want to hold this/0.0.99 up to redo a ton of tests.

bump_candidates.insert(*first_claim_txid, (*request).clone());
}
}
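
To make the resolution of this thread concrete, here is a standalone sketch with made-up claim ids and timer values (none of these are rust-lightning types): because generate_claim_tx derives the timer from cur_height, a claim registered in the current call always carries a timer strictly above cur_height, so it cannot land in bump_candidates in the same pass.

```rust
use std::collections::HashMap;

fn main() {
    let cur_height = 105u32;
    // claim id -> height timer recorded when its claim tx was generated (made up)
    let pending_claim_timers: HashMap<&str, u32> = HashMap::from([
        ("claim-registered-earlier", 104),   // timer set in a previous call, now expired
        ("claim-registered-this-call", 108), // freshly set from cur_height, still in the future
    ]);

    // Mirror of the `cur_height >= h` check above: only expired timers are bumped.
    let bump_candidates: Vec<&str> = pending_claim_timers
        .iter()
        .filter(|(_, &timer)| cur_height >= timer)
        .map(|(&id, _)| id)
        .collect();

    assert_eq!(bump_candidates, vec!["claim-registered-earlier"]);
}
```
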
@@ -564,7 +567,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
// Build, bump and rebroadcast tx accordingly
log_trace!(logger, "Bumping {} candidates", bump_candidates.len());
for (first_claim_txid, request) in bump_candidates.iter() {
if let Some((new_timer, new_feerate, bump_tx)) = self.generate_claim_tx(height, &request, &*fee_estimator, &*logger) {
if let Some((new_timer, new_feerate, bump_tx)) = self.generate_claim_tx(cur_height, &request, &*fee_estimator, &*logger) {
log_info!(logger, "Broadcasting RBF-bumped onchain {}", log_tx!(bump_tx));
broadcaster.broadcast_transaction(&bump_tx);
if let Some(request) = self.pending_claim_requests.get_mut(first_claim_txid) {
4 changes: 2 additions & 2 deletions lightning/src/ln/channel.rs
@@ -114,8 +114,8 @@ enum InboundHTLCState {
/// commitment transaction without it as otherwise we'll have to force-close the channel to
/// claim it before the timeout (obviously doesn't apply to revoked HTLCs that we can't claim
/// anyway). That said, ChannelMonitor does this for us (see
/// ChannelMonitor::would_broadcast_at_height) so we actually remove the HTLC from our own
/// local state before then, once we're sure that the next commitment_signed and
/// ChannelMonitor::should_broadcast_holder_commitment_txn) so we actually remove the HTLC from
/// our own local state before then, once we're sure that the next commitment_signed and
/// ChannelMonitor::provide_latest_local_commitment_tx will not include this HTLC.
LocalRemoved(InboundHTLCRemovalReason),
}