
BOLT5: fix claim backward revoked htlc success #322


Conversation


@ariard ariard commented Mar 15, 2019

Seems we forgot to implement the case where a revoked HTLC-Success is broadcast, providing a preimage for an HTLC that is still valid on the backward channel. We need to extract the preimage, map it to a source still available in prev/current_remote_commitment_tx, and pass an htlc_update upstream.
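For illustration, a minimal self-contained sketch of that flow, assuming the sha2 crate and hypothetical types (this is not rust-lightning's actual API): scan the revoked HTLC-Success input's witness for a 32-byte preimage and match it against the HTLCs still pending on the backward channel.

use sha2::{Digest, Sha256}; // assumed dependency, purely for this sketch

/// Hypothetical stand-in for an HTLC we forwarded and may still claim backward.
struct BackwardHtlc {
    payment_hash: [u8; 32],
}

/// Scan a witness stack for a 32-byte element whose SHA-256 equals the payment
/// hash of a still-valid backward HTLC; on a match, the preimage is what we
/// would pass upstream in an htlc_update.
fn extract_backward_preimage<'a>(
    witness: &[Vec<u8>],
    backward_htlcs: &'a [BackwardHtlc],
) -> Option<(&'a BackwardHtlc, [u8; 32])> {
    for element in witness {
        if element.len() != 32 {
            continue; // payment preimages are exactly 32 bytes
        }
        let mut preimage = [0u8; 32];
        preimage.copy_from_slice(element);
        let hash = Sha256::digest(&preimage);
        if let Some(htlc) = backward_htlcs.iter().find(|h| h.payment_hash[..] == hash[..]) {
            return Some((htlc, preimage));
        }
    }
    None
}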

Collaborator

@TheBlueMatt TheBlueMatt left a comment


Hmm, I believe this was deliberately left out - if it's the case, it implies we can (and will) claim the HTLC transaction once we pass it to check_spend_remote_htlc. While we could double-claim the funds here, guessing which HTLC(s) to claim based on the hash+value in a (semi-)unrelated data structure feels super awkward.

@@ -1903,6 +1918,10 @@ impl ChannelMonitor {
					// has timed out, or we screwed up. In any case, we should now
					// resolve the source HTLC with the original sender.
					payment_data = Some(((*source).clone(), htlc_output.payment_hash));
				} else if let Storage::Local { ref current_remote_commitment_txid, .. } = self.key_storage {
					check_htlc_valid_remote!(current_remote_commitment_txid, htlc_output);
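For readers following the diff, here is a rough self-contained sketch of what a check like check_htlc_valid_remote! plausibly does (all names and types here are stand-ins, not the real macro): look up the HTLC sources recorded under that remote commitment txid and return the source whose hash and value match the output being claimed.

use std::collections::HashMap;

// Illustrative stand-ins for the real types.
#[derive(Clone)]
struct HtlcSource;
struct HtlcOutput {
    payment_hash: [u8; 32],
    amount_msat: u64,
}

/// Sketch of the lookup: for the given remote commitment txid, find a stored
/// HTLC whose hash and value match the output being claimed, and return its
/// source so it can be resolved upstream.
fn check_htlc_valid_remote(
    remote_claimable_outpoints: &HashMap<[u8; 32], Vec<(HtlcOutput, Option<HtlcSource>)>>,
    commitment_txid: &[u8; 32],
    observed: &HtlcOutput,
) -> Option<(HtlcSource, [u8; 32])> {
    for (htlc, source) in remote_claimable_outpoints.get(commitment_txid)? {
        // Hash + value is the only handle we have here; as discussed below,
        // two HTLCs sharing both are indistinguishable at this layer.
        if htlc.payment_hash == observed.payment_hash && htlc.amount_msat == observed.amount_msat {
            if let Some(source) = source {
                return Some((source.clone(), htlc.payment_hash));
            }
        }
    }
    None
}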
Collaborator


Can we move these checks into the if let Some(...) = self.remote_claimable_outpoints.get(&input.previous_output.txid) conditional below? They don't make any sense in the spends-local-commitment-transaction branch.

Author


Kept it where it is, but used $local_tx to avoid iterating twice over htlc_outputs.

@ariard ariard force-pushed the 2019-03-fix-claim-backward-revoked-htlc-success branch from 8a26811 to 17b8e76 on March 23, 2019 00:44
@ariard
Author

ariard commented Mar 23, 2019

There we will claim the revoked HTLC-Success tx via check_spend_remote_htlc, but we also have to fulfill the BOLT 5 requirement to extract the preimage for a still-valid HTLC on the backward channel if we don't have it yet. Which funds do we risk double-claiming?

Agreed that's a bit hacky, but the alternative was to avoid pruning offered-HTLC output sources from remote_claimable_outpoints at provide_secret and keep them forever. It seems to be a tradeoff between hash collision (but at least we claim something backward) and storage.

@TheBlueMatt
Collaborator

TheBlueMatt commented Mar 23, 2019 via email

@ariard
Author

ariard commented Mar 23, 2019

Hmmm, I agree with the BOLT there: why fail the HTLC back to our forwarded-from counterparty when you have the opportunity to succeed the payment? From a network-wide view, we just improve the payment delivery rate, and we avoid the burden on the origin node of recomputing a route and on processing nodes of handling new onions.

IMO, strictly speaking it isn't a double-claim of the same funds, because they result from 2 different funding_outpoints?

Will split this up and add a big comment explaining the tradeoff; I think taking the could-be-"wrong"-source way is better than eternal temporary storage (the first is highly unlikely, the second always certain).

Edit: we could prune offered HTLC sources after expiration or fulfillment, but that may add a fair amount of processing for a watchtower.
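A minimal sketch of that pruning idea, with hypothetical fields (the real data structures differ): retain an offered-HTLC source only while it is unfulfilled and unexpired.

/// Hypothetical record for an offered HTLC whose source we keep around.
struct OfferedHtlc {
    cltv_expiry: u32,
    fulfilled: bool,
    // payment_hash, amount_msat, source, ...
}

/// Drop sources that no longer need resolving backward; running this on each
/// connected block is the extra processing cost mentioned above.
fn prune_offered_sources(offered: &mut Vec<OfferedHtlc>, height: u32) {
    offered.retain(|h| !h.fulfilled && h.cltv_expiry > height);
}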

@TheBlueMatt
Collaborator

I guess, thinking about this more: because the check is "same hash and value", it's no different from any other claim - we cannot verify the sender is "the correct one and not someone trying to deanonymize a payment" if the value and hash are the same. My point about the BOLT was that it's dumb for the BOLT to tell us we MUST do this, though recommending we MAY do this would seem acceptable; it should be up to the client how much they want to double-claim funds related to the same payment.

@ariard ariard force-pushed the 2019-03-fix-claim-backward-revoked-htlc-success branch 2 times, most recently from 50ea516 to eaab840 on March 25, 2019 18:22
@ariard ariard force-pushed the 2019-03-fix-claim-backward-revoked-htlc-success branch from eaab840 to ef8cc4d on March 25, 2019 18:49
Collaborator

@TheBlueMatt TheBlueMatt left a comment


Will take the first commit as #325; dunno about the test, though.

// Dummy header building on the current tip (time/bits/nonce are placeholder values).
header = BlockHeader { version: 0x20000000, prev_blockhash: header.bitcoin_hash(), merkle_root: Default::default(), time: 42, bits: 42, nonce: 42 };
// nodes[0] sees the revoked HTLC claim on-chain and should broadcast its own claim against it.
nodes[0].chain_monitor.block_connected_with_filtering(&Block { header, txdata: vec![node_txn[1].clone()] }, 1);
test_revoked_htlc_claim_txn_broadcast(&nodes[0], node_txn[1].clone());
// nodes[1] connects the same transaction (plus one more) so its monitor processes them too.
nodes[1].chain_monitor.block_connected_with_filtering(&Block { header, txdata: vec![node_txn[1].clone(), node_txn[2].clone()] }, 1);
Collaborator


Ugh, I really hate this test - at this point nodes[1] has a pending fail_backwards of the HTLC and is just waiting on a future process_pending_htlc_forwards() call. We rely on claims bypassing the pending_forwards stuff to skip the fail and actually fulfill. At some point I'd kinda like to batch claims in addition to fails, which would break this test. I'd kinda prefer we just leave the test change until #305 lands.

Author


Got your point, agree to wait on this

@@ -1903,6 +1919,13 @@ impl ChannelMonitor {
					// has timed out, or we screwed up. In any case, we should now
					// resolve the source HTLC with the original sender.
					payment_data = Some(((*source).clone(), htlc_output.payment_hash));
				} else if !$local_tx {
					log_claim!($tx_info, $local_tx, htlc_output, false);
Collaborator


Hmm, I think you only want to do this in the payment_data case.

					if let Storage::Local { ref current_remote_commitment_txid, .. } = self.key_storage {
						check_htlc_valid_remote!(current_remote_commitment_txid, htlc_output);
					} else if let Storage::Local { ref prev_remote_commitment_txid, .. } = self.key_storage {
						check_htlc_valid_remote!(prev_remote_commitment_txid, htlc_output);
Collaborator


This else if block can never be hit - it matches the exact same Storage::Local pattern as the if above it, so the first branch always wins. It should also only run if the previous check failed to set payment_data.

Author


Yes, in fact I even failed to build a test that hits this, but I wasn't 100% sure it can't be hit.

@@ -1903,6 +1919,13 @@ impl ChannelMonitor {
					// has timed out, or we screwed up. In any case, we should now
					// resolve the source HTLC with the original sender.
					payment_data = Some(((*source).clone(), htlc_output.payment_hash));
				} else if !$local_tx {
Collaborator


Note that we still want to hit the last else even if !$local_tx, as long as payment_data is still None.
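Putting the three review comments together, a self-contained sketch of the intended control flow (with a hypothetical function standing in for the macros): consult the current remote commitment first, the previous one only when nothing was found, and still take the final else for an unmatched non-local tx.

// Illustrative stand-ins, not the real ChannelMonitor internals.
struct HtlcOutput;
struct PaymentData;

/// Stand-in for check_htlc_valid_remote!: returns Some(..) when the given
/// commitment txid has a stored source matching the HTLC's hash and value.
fn check_htlc_valid_remote(_txid: &[u8; 32], _htlc: &HtlcOutput) -> Option<PaymentData> {
    None
}

fn resolve_claim(
    current_remote_txid: &[u8; 32],
    prev_remote_txid: &[u8; 32],
    htlc_output: &HtlcOutput,
    local_tx: bool,
) -> Option<PaymentData> {
    // Try the current remote commitment first...
    let mut payment_data = check_htlc_valid_remote(current_remote_txid, htlc_output);
    if payment_data.is_none() {
        // ...and the previous one only if nothing matched yet; chaining this as
        // an `else if` on the same Storage::Local pattern would be dead code.
        payment_data = check_htlc_valid_remote(prev_remote_txid, htlc_output);
    }
    if payment_data.is_none() && !local_tx {
        // The final else still runs for an unmatched non-local claim,
        // e.g. the log_claim!(..) call in the diff above.
    }
    payment_data
}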

@ariard
Author

ariard commented Aug 2, 2019

Anyway, now that tracking has landed, I will rewrite a test on this point as part of the BOLT 5 review.

@ariard ariard closed this Aug 2, 2019