option_data_loss_protect-on-channel_reestablishment #244

Conversation

SWvheerden
Contributor

No description provided.

@SWvheerden SWvheerden changed the title from "Initial WIP push" to "[WIP] option_data_loss_protect-on-channel_reestablishment" Nov 7, 2018
@SWvheerden
Contributor Author

SWvheerden commented Nov 7, 2018

Addresses #238

@ariard

ariard commented Nov 8, 2018

Hmm, I'm a little skeptical of data_loss_protect, as it relies on the other peer's benevolence. How do we enforce "MUST NOT broadcast its commitment transaction and SHOULD fail the channel"? If the other peer is malevolent, it can force us to leave the channel open endlessly, right?

To answer your IRC question: compressed pubkeys are 33 bytes long.

@SWvheerden
Contributor Author

The spec is very vague on it. The best info I found came from reading the LND and Elements implementations' source code.

As far as I understand it, the idea is that it protects you from publishing old commitment transactions to the chain, so you won't lose the funds in the channel. We should refrain from publishing only when we receive a valid "your_last_per_commitment_secret" that is newer than the one we have.
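
A minimal sketch of that check, with hypothetical names throughout (check_data_loss, secret_for, CloseWithoutBroadcast are stand-ins, not this codebase's API):

```rust
/// Hypothetical stand-ins for the message and error types involved.
struct ChannelReestablish {
	next_remote_commitment_number: u64,
	your_last_per_commitment_secret: [u8; 32],
}

enum ChannelError {
	/// Fail the channel and broadcast our latest local commitment tx.
	Close(&'static str),
	/// Fail the channel but do NOT broadcast our (possibly stale) tx.
	CloseWithoutBroadcast(&'static str),
}

/// `our_commitment_number` is the commitment we currently hold; `secret_for`
/// recomputes the per-commitment secret we previously revealed for a given
/// commitment number (BOLT 3 shachain derivation).
fn check_data_loss(
	msg: &ChannelReestablish,
	our_commitment_number: u64,
	secret_for: impl Fn(u64) -> [u8; 32],
) -> Result<(), ChannelError> {
	if msg.next_remote_commitment_number > 0 {
		// The peer proves what it last saw by echoing the secret we revealed.
		if secret_for(msg.next_remote_commitment_number - 1) != msg.your_last_per_commitment_secret {
			return Err(ChannelError::Close("Peer sent an invalid your_last_per_commitment_secret"));
		}
		if msg.next_remote_commitment_number > our_commitment_number {
			// The proven claim is newer than our state: we have lost data and
			// MUST NOT broadcast our stale commitment transaction.
			return Err(ChannelError::CloseWithoutBroadcast("We have fallen behind"));
		}
	}
	Ok(())
}
```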

As for the size, yeah, I realised it this morning.

@SWvheerden
Contributor Author

Looking at it, ChannelError probably needs to be changed as well for that return.

@TheBlueMatt
Collaborator

TheBlueMatt commented Nov 9, 2018

A chunk of the reason for data_loss_protect is so that you at least have the per_commitment_point and can claim your to_remote_output when the remote side broadcasts the latest commitment transaction. We need that, but it's unclear where things are going with the 1.1 spec, which we may end up only supporting anyway.

That's not to say we shouldn't support this - ensuring we can claim to_remote_output now is pretty important.
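
For context, a hedged sketch of the BOLT 3 derivation this enables: given the peer's latest per_commitment_point (which data_loss_protect carries in channel_reestablish) and our payment basepoint secret, we can reconstruct the to_remote private key. This assumes the rust-secp256k1 and bitcoin_hashes APIs of this era, and is not code from this PR:

```rust
use bitcoin_hashes::{sha256, Hash, HashEngine};
use secp256k1::{All, PublicKey, Secp256k1, SecretKey};

// privkey = payment_basepoint_secret + SHA256(per_commitment_point || payment_basepoint)
fn derive_to_remote_privkey(
	secp: &Secp256k1<All>,
	per_commitment_point: &PublicKey, // the peer's, from channel_reestablish
	payment_base_secret: &SecretKey,  // our payment basepoint secret
) -> Result<SecretKey, secp256k1::Error> {
	let payment_basepoint = PublicKey::from_secret_key(secp, payment_base_secret);
	let mut engine = sha256::Hash::engine();
	engine.input(&per_commitment_point.serialize());
	engine.input(&payment_basepoint.serialize());
	let tweak = sha256::Hash::from_engine(engine);
	let mut key = *payment_base_secret;
	key.add_assign(&tweak[..])?; // tweak-add, mod the curve order
	Ok(key)
}
```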

@TheBlueMatt
Collaborator

So this also (obviously, as you point out) needs a mechanism in ChannelError to indicate "fail the channel, but don't broadcast the latest local tx", and ChannelManager needs to keep enough info around to broadcast local txn after some time has passed (assuming the remote doesn't broadcast after we send them an error message). Also, we need to be able to pass the new per_commitment_point to the ChannelMonitor so that we can reconstruct the to_remote_output private key if they broadcast their latest commitment transaction. Finally, this needs to set the local_flags bit in our init message.
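
One plausible shape for that ChannelError mechanism (a sketch of the suggestion, not necessarily the form the code took):

```rust
// Errors a Channel can surface to ChannelManager (illustrative).
pub enum ChannelError {
	/// Ignore the message and carry on.
	Ignore(&'static str),
	/// Fail the channel and broadcast our latest local commitment tx.
	Close(&'static str),
	/// Fail the channel but hold back our latest local commitment tx: we may
	/// have lost state, so broadcasting could forfeit our funds. The manager
	/// keeps the tx around and broadcasts only after a timeout, if the remote
	/// never broadcasts theirs.
	CloseDelayBroadcast(&'static str),
}
```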

@TheBlueMatt
Collaborator

TheBlueMatt commented Nov 14, 2018 via email

@SWvheerden
Contributor Author

When is 1.1 going live?

@TheBlueMatt
Collaborator

TheBlueMatt commented Nov 14, 2018 via email

@TheBlueMatt
Collaborator

TheBlueMatt commented Nov 14, 2018 via email

@SWvheerden SWvheerden force-pushed the SW_option_data_loss_protect-on-channel_reestablishment- branch from dcd5d1c to 240384c Compare November 20, 2018 06:25
@SWvheerden SWvheerden force-pushed the SW_option_data_loss_protect-on-channel_reestablishment- branch 2 times, most recently from c52644e to a468c35 Compare November 29, 2018 12:45
@SWvheerden SWvheerden force-pushed the SW_option_data_loss_protect-on-channel_reestablishment- branch 4 times, most recently from 3f15958 to efee362 Compare January 9, 2019 13:18
@@ -386,7 +395,7 @@ macro_rules! handle_error {
 			Ok(msg) => Ok(msg),
 			Err(MsgHandleErrInternal { err, shutdown_finish }) => {
 				if let Some((shutdown_res, update_option)) = shutdown_finish {
-					$self.finish_force_close_channel(shutdown_res);
+					$self.finish_force_close_channel(shutdown_res,true);
Collaborator


Why not put the flag in ShutdownResult so you don't have to keep track of it everywhere?
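
A sketch of that suggestion (illustrative field names; the real ShutdownResult in this codebase is shaped differently):

```rust
use bitcoin::blockdata::transaction::Transaction;

pub struct ShutdownResult {
	/// Our latest local commitment transaction (and any HTLC-spend txn).
	pub broadcast_txn: Vec<Transaction>,
	/// The flag this review suggests folding in, so call sites don't have to
	/// pass a separate bool to finish_force_close_channel.
	pub should_broadcast: bool,
}
```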

Contributor Author


I will push a version that doesn't do it this way. But that version will either require sending a bool back from channel.force_shutdown(), or I'll have to pass it in like it has been up to now?

Contributor Author


Another option, perhaps, is to still remove the new error type and only save a flag in the channel indicating whether that error occurred or not.

@SWvheerden SWvheerden changed the title from "[WIP] option_data_loss_protect-on-channel_reestablishment" to "option_data_loss_protect-on-channel_reestablishment" Jan 15, 2019
Collaborator

@TheBlueMatt TheBlueMatt left a comment


Definitely needs tests, but looking good.

@SWvheerden SWvheerden force-pushed the SW_option_data_loss_protect-on-channel_reestablishment- branch from e03a27b to 95d485f Compare January 18, 2019 05:47
@SWvheerden SWvheerden force-pushed the SW_option_data_loss_protect-on-channel_reestablishment- branch from 95d485f to f4995cf Compare January 18, 2019 05:49
@@ -83,12 +83,12 @@ impl Drop for Node {
 	}
 }
 
-fn create_chan_between_nodes(node_a: &Node, node_b: &Node) -> (msgs::ChannelAnnouncement, msgs::ChannelUpdate, msgs::ChannelUpdate, [u8; 32], Transaction) {
-	create_chan_between_nodes_with_value(node_a, node_b, 100000, 10001)
+fn create_chan_between_nodes(node_a: &Node, node_b: &Node, local_features: &Option<msgs::LocalFeatures>) -> (msgs::ChannelAnnouncement, msgs::ChannelUpdate, msgs::ChannelUpdate, [u8; 32], Transaction) {
Collaborator


It may be easier diff/rebase-wise if you just create a new intermediary wrapper function and leave the existing functions as-is. Just create a create_chan_between_nodes_with_features and make the other functions call it.
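
Something like this, as a sketch against the test helpers in this file:

```rust
// Old signature stays, so existing call sites (and the diff) stay small.
fn create_chan_between_nodes(node_a: &Node, node_b: &Node) -> (msgs::ChannelAnnouncement, msgs::ChannelUpdate, msgs::ChannelUpdate, [u8; 32], Transaction) {
	create_chan_between_nodes_with_features(node_a, node_b, &None)
}

// Tests that care about feature flags call this one directly.
fn create_chan_between_nodes_with_features(node_a: &Node, node_b: &Node, local_features: &Option<msgs::LocalFeatures>) -> (msgs::ChannelAnnouncement, msgs::ChannelUpdate, msgs::ChannelUpdate, [u8; 32], Transaction) {
	// ... existing body, now threading local_features through ...
	unimplemented!()
}
```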

Contributor Author


You mean rename the original, and let the original calls go to a wrapper that passes in None?

Collaborator


Yea, that should reduce the diff and make rebase much easier.

Contributor Author


I stopped doing this halfway; in my opinion it's going to make the code a bit of a mess. We would be adding about 10 extra duplicate functions.

I would rather take on the complicated task of rebasing than do that.

@SWvheerden SWvheerden force-pushed the SW_option_data_loss_protect-on-channel_reestablishment- branch from 482c941 to 2c9c81f Compare January 25, 2019 12:27
@TheBlueMatt
Collaborator

Hey, sorry for the delay in taking this back up. This is looking pretty good, though I don't think we want to track our peer's feature flags in Channel - if the peer upgrades we may want to start using a different set of features (at least wrt data_loss_protect) than we started with. I think you should be able to just pass a &LocalFeatureFlags into the relevant handle_* messages.
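
An illustrative sketch of that signature change (method and flag names are guesses, not the final API):

```rust
impl Channel {
	/// The peer's current feature flags arrive as a parameter instead of
	/// being cached in Channel, so a reconnect after a peer upgrade sees
	/// the flags from its latest init message.
	pub fn channel_reestablish(&mut self, msg: &msgs::ChannelReestablish,
		their_local_features: &msgs::LocalFeatures) -> Result<(), ChannelError> {
		if their_local_features.supports_data_loss_protect() {
			// ... validate msg.your_last_per_commitment_secret, etc. ...
		}
		unimplemented!()
	}
}
```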

@TheBlueMatt
Collaborator

Closing since #349 got merged.

@TheBlueMatt TheBlueMatt closed this Oct 5, 2019