option_data_loss_protect-on-channel_reestablishment #244
Conversation
Addresses #238.
Hmm, I'm a little bit skeptical of data_loss_protect, as it relies on the other peer's benevolence. I mean, how do we handle "MUST NOT broadcast its commitment transaction and SHOULD fail the channel"? To answer your IRC question, compressed pubkeys are 33 bytes long.
The spec is very vague on it. The best info I found was from reading the LND and Elements implementations' source code. As far as I understand it, the idea is that it protects you from publishing old commitment transactions to the chain, so you won't lose the funds in the channel. We should only withhold publishing when we get a valid your_last_per_commitment_secret that is newer than the one we have. As for the size, yeah, I realised it this morning.
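The decision described above can be sketched as follows. This is a minimal illustration of the data_loss_protect check on channel_reestablish, not rust-lightning's actual API; the enum, function, and parameter names are all hypothetical, and the per-commitment-secret derivation is stubbed out as a closure:

```rust
// Hypothetical sketch: decide what to do when the peer's channel_reestablish
// claims a newer state than ours. Names are illustrative only.

#[derive(Debug, PartialEq)]
enum DataLossAction {
    /// Peer's claimed state is consistent with ours; proceed normally.
    Continue,
    /// Peer proves it is ahead of us: we lost data. Fail the channel but
    /// MUST NOT broadcast our (stale) commitment transaction.
    FailWithoutBroadcast,
    /// Peer's claim is inconsistent (wrong secret); fail the channel and
    /// broadcast our latest commitment transaction as usual.
    FailAndBroadcast,
}

fn check_data_loss(
    our_next_revocation_number: u64,     // next commitment number we expect
    peer_claimed_revocation_number: u64, // from the peer's reestablish msg
    peer_sent_secret: [u8; 32],          // your_last_per_commitment_secret
    expected_secret_for: impl Fn(u64) -> [u8; 32], // our secret derivation
) -> DataLossAction {
    if peer_claimed_revocation_number <= our_next_revocation_number {
        return DataLossAction::Continue;
    }
    // The peer claims to be ahead of us. Only believe it (and refuse to
    // broadcast our stale state) if it proves knowledge of a
    // per-commitment secret we would have revealed to it.
    if peer_sent_secret == expected_secret_for(peer_claimed_revocation_number - 1) {
        DataLossAction::FailWithoutBroadcast
    } else {
        DataLossAction::FailAndBroadcast
    }
}
```

The key point is that "newer than the one we have" is only trusted when the accompanying secret actually verifies; otherwise the peer could trick us into never broadcasting.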
I looked at it; ChannelError probably needs to be changed as well for that return.
A chunk of the reason for data_loss_protect is so that you at least have the per_commitment_point and can claim your to_remote_output when the remote side broadcasts the latest commitment transaction. We need that, but it's unclear where things are going with the 1.1 spec, which we may end up only supporting anyway. That's not to say we shouldn't support this: ensuring we can claim to_remote_output now is pretty important.
So this also (obviously, as you point out) needs a mechanism to indicate "fail channel, but don't broadcast latest local tx" in ChannelError, and ChannelManager needs to keep enough info around to broadcast local txn after some time has passed (assuming the remote doesn't broadcast after we send them an error message). Also, we need to be able to pass the new per_commitment_point to the ChannelMonitor so that we can reconstruct the to_remote_output private key if they broadcast their latest commitment transaction. Finally, we need to set the local_flags bit in our init message.
Note that the bit about ChannelMonitor will largely go away with option_simplified_commitment, see https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States
When is 1.1 going live?
When everyone implements it :p. I presume it'll be soon enough that we can just rely on it as we're still a ways from being considered stable enough to put real funds in (and it means we have less fee crap to implement =D).
Note that 1.1 will be a series of feature bits, not a wholesale protocol rev.
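Since 1.1 is a set of feature bits, advertising data_loss_protect comes down to setting bits in the localfeatures field of the init message (BOLT 9 assigns option_data_loss_protect bits 0 and 1, even = compulsory, odd = optional). A minimal sketch of that bookkeeping, with a simplified struct that is not rust-lightning's actual LocalFeatures type:

```rust
// Illustrative sketch of localfeatures bit handling for
// option_data_loss_protect (bits 0/1 per BOLT 9). Not the real API.

struct LocalFeatures {
    flags: Vec<u8>, // least-significant byte first in this sketch
}

impl LocalFeatures {
    fn new() -> Self {
        LocalFeatures { flags: vec![0] }
    }

    /// Advertise optional data_loss_protect (odd bit 1).
    fn set_data_loss_protect_optional(&mut self) {
        self.flags[0] |= 1 << 1;
    }

    /// True if either the compulsory (0) or optional (1) bit is set.
    fn supports_data_loss_protect(&self) -> bool {
        !self.flags.is_empty() && self.flags[0] & 0b11 != 0
    }

    /// True if the peer requires support (even bit 0 set).
    fn requires_data_loss_protect(&self) -> bool {
        !self.flags.is_empty() && self.flags[0] & 0b01 != 0
    }
}
```

Setting only the odd (optional) bit keeps us compatible with peers that haven't implemented the feature, per the "it's OK to be odd" rule.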
src/ln/channelmanager.rs:

```diff
@@ -386,7 +395,7 @@ macro_rules! handle_error {
 			Ok(msg) => Ok(msg),
 			Err(MsgHandleErrInternal { err, shutdown_finish }) => {
 				if let Some((shutdown_res, update_option)) = shutdown_finish {
-					$self.finish_force_close_channel(shutdown_res);
+					$self.finish_force_close_channel(shutdown_res,true);
```
Why not put the flag in ShutdownResult so you don't have to keep track of it everywhere?
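The suggestion is to carry the "should we broadcast our latest local commitment tx?" decision inside ShutdownResult itself, so callers of finish_force_close_channel don't need to thread a separate bool through every call site. A sketch under that assumption; the struct contents and the bool return (added only so the sketch is testable) are illustrative, not the actual rust-lightning definitions:

```rust
// Illustrative sketch: the broadcast decision travels with the shutdown
// data instead of as an extra parameter at every call site.

struct ShutdownResult {
    // monitor updates, pending-HTLC data, etc. would live here
    /// False when data_loss_protect tells us our state is stale and we
    /// MUST NOT broadcast our commitment transaction.
    should_broadcast: bool,
}

fn finish_force_close_channel(res: ShutdownResult) -> bool {
    // ...drop channel state, fail pending HTLCs backwards, etc...
    if res.should_broadcast {
        // broadcaster.broadcast_transaction(&latest_local_commitment_tx);
    }
    res.should_broadcast // returned here only to make the sketch testable
}
```

With this shape, the code that constructs the ShutdownResult (which is also the code that knows whether data loss was detected) makes the decision once, and it cannot get out of sync with the value passed to the caller.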
I will push a version that is not like it. But that version will require sending a bool back from channel.force_shutdown(), or do I have to pass it in like it was up to now?
Another option, perhaps, is to still remove the new error type and only save a flag in the channel indicating whether that error occurred or not.
Definitely needs tests, but looking good.
```diff
@@ -83,12 +83,12 @@ impl Drop for Node {
 	}
 }
 
-fn create_chan_between_nodes(node_a: &Node, node_b: &Node) -> (msgs::ChannelAnnouncement, msgs::ChannelUpdate, msgs::ChannelUpdate, [u8; 32], Transaction) {
+fn create_chan_between_nodes(node_a: &Node, node_b: &Node, local_features: &Option<msgs::LocalFeatures>) -> (msgs::ChannelAnnouncement, msgs::ChannelUpdate, msgs::ChannelUpdate, [u8; 32], Transaction) {
 	create_chan_between_nodes_with_value(node_a, node_b, 100000, 10001)
```
It may be easier diff/rebase-wise if you just create a new intermediary wrapper function and leave the existing functions as-is. Just create a create_chan_between_nodes_with_features and make the other functions call it.
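The wrapper idea can be sketched like this. Everything here is stubbed for illustration (the real helpers take full Node structs and return a channel-announcement tuple; the bool return exists only to make the sketch testable), but it shows how existing call sites stay untouched:

```rust
// Illustrative sketch of the suggested test-helper wrapper: keep the old
// signature as-is and add a *_with_features variant it delegates to.

struct Node;
struct LocalFeatures;

fn create_chan_between_nodes_with_features(
    _node_a: &Node,
    _node_b: &Node,
    local_features: &Option<LocalFeatures>,
) -> bool {
    // the real helper would open the channel, exchanging these flags
    // in the init messages; here we just report whether any were given
    local_features.is_some()
}

/// Existing callers keep compiling unchanged: they implicitly pass None.
fn create_chan_between_nodes(node_a: &Node, node_b: &Node) -> bool {
    create_chan_between_nodes_with_features(node_a, node_b, &None)
}
```

The trade-off discussed below is real, though: with many helpers in the chain, every one needs a delegating twin, which is why the author preferred to eat the rebase cost instead.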
You mean like rename the original, and let the original calls go towards a wrapper that puts in None?
Yea, that should reduce the diff and make rebase much easier.
I stopped doing this halfway; in my opinion it's going to make the code a bit of a mess. We are going to add about 10 extra duplicate functions.
I would rather take on the complicated task of rebasing than do that.
Hey, sorry for the delay in taking this back up. This is looking pretty good, though I don't think we want to track our peer's feature flags in Channel: if the peer upgrades, we may want to start using a different set of features (at least wrt data_loss_protect) than we started with. I think you should be able to just pass a &LocalFeatureFlags into the relevant handle_* messages.
Closing since #349 got merged.