Follow-ups to #4227 (Part 1) #4289
Conversation
👋 Hi! I see this is a draft PR.

Going to take another look at this tomorrow before un-drafting it.
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main    #4289    +/-   ##
==========================================
- Coverage   89.38%   89.35%   -0.03%
==========================================
  Files         180      180
  Lines      139834   139821      -13
  Branches   139834   139821      -13
==========================================
- Hits       124985   124932      -53
- Misses      12262    12309      +47
+ Partials     2587     2580       -7
```
We recently began reconstructing ChannelManager::decode_update_add_htlcs on startup, using data present in the Channels. However, we failed to prune HTLCs from this rebuilt map when a given HTLC had already been forwarded to the outbound edge (we pruned correctly if the outbound edge was a closed channel, but not otherwise). Here we fix this bug, which would have caused us to double-forward inbound HTLCs.
There's no need to iterate through all entries in the map; we can instead pull out the specific entry we want.
89f5d07 to c6bb096
lightning/src/ln/channelmanager.rs (Outdated)

```rust
#[cfg(not(any(test, feature = "_test_utils")))]
let reconstruct_manager_from_monitors = false;
#[cfg(any(test, feature = "_test_utils"))]
let reconstruct_manager_from_monitors = true;
```
I think it'd be good to make sure we're on the same page about the path forward for eventually switching a node entirely over to the new persistence scheme for these manager maps --
Would appreciate confirmation of this plan:
Stage 1 (done in this PR, but not for all the desired maps yet): we stop persisting these maps in tests, but always persist them business as usual in prod. This forces the new map reconstruction logic to be used post-restart in tests, but prod still uses the old logic on restart.
Stage 2: once we've merged all the desired reconstruction logic + data and it's all running in tests, let an LDK version or two pass by where we still write all the old fields but support restarting in the case where only the new fields are present
Stage 3: a version or two later, stop writing the old fields and stop requiring the manager to be persisted regularly (or whatever we land on as the final dev UX). This means that if someone upgrades to this version and later wants to downgrade to 0.3 or earlier, they will first need to downgrade to an LDK version from Stage 2, so the manager gets written out as it is on main today, before downgrading further.
Plan looks good to me. Just one question:
Why do we first want to merge all the desired reconstruction logic? Is that to avoid multiple "reconstruct_from_monitors" flags? If that's the reason, perhaps it's worth having multiple flags, so that we can skip Stage 1 and keep the various reconstruction changes independent.
We are working on removing the requirement to regularly persist the ChannelManager, and as a result recently began reconstructing the manager's forwards maps from Channel data on startup; see cb398f6 and its parent commits. At the time, we implemented ChannelManager::read to prefer the newly reconstructed maps, partly to ensure we have test coverage of the new maps' usage. This resulted in a lot of code to deduplicate HTLCs that were also present in the old maps, to avoid redundant HTLC handling and duplicate forwards, adding extra complexity.

Instead, prefer to use the old maps if they are present (which, for now, will always be the case in prod), but avoid writing the legacy maps in test mode so tests always exercise the new paths.
c6bb096 to 425747e
```rust
	info.prev_funding_outpoint == prev_hop_data.outpoint
		&& info.prev_htlc_id == prev_hop_data.htlc_id
};
for (htlc_source, (htlc, preimage_opt)) in monitor.get_all_current_outbound_htlcs()
```
Would re-arranging the loop in a separate commit make the diff easier to review?
Addresses a chunk of the feedback from #4227 (review) (#4280). Splitting it out for ease of review.