Conversation

@antiochp
Member

@antiochp antiochp commented Jan 11, 2019

Still very much a WIP. Do not merge before mainnet...

  • rewrite Dandelion logic (WIP) to follow Dandelion++
    • add concept of a node epoch (see the sketch after this list)
    • relay immediately to next stem relay when in stem mode
    • node in fluff mode for a given epoch will collect and aggregate txs
    • get rid of pool entry states, ToStem etc. No longer necessary to track this state.
  • less cloning when adding txs to the pool
  • pass slices around and not vecs directly
  • other assorted cleanup around tx handling
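
A rough sketch (Rust, with made-up names, durations and probabilities - not the actual grin types) of the per-epoch state described above: each node periodically starts a new epoch, decides whether it is in stem or fluff mode for that epoch, and picks a single stem relay peer for the duration.

```rust
// Hypothetical sketch of per-epoch Dandelion++ state; names, durations and
// probabilities are illustrative assumptions, not the actual grin API.
use std::time::{Duration, Instant};

const EPOCH_DURATION: Duration = Duration::from_secs(10 * 60); // assumed 10 min epochs
const STEM_PROBABILITY: u32 = 90; // assumed 90% of epochs are stem epochs

#[derive(Debug, Clone, Copy, PartialEq)]
enum DandelionMode {
    Stem,  // relay each stem tx immediately to the chosen relay peer
    Fluff, // collect and aggregate stem txs, then broadcast
}

struct DandelionEpoch {
    mode: DandelionMode,
    relay_peer: Option<u64>, // id of the peer chosen as stem relay for this epoch
    started_at: Instant,
}

impl DandelionEpoch {
    /// Start a new epoch, randomly choosing stem vs fluff mode for this node.
    fn new(rand_percent: u32, relay_peer: Option<u64>) -> Self {
        let mode = if rand_percent < STEM_PROBABILITY {
            DandelionMode::Stem
        } else {
            DandelionMode::Fluff
        };
        DandelionEpoch { mode, relay_peer, started_at: Instant::now() }
    }

    /// An epoch expires after a fixed duration; the node then picks a new mode
    /// and a new relay peer.
    fn is_expired(&self) -> bool {
        self.started_at.elapsed() > EPOCH_DURATION
    }
}

fn main() {
    // e.g. a pseudo-random draw of 42 (< 90) puts this node in stem mode.
    let epoch = DandelionEpoch::new(42, Some(7));
    println!("mode: {:?}, expired: {}", epoch.mode, epoch.is_expired());
}
```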

@antiochp
Member Author

antiochp commented Feb 5, 2019

Just had a thought. Somebody on gitter asked about fees and how Dandelion affects them.

If we implement Dandelion++ then we effectively delay any aggregation until the txs hit a node that is acting in fluff mode for the duration of its current epoch.
This has the nice effect of allowing the node to collect all the txs, bucket them by fee (somehow, rules tbd) and then aggregate them in a way that minimizes any gaming of the fees.
i.e. We could potentially aggregate high fee txs separately from low fee txs.

And we can decide to exclude some txs from aggregation if fees are too low, without impacting other txs currently scheduled for aggregation and fluffing.
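
For illustration, a minimal sketch of what bucketing by fee rate before aggregation could look like - the Tx type, units and bucket width here are assumptions, since the actual rules are still tbd:

```rust
// Minimal sketch of bucketing txs by fee rate before aggregation; not grin's
// actual types or rules.
use std::collections::BTreeMap;

#[derive(Debug)]
struct Tx {
    fee: u64,    // total fee (assumed unit)
    weight: u64, // tx weight
}

impl Tx {
    fn fee_per_weight(&self) -> u64 {
        self.fee / self.weight.max(1)
    }
}

/// Group txs into buckets of similar fee rate so each bucket can be
/// aggregated (and fluffed) independently.
fn bucket_by_fee_rate(txs: Vec<Tx>, bucket_width: u64) -> BTreeMap<u64, Vec<Tx>> {
    let mut buckets = BTreeMap::new();
    for tx in txs {
        let bucket = tx.fee_per_weight() / bucket_width;
        buckets.entry(bucket).or_insert_with(Vec::new).push(tx);
    }
    buckets
}

fn main() {
    let txs = vec![
        Tx { fee: 1_000, weight: 10 },  // 100 per weight unit
        Tx { fee: 50_000, weight: 10 }, // 5,000 per weight unit
        Tx { fee: 1_200, weight: 12 },  // 100 per weight unit
    ];
    // With an assumed bucket width of 1,000 the two low fee txs end up together
    // and the high fee tx would be aggregated separately.
    for (bucket, group) in bucket_by_fee_rate(txs, 1_000) {
        println!("bucket {}: {} txs", bucket, group.len());
    }
}
```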

@antiochp antiochp self-assigned this Feb 7, 2019
@antiochp antiochp force-pushed the dandelion_plus_plus branch from d3630fa to 1b5b32b Compare February 7, 2019 14:30
@JeremyRubin
Contributor

I see that this is in conflict with #2548 -- it may be worth it to take #2548 if this will be WIP for a while, but otherwise, #2548 can be dropped.

I think that the delaying of aggregation sounds unlikely to work from an incentive PoV -- aren't nodes fundamentally likely to want to aggregate in their own lower fee transactions into higher fee transactions passing by?

@antiochp
Member Author

antiochp commented Feb 18, 2019

I think that the delaying of aggregation sounds unlikely to work from an incentive PoV -- aren't nodes fundamentally likely to want to aggregate in their own lower fee transactions into higher fee transactions passing by?

Once fluffed there is no incentive to do this, as other nodes will "deaggregate" and undo anything that was just aggregated. If any other node has seen the original unaggregated tx then it can deaggregate and recover your tx.
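
For anyone unfamiliar with deaggregation, a conceptual sketch (deliberately simplified types, and ignoring the kernel offset adjustment a real tx would need): if you have already seen one of the component txs, stripping its inputs, outputs and kernels out of the aggregate recovers the other component.

```rust
// Conceptual sketch of "deaggregation"; placeholder types, not grin's.
use std::collections::HashSet;

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Kernel(u64); // stand-in for a kernel excess

#[derive(Debug, Clone)]
struct SimpleTx {
    inputs: HashSet<u64>,
    outputs: HashSet<u64>,
    kernels: HashSet<Kernel>,
}

/// Remove a known component tx from an aggregate, recovering the remainder.
fn deaggregate(aggregate: &SimpleTx, known: &SimpleTx) -> SimpleTx {
    SimpleTx {
        inputs: aggregate.inputs.difference(&known.inputs).cloned().collect(),
        outputs: aggregate.outputs.difference(&known.outputs).cloned().collect(),
        kernels: aggregate.kernels.difference(&known.kernels).cloned().collect(),
    }
}

fn main() {
    let known = SimpleTx {
        inputs: [1].into_iter().collect(),
        outputs: [10].into_iter().collect(),
        kernels: [Kernel(100)].into_iter().collect(),
    };
    let aggregate = SimpleTx {
        inputs: [1, 2].into_iter().collect(),
        outputs: [10, 20].into_iter().collect(),
        kernels: [Kernel(100), Kernel(200)].into_iter().collect(),
    };
    // Recovers the other component tx: input 2, output 20, kernel 200.
    let recovered = deaggregate(&aggregate, &known);
    println!("{:?}", recovered);
}
```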

The only time this would be effective is during the stem phase (which is probably what you were saying).
In the stem phase you can do this and there is nothing to prevent it.
Nodes will be free to aggregate or not aggregate as they see fit.
But they can also decide to simply not pass a tx on to the next relay node at all - so there are easier (and cheaper) ways of causing delays to existing txs.

So if you are a node operator you could wait for a stem tx (with high fees) to pass through your node to try and take advantage of the aggregation (to save on fees) - but this would be an issue whether we aggregated by default on each stem node (Dandelion) or only on the fluffing node (Dandelion++).

If you decide to send a tx via Dandelion with high fees then a couple of things happen -

  1. You effectively sacrifice the fee because your tx is not going to confirm any faster
  2. You are at risk of other nodes taking advantage and aggregating low fee txs with it
    • But this effectively increases your anonymity/privacy so maybe it's worth the additional fees?

Maybe nodes will keep their own low fee (low-priority) txs around waiting for high fee stem txs to pass by - but maybe that's ok as the benefits (privacy/anonymity) potentially offset any downsides.

One thing we do need to solve is what is actually permitted (in the consensus rules) in terms of range of permissible fees (per kernel) in a multi-kernel aggregated tx.
i.e. maybe it is invalid to have a kernel with fee=100 aggregated with another kernel with fee=10,000.
Only kernels with similar fees (per unit of weight), within some range, could be aggregated together and remain valid.
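
Something along these lines, purely as a sketch - the 10x ratio and the per-kernel fee model are placeholders, not a proposal:

```rust
// Hedged sketch of the kind of consensus check being discussed: only allow
// kernels whose fees are within some ratio of each other in a single
// aggregated tx. The ratio is an assumed placeholder value.
const MAX_FEE_RATIO: u64 = 10; // assumed: highest kernel fee at most 10x the lowest

/// Returns true if the per-kernel fees are close enough to be aggregated.
fn fees_within_range(kernel_fees: &[u64]) -> bool {
    match (kernel_fees.iter().min(), kernel_fees.iter().max()) {
        (Some(&min), Some(&max)) => min > 0 && max <= min * MAX_FEE_RATIO,
        _ => true, // no kernels, nothing to check
    }
}

fn main() {
    assert!(fees_within_range(&[100, 250, 900])); // within 10x of each other
    assert!(!fees_within_range(&[100, 10_000]));  // 100x apart, would be invalid
    println!("fee range checks passed");
}
```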

@antiochp antiochp force-pushed the dandelion_plus_plus branch from 1b5b32b to c6c4ecb Compare February 18, 2019 14:40
This was referenced Feb 21, 2019
@antiochp
Member Author

Closing. Replaced with #2628.

@antiochp antiochp closed this Feb 25, 2019