From r.pickhardt at googlemail.com  Mon Mar 14 17:46:57 2022
From: r.pickhardt at googlemail.com (=?UTF-8?Q?Ren=C3=A9_Pickhardt?=)
Date: Mon, 14 Mar 2022 18:46:57 +0100
Subject: [Lightning-dev] Code for sub second runtime of piecewise linearization to quickly approximate the minimum convex cost flow problem (makes fast multi part payments with large amounts possible)
In-Reply-To:
References:
Message-ID:

Dear Carsten and fellow lightning developers,

thanks for going into such detail and discovering some of the minor
inaccuracies of my very rough piecewise linearization!

On Mon, Mar 14, 2022 at 1:53 PM Carsten Otto wrote:

> 1.2) The Mission Control information provided by lnd [...]
> > I think you talk about a maximum available balance of a channel
> > (and not min available balance)?
>
> Yes, although MC also has information about "known" amounts (due to
> failures that only happened further down the road).

I am unsure how Mission Control stores and handles that data. In my
understanding they are mainly interested in the ratio of successful
payments over the past X attempts on a channel within a certain time
interval. But I assume they should have all the relevant data to
produce a proper conditional probability that utilizes our learnt
knowledge.

In any case, in the probabilistic model we can do this mathematically
precisely by just looking at the conditional probabilities. As
mentioned, I have written hands-on instructions in the rust repo
https://github.com/lightningdevkit/rust-lightning/issues/1170#issuecomment-972396747
and they have been fully implemented in
https://github.com/lightningdevkit/rust-lightning/pull/1227. Also, in
our mainnet tests and simulations we updated the priors according to
those rules, and this revealed the full power of the approach.
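The conditional probability update from learnt liquidity bounds can be
sketched roughly as follows. This is a minimal illustration assuming a
uniform liquidity prior on [0, capacity], conditioned on a known
certain liquidity (a smaller payment succeeded before) and a known
upper bound (a larger payment failed); the function name and signature
are hypothetical and not the actual rust-lightning API.

```python
def conditional_success_probability(amount, capacity,
                                    min_liquidity=0, max_liquidity=None):
    """Probability that `amount` can be forwarded over a channel,
    assuming a uniform liquidity prior on [0, capacity] conditioned on
    prior observations: `min_liquidity` is known to be available (an
    earlier attempt of that size succeeded) and `max_liquidity` is
    known to be unavailable (an attempt of that size failed).
    Hypothetical helper for illustration only."""
    if max_liquidity is None:
        # no failure observed yet: liquidity can be anything up to capacity
        max_liquidity = capacity + 1
    if amount <= min_liquidity:
        return 1.0  # certainly enough liquidity
    if amount >= max_liquidity:
        return 0.0  # certainly not enough liquidity
    # effective amount above the certain liquidity, divided by the
    # effective (conditional) capacity of the remaining uncertainty range
    return (max_liquidity - amount) / (max_liquidity - min_liquidity)
```

With no prior knowledge this reduces to the familiar uniform-prior
success probability (capacity + 1 - amount) / (capacity + 1); with
both bounds known, only the uncertainty interval between them matters.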
To summarize: we only need to know the effective uncertainty, by
looking at the effective amount that goes above the minimum certain
liquidity (which we might know from a prior attempt) and the effective
capacity (somebody recently suggested that "conditional capacity"
might be a better wording).

> > Assuming that routing nodes indeed do so we would have learnt that
> > neither channel has an effective capacity of N. So the combined
> > virtual channel could be seen as 2N-1.
>
> You mean 2(N-1) = 2N-2?

Probably, though the difference would be negligible, and if I
understood you correctly you will just keep parallel channels separate
anyway.

> > 4) Leftovers after Piecewise Linearization
> > I am not sure if I understand your question / issue here. The
> > splitting works by selecting N points on the domain of the function
> > and splitting the domain into segments at those points. This should
> > never leave sats over.
>
> With quantization of 10,000 a channel of size 123,456 ends up as an
> arc with a capacity of 12 units. Cutting this into 5 pieces gives us
> 5*2 with 2 units not ending up in any of those pieces. Or am I
> missing something here, and we should split into 5 pieces of size
> 2.4 = 12/5?

Your observation is correct! Indeed, I think my code rounds down the
capacity instead of selecting the correct points and using all of the
capacity in the segmentation by making some segments 1 unit larger
than others, which is what would happen if we actually found points on
the domain to build the segments. This could easily be fixed.

However, as always: fully saturated channels mean very low
probabilities. So even in my situation, where I may cut off a
significant part of the channel, I'd say that in the extreme case
where we would need to saturate even those sats, the flow will and
should most likely fail, as the min cut is probably just lower than
the amount we would attempt to send. Opening a new channel or doing an
on-chain transaction will probably be more useful.
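The fix discussed above — using all of the capacity by making some
segments one unit larger than others — can be sketched like this. The
function name is made up for illustration; this is not the code in the
repository.

```python
def split_capacity(capacity_units, pieces=5):
    """Split a quantized channel capacity into at most `pieces`
    segments for piecewise linearization, distributing the remainder
    so that no units are thrown away: the first (capacity % pieces)
    segments are one unit larger than the rest. Illustrative sketch,
    not the repository's implementation."""
    base, rest = divmod(capacity_units, pieces)
    sizes = [base + 1 if i < rest else base for i in range(pieces)]
    # every unit of capacity ends up in exactly one segment
    assert sum(sizes) == capacity_units
    # drop empty segments (channel smaller than `pieces` units)
    return [s for s in sizes if s > 0]
```

For Carsten's example — 123,456 sats quantized by 10,000, i.e. 12
units — this yields segments [3, 3, 2, 2, 2] instead of five pieces of
size 2 with 2 units lost.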
Though of course, at the end of the day we should build the piecewise
linearization correctly, without throwing away capacity.

> > If the quantization however makes a channel so small that we cannot
> > even create 5 (or N) disjoint segments then I guess the likelihood
> > for being included into the final result is too small anyway.
>
> It may not be very likely, but flat-out ignoring 20k sat (in my
> contrived example above) or up to 4*quantization sats (which is the
> case you described) doesn't feel right.

See above. I agree it is not 100% accurate, but in practice I doubt it
would ever become a problem, as this will only be an issue when the
payment amount is very close to the min cut, which would make flows so
unlikely to begin with that we would use other ways to conduct the
payment anyway.

with kind regards
Rene