From: Pieter Wuille
Date: Mon, 11 Jun 2018 18:05:14 -0700
To: Bradley Denby, Bitcoin Dev
Subject: Re: [bitcoin-dev] BIP proposal - Dandelion: Privacy Preserving Transaction Propagation

On Mon, Jun 11, 2018, 07:37 Bradley Denby via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

> Thanks for the comments, Pieter!
>
> We can make the descriptions of the intended node behaviors clearer in
> the BIP.
>
> Regarding interaction with BIPs 37 and 133, we have found that if
> Dandelion routing decisions are based on self-reported features, malicious
> nodes can often exploit that to launch serious deanonymization attacks. As
> a result, we recommend not allowing fee filters from peers to influence the
> choice of route. Your suggestion of automatically fluffing is a good
> solution. Another (similar) option would be to apply fee filters in the
> stempool. This would prevent the tx from propagating in the stem phase, so
> eventually an embargo timer on the stem will expire and the transaction
> will fluff. This is slower than auto-fluffing, but requires (slightly) less
> code.
I understand the argument about not making routing decisions based on
self-reported features, but I would expect it to only matter if done
selectively? Allowing a node to opt out of Dandelion entirely should always
be possible regardless, as they can always indicate not supporting it. The
reason for my suggestion was that most full nodes on the network use
feefilter, while generally only light nodes and blocksonly nodes
(uninteresting from the perspective of Dandelion) use Bloom filters. Just
dropping stem transactions that would otherwise be sent to a Dandelion peer
which fails its filter, and relying on embargo, seems fine. But perhaps this
option is something to describe in the BIP (for example: "Nodes MAY choose
to either drop stem transactions or immediately start diffusion when a
transaction would otherwise be sent to a Dandelion node whose filter is not
satisfied for that transaction. A node SHOULD NOT make any routing
decisions based on the transaction itself, and thus SHOULD NOT try to find
an alternative Dandelion node to forward to.").

> Regarding mempool-dependent transactions, the reference implementation
> adds any mempool transactions to the stempool but not vice versa, so that
> the stempool becomes a superset of the mempool. In other words, information
> is free to flow from the mempool to the stempool. Information does not flow
> from the stempool to the mempool except when a transaction fluffs. As a
> result, a node's stempool should accept and propagate Dandelion
> transactions that depend on other unconfirmed normal mempool transactions.
> The behavior you described is not intended; if you have any tests
> demonstrating this behavior, would you mind sharing them?

Oh, I see! I was just judging based on the spec code you published, but I
must have missed this. Yes, that makes perfect sense. There may be some
issues with this having a significant impact on stempool memory usage, but
let's discuss that later, on the implementation side.
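For concreteness, the "drop or fluff, never reroute" behavior described above could be sketched roughly as follows (a hypothetical Python illustration; all names are invented for this sketch, and feerates/filters are simplified to plain numbers):

```python
# Sketch of the two options for a stem transaction whose feerate fails the
# chosen Dandelion peer's announced feefilter: either drop it (the embargo
# timer will trigger fluff later) or start diffusion immediately. The node
# never searches for an alternative Dandelion peer, since routing decisions
# must not depend on the transaction itself.

DROP = "drop"    # slower: rely on the embargo timer to fluff eventually
FLUFF = "fluff"  # faster: begin diffusion right away

def stem_relay_action(tx_feerate, peer_feefilter, policy=DROP):
    """Decide what to do with a stem tx for the current route peer."""
    if tx_feerate >= peer_feefilter:
        return "forward"  # normal stem-phase relay to the Dandelion peer
    # Peer's filter rejects the tx; do NOT pick another Dandelion peer.
    return FLUFF if policy == FLUFF else DROP

print(stem_relay_action(2.0, 1.0))                # forward
print(stem_relay_action(0.5, 1.0))                # drop
print(stem_relay_action(0.5, 1.0, policy=FLUFF))  # fluff
```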
> Orphans: stem orphans can occur when a node on the stem shuffles its route
> between sending dependent transactions. One way to deal with this issue
> would be to re-broadcast all previous Dandelion transactions that have not
> been fluffed after Dandelion route shuffling. This could add a fair amount
> of data and logic. This re-broadcast method also telegraphs the fact that a
> Dandelion shuffle has taken place and can result in bursts of transactions
> depending on traffic patterns. A second option (which we used in the
> reference implementation) is to wait for the fluff phase to begin, at which
> point the orphans will be resolved. This should happen within 15 seconds
> for most transactions. Do you have any thoughts on which option would be
> more palatable? Or if there are other options we have missed?

Another option (just brainstorming, I may be missing something here) is to
remember which peer each stempool transaction was forwarded to. When a
dependent stem transaction arrives, it is always sent to (one of?) the
peers its dependencies were sent to, even if a reshuffle happened in
between.

Thinking more about it, relying on embargo is probably fine; it'll just
result in a slightly lower average stem length, and perhaps multiple
simultaneous fluffs starting?

> Regarding preferred connections, we have found that making Dandelion
> routing decisions based on claims made by peer nodes can cause problems,
> and therefore would recommend against biasing the peer selection code.

Oh, I don't mean routing decisions, but connections in general.

> On the implementation side:

Let's discuss these later.

> Based on the feedback we have received so far, we are planning to
> prioritize writing up a clearer spec for node behavior in the BIP. Does
> that seem reasonable, or are there other issues that are more pressing at
> this point?

I think that's the primary thing to focus on at this point, but perhaps
others on this list feel different.
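Returning to the orphan brainstorm above: the remember-the-forwarding-peer idea might look roughly like this (a hypothetical Python sketch, not an implementation proposal; all names are invented):

```python
# Track which Dandelion peer each stem transaction was forwarded to, and
# route a dependent stem transaction to the same peer one of its parents
# went to, even if the periodic route shuffle has since picked a new peer.
# This avoids creating a stem orphan at the next hop.

class StemRouter:
    def __init__(self, route_peer):
        self.route_peer = route_peer  # current Dandelion destination
        self.forwarded_to = {}        # txid -> peer that received the stem tx

    def shuffle(self, new_peer):
        """Periodic Dandelion route shuffle; forwarding history is kept."""
        self.route_peer = new_peer

    def pick_peer(self, txid, parents):
        """Choose a peer for a stem tx, preferring its parents' peer."""
        peer = next((self.forwarded_to[p] for p in parents
                     if p in self.forwarded_to), self.route_peer)
        self.forwarded_to[txid] = peer
        return peer

r = StemRouter("peer_a")
print(r.pick_peer("tx1", []))       # peer_a
r.shuffle("peer_b")
print(r.pick_peer("tx2", ["tx1"]))  # peer_a (follows its parent)
print(r.pick_peer("tx3", []))       # peer_b
```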
Cheers,

--
Pieter