From: Dave Hudson <dave@hashingit.com>
Date: Wed, 9 Mar 2016 23:24:15 +0000
To: Bob McElrath
In-Reply-To: <20160309202135.GC4388@mcelrath.org>
Cc: bitcoin-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

> On 9 Mar 2016, at 20:21, Bob McElrath wrote:
>
> Dave Hudson [dave@hashingit.com] wrote:
>> A damping-based design would seem like the obvious choice (I can think of a
>> few variations on a theme here, but most are found in the realms of control
>> theory somewhere). The problem, though, is working out a timeframe over
>> which to run the derivative calculations.
>
> From a measurement theory perspective this is straightforward. Each block is
> a measurement, and error propagation can be performed to derive an error on
> the derivatives.

Sure, but I think there are two problems:

1) My guess is that errors over anything but a long period are probably too
large to be very useful.

2) We don't have a strong notion of time that is part of the consensus. Sure,
blocks have timestamps, but they're very loosely controlled (a timestamp can't
be more than 2 hours ahead of what any validating node thinks the time might
be). Difficulty can't be calculated based on anything that's not part of the
consensus data.

> The statistical theory of Bitcoin's block timing is known as a Poisson Point
> Process: https://en.wikipedia.org/wiki/Poisson_point_process or temporal
> point process. If you google those plus "estimation" you'll find a metric
> shit-ton of literature on how to handle this.
Strictly it's a non-homogeneous Poisson process, but I'm pretty familiar with
the concept (Google threw one of my own blog posts back at me:
http://hashingit.com/analysis/27-hash-rate-headaches, but I actually prefer
this one: http://hashingit.com/analysis/30-finding-2016-blocks because most
people seem to find it easier to visualize).

>> The problem is the measurement of the hash rate, which is pretty inaccurate
>> at best because even 2016 events isn't really enough (with a completely
>> constant hash rate running indefinitely we'd see difficulty swings of up to
>> +/- 5% even with the current algorithm). In order to meaningfully react to a
>> major loss of hashing we'd still need to be considering a window of probably
>> 2 weeks.
>
> You don't want to assume it's constant in order to get a better measurement.
> The assumption is clearly false. But errors can be calculated, and
> retargeting can take errors into account, because no matter what we'll always
> be dealing with a finite sample.

Agreed. It's a thought experiment I ran in May 2014
(http://hashingit.com/analysis/28-reach-for-the-ear-defenders). I found that
many people's intuition is that there would be little or no difficulty change
in such a scenario, but that intuition isn't reliable. Given a static hash
rate, the NHPP behaviour introduces a surprisingly large amount of noise
(often much larger than any signal over a period of even weeks). Measurements
over even a few days have so much noise that they're practically unusable. I
just realized that, unlike some of my other sims, this one didn't make it to
GitHub; I'll fix that later this week.

Cheers,
Dave
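P.S. For anyone who wants to run the numbers before the sim reaches GitHub,
here's a minimal sketch along the same lines (the constants and trial count are
illustrative choices, not taken from the actual sim). It first prints the
~1/sqrt(N) relative standard error that an ideal homogeneous Poisson process
predicts for a hash-rate estimate over various measurement windows, then
simulates 2016-block retarget windows with a perfectly constant hash rate and
ideal exponential block intervals to show the resulting spread of difficulty
adjustments:

```python
import math
import random
import statistics

TARGET_SPACING = 600.0   # target seconds per block
RETARGET_BLOCKS = 2016   # blocks per difficulty retarget window

# Predicted relative standard error of a hash-rate estimate from N
# block intervals: ~1/sqrt(N) for an ideal Poisson process.
for blocks, label in [(144, "1 day"), (1008, "1 week"), (2016, "2 weeks")]:
    print(f"{label:>8}: ~{100.0 / math.sqrt(blocks):.1f}% standard error")

def retarget_ratio(rng: random.Random) -> float:
    """One simulated retarget with a perfectly constant hash rate.

    Block intervals are exponential with a 600 s mean; the difficulty
    adjustment is the ratio of expected to observed window time.
    """
    observed = sum(rng.expovariate(1.0 / TARGET_SPACING)
                   for _ in range(RETARGET_BLOCKS))
    return (RETARGET_BLOCKS * TARGET_SPACING) / observed

rng = random.Random(2016)
ratios = [retarget_ratio(rng) for _ in range(1000)]
print(f"simulated adjustment std dev: {100 * statistics.pstdev(ratios):.2f}%")
print(f"extreme swings: {100 * (min(ratios) - 1):+.1f}% to "
      f"{100 * (max(ratios) - 1):+.1f}%")
```

Even with no hash rate change at all, the simulated adjustments come out with a
standard deviation of roughly 2%, and the occasional window swings past the
+/- 5% mark, which is the noise floor I was describing above.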
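P.P.S. On the "loosely controlled timestamps" point, the two consensus-level
checks are easy to make concrete. A simplified sketch (the function and its
arguments are my own framing; the rules themselves are that a block's timestamp
must exceed the median of the previous 11 blocks' timestamps and be no more
than 2 hours ahead of the node's adjusted time):

```python
from statistics import median

MAX_FUTURE_BLOCK_TIME = 2 * 60 * 60  # 2 hours, in seconds

def timestamp_acceptable(new_time: int, prev_11_times: list[int],
                         node_adjusted_time: int) -> bool:
    """Simplified consensus timestamp checks: the new block's timestamp
    must be strictly greater than the median of the previous 11 block
    timestamps, and at most 2 hours ahead of the validating node's
    notion of the current time."""
    return (new_time > median(prev_11_times)
            and new_time <= node_adjusted_time + MAX_FUTURE_BLOCK_TIME)
```

That leaves a miner with hours of slack in either direction per block, which is
why per-block timestamps make such a poor clock for derivative calculations.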