From: Gavin Andresen
To: Mark Friedenbach
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Date: Thu, 5 Nov 2015 09:23:44 +0000
Subject: Re: [bitcoin-dev] A validation-cost metric for aggregate limits and fee determination
I have several thoughts:

Weighing CPU validation cost should be reasonably straightforward -- just pick some arbitrary, commonly available, recent hardware, benchmark the two things that take the bulk of validation time (hashing to create the signature hash, then ECDSA validation), and weigh the terms in the validation-cost equation accordingly (e.g. if hashing X bytes of data takes the same amount of CPU time as one libsecp256k1 validation, count the CPU cost of an OP_CHECKSIG as 1 + actual_bytes_hashed/X).

But how should bandwidth cost be counted? There isn't an obvious "Y GB of bandwidth per month equals 1 ECDSA validation." We need to find common units for the terms in the validation-cost equation for it to make sense; otherwise we're adding apples and oranges.

I think the only units that will work are "percentage of the maximum validation ability of some reference hardware running with a network connection of some reference bandwidth."

For example, imagine the reference is a typical home computer being sold today, running with some multiple or fraction of the average global broadband connection speed of 5 Mbps. The CPU cost to validate a block can then be expressed as a percentage of maximum capacity, as can bandwidth -- hooray, two metrics with the same units, so they can be added up. If the result is less than 100%, the block is valid -- it can be received and validated in a reasonable amount of time.

Rolling in UTXO growth is harder, for two reasons:
1) UTXO changes per block can be negative or positive, unlike bandwidth/CPU costs.
2) It is not clear how to choose or benchmark "reference UTXO growth."

(1) could be finessed by just treating UTXO shrinkage as zero.
(2) could just be decided by picking a reasonable growth number.
Since we want the UTXO set to fit into main memory, something a bit below the long-ish-term price/performance trend of main memory would be a good target.

So, starting with that growth rate and an initial UTXO size in bytes, divide by the number of blocks in a year to get a maximum UTXO growth in bytes per block.

When validating a block, take the actual UTXO growth, express it as a percentage of the maximum allowed (make it zero if it is negative), and combine it with the CPU and bandwidth percentages. If the total is less than 100%, the block is valid. Otherwise, it is invalid.

----------------

Now... with all of that worked through, I'm not 100% sure it solves the "do miners or wallets have to solve a bin-packing problem to determine which transactions to put into their blocks or what fees to attach" question.

I think it mostly works out -- instead of fee-per-kilobyte, it would be fee-per-validation-cost (which is in the weird units "fraction of 100% validation cost").

But the UTXO term might be a problem -- transactions that create more UTXOs than they spend might end up being costly. I'm traveling right now; perhaps somebody could pick some arbitrary reference points and try to get a rough idea of what different transactions might pay in fees (e.g. if a one-input-two-output transaction had a cost of X, a two-input-one-output transaction would have a cost of X/something).

I'm not convinced that a single validation-cost metric is the best approach -- it might be better to break the cost into three (UTXO growth, CPU, and bandwidth) and just let miners set reasonable transaction-selection policies that keep each of the three under whatever cap is imposed on it. If a miner comes up with a clever algorithm that lets them pack in more transactions and get more fees, good for them!

But I do like the simplicity of a single validation cost metric.

--
Gavin Andresen
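The combined check described above can be sketched concretely. All the reference constants below are made-up placeholders, not benchmarks -- the real values would come from measuring the chosen reference hardware and bandwidth. The structure is the point: each resource is expressed as a fraction of a per-block reference capacity, UTXO shrinkage is floored at zero, and the block passes if the sum is below 100%.

```python
# Illustrative sketch of a single validation-cost metric.
# Reference capacities per 600-second block interval (assumed values):
REF_SIG_CHECKS = 80_000         # libsecp256k1 validations the reference CPU can do
HASH_BYTES_PER_SIG = 500_000    # bytes hashed in the time of one signature check
REF_BANDWIDTH_BYTES = int(5e6 / 8 * 600)   # 5 Mbps over one block interval
MAX_UTXO_GROWTH_BYTES = 1_000_000          # allowed UTXO-set growth per block

def validation_cost(sig_checks, bytes_hashed, block_bytes, utxo_delta_bytes):
    """Combine CPU, bandwidth, and UTXO growth as fractions of the
    reference capacities; the terms now share units and can be summed."""
    # CPU: convert bytes hashed into signature-check equivalents, then
    # express the total as a fraction of the reference CPU capacity.
    cpu = (sig_checks + bytes_hashed / HASH_BYTES_PER_SIG) / REF_SIG_CHECKS
    # Bandwidth: fraction of what the reference connection can move per block.
    bandwidth = block_bytes / REF_BANDWIDTH_BYTES
    # UTXO: shrinkage is treated as zero, growth as a fraction of the cap.
    utxo = max(utxo_delta_bytes, 0) / MAX_UTXO_GROWTH_BYTES
    return cpu + bandwidth + utxo

def block_valid(sig_checks, bytes_hashed, block_bytes, utxo_delta_bytes):
    # A block is acceptable when its combined cost is under 100%.
    return validation_cost(sig_checks, bytes_hashed, block_bytes, utxo_delta_bytes) < 1.0
```

With these placeholder constants, an ordinary block (a few thousand signature checks, a megabyte of data, modest UTXO growth) sums to a few percent, while a block that exhausts any single resource pushes the total past 100% and is rejected.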