From: Lloyd Fournier
Date: Wed, 25 Mar 2020 00:00:45 +1100
To: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: [bitcoin-dev] Mitigating Differential Power Analysis in BIP-340

Hi List,

I felt this topic deserved its own thread, but it follows on from the mailing list post [2] announcing a new PR [1] that changes BIP-340 in several ways, including adding random auxiliary data into the nonce derivation function. Rather than hashing the randomness with the secret key and message etc., the randomness is hashed, then XOR'd (^) with the secret key, and the result is hashed like so to determine the secret nonce k:

(1) k = H_derive(sec_key ^ H_aux(rand) || pub_key_x || message)

The claim made in the mailing list post is that this is more secure against "differential power analysis" (DPA) attacks than just doing the simpler and more efficient:

(2) k = H_derive(sec_key || rand || pub_key_x || message)

The TL;DR here is that I don't think this is the case.
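For concreteness, the two candidate derivations can be sketched as follows. This is purely my own illustration: plain SHA256 stands in for the tagged hashes H_derive and H_aux that BIP-340 actually specifies, and the function names are mine.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in for the BIP-340 tagged hashes; plain SHA256 for illustration only.
    return hashlib.sha256(data).digest()

def xor32(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two 32-byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def derive_nonce_v1(sec_key: bytes, rand: bytes, pub_key_x: bytes, message: bytes) -> bytes:
    # (1) k = H_derive(sec_key ^ H_aux(rand) || pub_key_x || message)
    return H(xor32(sec_key, H(rand)) + pub_key_x + message)

def derive_nonce_v2(sec_key: bytes, rand: bytes, pub_key_x: bytes, message: bytes) -> bytes:
    # (2) k = H_derive(sec_key || rand || pub_key_x || message)
    return H(sec_key + rand + pub_key_x + message)
```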
There was no citation for this claim, so I did some digging and found two papers that seemed like they might be the origin of the idea [3,4] (I had no idea about these attacks before). A relatively easy-to-understand explanation of DPA attacks is given in [3]:

> The fundamental principle behind all DPA attacks is that at some point in
> an algorithm's execution, a function f exists that combines a fixed secret
> value with a variable which an attacker knows. An attacker can form
> hypotheses about the fixed secret value, and compute the corresponding
> output values of f by using an appropriate leakage model, such as the
> Hamming Distance model. The attacker can then use the acquired power
> consumption traces to verify her hypotheses, by partitioning the
> acquisitions or using Pearson's correlation coefficient. These side-channel
> analysis attacks are aided by knowledge of details of the implementation
> under attack. Moreover, these attacks can be used to validate hypotheses
> about implementation details. In subsequent sections, these side-channel
> analysis attacks are referred to as DPA attacks.

For example, in the original BIP-340 proposal the nonce derivation was vulnerable to DPA attacks, as the nonce was derived simply by doing H_derive(sec_key || message). Since the message is known to the attacker and variable (even if it is not controlled by her), the SHA256 compression function run on (sec_key || message) may leak information about sec_key. It is crucial to understand that just hashing sec_key before passing it into H_derive does *not* fix the problem. Although the attacker would be unable to find sec_key directly, they could learn H(sec_key), and with that know all the inputs into H_derive, and therefore get the value of the secret nonce k, and from there extract the secret key from any signature made with this nonce derivation algorithm.
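To make the leakage model concrete, here is a toy, noiseless version of the attack sketched above. This is entirely my own simplification: a real DPA attack correlates many noisy power traces (e.g. via Pearson's correlation coefficient) rather than matching exact Hamming distances, and targets intermediate values inside the compression function rather than a bare byte.

```python
import random

def hd(a: int, b: int) -> int:
    # Hamming distance = popcount of the XOR.
    return bin(a ^ b).count("1")

def recover_secret_byte(known_inputs, observed_leaks):
    # The attacker hypothesises each possible secret byte and keeps the one
    # whose predicted leakage HD(guess, v) matches every observation.
    for guess in range(256):
        if all(hd(guess, v) == leak for v, leak in zip(known_inputs, observed_leaks)):
            return guess
    return None

random.seed(1)
secret = 0xA7
known = [random.randrange(256) for _ in range(50)]   # variable, attacker-known inputs
leaks = [hd(secret, v) for v in known]               # what the power trace "reveals"
assert recover_secret_byte(known, leaks) == secret
```

The point is that the leak HD(secret, v) is only useful because v is known; if v were secret and random, as in Claim A below, the same measurements would carry no usable information about the secret.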
The key thing I want to argue with this post is that there is no advantage of (1) over (2) against DPA attacks, at least not given my understanding of these papers. The attack in [3] works by assuming that operations in the compression function leak the "Hamming distance" [5] (HD) between the static secret value and the variable public value being combined with it. In practice the attack involves many particulars about SHA256, but that is, at a high level, the right way to simplify it I think. The way the paper suggests to fix the problem is to mask the secret data with secret randomness before each sensitive operation and then strip off the secret randomness afterwards. This seems to be the inspiration for the structure of the updated BIP-340 (1); however, I don't believe that it provides any extra protection over (2). My argument is as follows:

Claim A: If the randomness used during signing is kept secret from the attacker, then (2) is secure against DPA.

Since SHA256 has 64-byte blocks, the hash H_derive(sec_key || rand || pub_key_x || message) will be split up into two 64-byte blocks: one containing secret data (sec_key || rand) and the other containing data known to the attacker (pub_key_x || message). The compression function will run on (sec_key || rand), but DPA will be useless here because HD(sec_key, rand) contains no information about sec_key, since rand is also secret. The output of the compression function on the first block will be secret but *variable*, so the intermediate hash state will not reveal useful information when compressed with the second block.

Then I thought perhaps (1) is more robust in the case where the randomness is known by the attacker (maybe the attacker can physically modify the chipset to control the RNG). For this to be true, we'd have to assume that computing sec_key ^ H_aux(rand) isn't itself vulnerable to DPA (the value H_aux(rand) being XOR'd with sec_key is under the control of the attacker).
Even under this assumption it turned out not to be the case:

Claim B: If the randomness used during signing is known to the attacker, then (1) is not secure against DPA.

In (1) there are 96 bytes to be hashed and therefore two SHA256 blocks: (sec_key ^ H_aux(rand) || pub_key_x) and (message). During the first compression function call the attacker gets:

HD(sec_key ^ H_aux(rand), pub_key_x)

which is equal to the following, as applying the same XOR to both sides does not change the HD:

HD(sec_key, H_aux(rand) ^ pub_key_x)

Since the LHS is secret and static, and the RHS is variable and known to the adversary, we have a successful DPA attack -- the attacker will learn sec_key after enough runs.

Maybe it's just a general rule that if you can't produce randomness hidden from the attacker then no defence against DPA is possible, but I wanted to check this anyway.

My conclusion from this is that (2) is preferable to (1) because it is simpler and more efficient (it has one less SHA256 compression run) and no less secure against DPA (in this model). This is not really my area, so perhaps there is a justification for (1) over (2) that I don't understand yet. If so, someone needs to write it down! If not, then I think changing the proposal to (2) is preferable.

Cheers,

LL

[1] https://github.com/bitcoin/bips/pull/893
[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017639.html
[3] http://www.oocities.org/mike.tunstall/papers/MTMM07.pdf
[4] https://www.cryptoexperts.com/sbelaid/articleHMAC.pdf
[5] https://en.wikipedia.org/wiki/Hamming_distance
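P.S. The XOR step in Claim B relies on the identity HD(a ^ c, b) = HD(a, b ^ c), which holds because HD(x, y) = popcount(x ^ y) and XOR is associative and commutative, so both sides equal popcount(a ^ b ^ c). A quick numerical check (my own, on random 256-bit values):

```python
import random

def hd(a: int, b: int) -> int:
    # Hamming distance = popcount of the XOR.
    return bin(a ^ b).count("1")

random.seed(0)
for _ in range(1000):
    a, b, c = (random.getrandbits(256) for _ in range(3))
    # Applying the same XOR mask c to both arguments leaves the HD unchanged.
    assert hd(a ^ c, b) == hd(a, b ^ c)
```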