Date: Sat, 30 Apr 2022 06:14:45 +0000
To: Billy Tetrud, Bitcoin Protocol Discussion
From: ZmnSCPxj
Reply-To: ZmnSCPxj
Subject: Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

Good morning Billy,

> @Zman
> > if two people are perfectly rational and start from the same information, they *will* agree
> I take issue with this. I view the word "rational" to mean basically logical. Someone is rational if they advocate for things that are best for them. Two humans are not the same people. They have different circumstances and, as a result, different goals. Two actors with different goals will inevitably have things they rationally and logically disagree about. There is no universal rationality. Even an AI from outside space and time is incredibly likely to experience at least some value drift from its peers.

Note that "the goal of this thing" is part of the information that both "start from" here.

Even if you and I have different goals, if we both think about "given this goal, and these facts, is X the best solution available?" we will both agree, though our goals might not be the same as each other's, or the same as the "this goal" in the question.

What is material is simply that the laws of logic are universal: if you include the goal itself as part of the question, you will reach the same conclusion --- but you may refuse to act on it (and even oppose it) because the goal is not your own goal.

E.g. "What is the best way to kill a person without getting caught?" will probably have us both come to the same broad conclusion, but I doubt either of us has a goal or sub-goal to kill a person.
That is: if you are perfectly rational, you can certainly imagine a "what if" where your goal is different from your current goal, and figure out what you would do ***if*** that were your goal instead.

Is that better now?

> > 3. Can we actually have the goals of all humans discussing this topic all laid out, *accurately*?
> I think this would be a very useful exercise to do on a regular basis. This conversation is a good example, but conversations like this are rare. I tried to discuss some goals we might want bitcoin to have in a paper I wrote about throughput bottlenecks. Coming to a consensus around goals, or at the very least identifying various competing groupings of goals, would be quite useful to streamline conversations and to more effectively share ideas.

Using a futures market has the attractive property that, since money is often an instrumental sub-goal toward achieving many of your REAL goals, you can get reasonably good information on the goals of people without their having to actually reveal those goals.

Also, irrationality on the market tends to be punished over time, and a human who achieves better-than-human rationality can gain quite a lot of funds on the market, thus automatically re-weighting their thoughts higher.

However, persistent irrationalities embedded in the design of the human mind will still be difficult to break (it is like a program attempting to escape a virtual machine).

And an uninformed market is still going to behave pretty much randomly.

Regards,
ZmnSCPxj