From: Billy Tetrud
Date: Sun, 1 May 2022 17:41:44 -0500
To: ZmnSCPxj
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks
> if you are perfectly rational, you can certainly imagine a "what if" where your goal is different from your current goal and figure out what you would do ***if*** that were your goal instead.

I see what you're saying, and I'm a lot more on board with that. I still think "rational" can't mean "perfect" - "perfectly rational" is not the same as "you magically get to the optimal answer" - but my line of thinking here is more pedantic than my previous contention. I will agree that for a given specific objective goal (one that ignores other goals), there is an objective set of answers that any logical person should eventually be able to agree on. Of course, if there's any subjectivity in the goal, then two people discussing the goal will really each be discussing slightly different goals, which breaks the premise. So for alignment to happen, the goal in question needs to be specific enough to remove any significant subjectivity.

> better-than-human rationality

I like to think of rationality in the following way. Any economic actor is a being that has goals they want to maximize and tools at their disposal to analyze and affect their world. A rational actor is one that attempts to use their tools to the best of their ability to maximize their goals. Perhaps "goals" is a misleading word here, since it implies something that can be achieved, whereas I really mean a set of weighted metrics that can hypothetically always be improved upon. But in any case, a human starts with goals built into their genetics, which in turn build themselves into the structure of their body. The tools a human has are also their body and brain. The brain is not a perfect tool, and neither is the rest of the body. However, humans use what they have to make decisions and act on their world. The goals a human has evolve as they have experiences in the world (which end up physically changing their brain). In this sense every human, and really every possible actor, must be a rational actor. They're all doing the best they can, even if the tools at their disposal are very suboptimal for maximizing their underlying goals. What more can you ask of a rational actor than to use the tools they have to achieve their goals?
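To make that "weighted metrics" framing concrete, here's a little sketch. The metric names, weights, and numbers are purely made-up illustrations, not anything measured or proposed in this thread:

    # A minimal sketch of "goals as weighted metrics": an actor's objective
    # is a weighted score over metrics that can always be improved, not a
    # binary achievement. All names and numbers here are hypothetical.

    def weighted_score(metrics: dict, weights: dict) -> float:
        """Score a world-state as the weighted sum of its metrics."""
        return sum(weights[name] * value for name, value in metrics.items())

    # Two actors valuing the same world-state differently: rationality, in
    # this framing, is just maximizing one's own score, whatever the weights.
    world   = {"security": 0.8, "accessibility": 0.5, "throughput": 0.3}
    actor_a = {"security": 0.7, "accessibility": 0.2, "throughput": 0.1}
    actor_b = {"security": 0.2, "accessibility": 0.2, "throughput": 0.6}

    print(weighted_score(world, actor_a))  # 0.69 -> actor A's valuation
    print(weighted_score(world, actor_b))  # 0.44 -> actor B's valuation

Neither actor is "more rational" here; they simply weigh the same facts against different goals.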
So I don't think anyone is more or less "rational" than anyone else. They just have different goals and different levels of ability to maximize those goals. In my definition above, the goals are completely arbitrary. They don't have to be anything in particular. A person could have the goal of maximizing the number of paper clips in the world, at all other costs. This would almost certainly be "bad" for that person and "bad" for the world, but if that's really what their goals are, then that "badness" is a subjective judgment you and I place on the goal because our goals are completely different from it. To the being with that goal, it is a totally perfect goal.

The idea that someone can be "more rational" than someone else boils everything down to one dimension. In reality, everyone has different skills and proficiencies. In a futures market, you might be better at predicting the price of salmon but quite bad at predicting human population changes over time. Does this mean you're "more rational" about salmon but "less rational" about how human populations change? I would say a better word for this is proficiency, rather than rationality.

</digression>

> a futures market

A futures market for predictions is an interesting idea. I haven't heard of such a thing being done other than in little experiments. Are you suggesting we use one to help make decisions about bitcoin? One issue is that the questions a futures market answers have to, like my conclusion in the paragraph above, be completely objective. So a futures market can't answer the question "what's the best way to design covenants?", though it could answer the question "will CTV be activated by 2024?". As a consequence, I don't think a futures market could help much in formulating appropriate goals for bitcoin. That would need to be hashed out by making a lot of compromises amongst everyone's various subjective opinions about what is best.
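To illustrate the mechanism on a binary question like that, here's a little Python sketch of a market maker using Hanson's logarithmic market scoring rule (LMSR). The liquidity parameter and the trade below are made-up numbers, just to show how a trade moves the market's probability estimate; this isn't a proposal for an actual market:

    import math

    # Minimal sketch of a binary prediction market using Hanson's
    # logarithmic market scoring rule (LMSR). The market maker tracks
    # outstanding YES/NO shares; the instantaneous YES price is the
    # market's probability estimate. Parameter b and trades are hypothetical.

    class LMSRMarket:
        def __init__(self, b: float = 100.0):
            self.b = b        # liquidity: higher b = prices move more slowly
            self.q_yes = 0.0  # outstanding YES shares
            self.q_no = 0.0   # outstanding NO shares

        def cost(self, q_yes: float, q_no: float) -> float:
            """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
            return self.b * math.log(math.exp(q_yes / self.b)
                                     + math.exp(q_no / self.b))

        def price_yes(self) -> float:
            """Current market probability that the question resolves YES."""
            ey = math.exp(self.q_yes / self.b)
            en = math.exp(self.q_no / self.b)
            return ey / (ey + en)

        def buy_yes(self, shares: float) -> float:
            """Buy YES shares; returns the amount the trader pays."""
            old = self.cost(self.q_yes, self.q_no)
            self.q_yes += shares
            return self.cost(self.q_yes, self.q_no) - old

    # A trader who believes YES is underpriced buys, moving the price toward
    # their belief; consistently wrong traders lose funds and market weight.
    m = LMSRMarket(b=100.0)
    print(f"initial P(yes) = {m.price_yes():.2f}")   # 0.50
    paid = m.buy_yes(80.0)
    print(f"after buying 80 shares (cost {paid:.1f}), "
          f"P(yes) = {m.price_yes():.2f}")           # ~0.69

Note that the market only ever prices the objective resolution of the question; the subjective question of whether activation is *good* never enters into it.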
And I think that's really what I'm suggesting here: that we bitcoiners discuss what the objective goals of bitcoin should be, or at least what bounds there should be on those goals. Once we have these objective goals, we can be aligned on how to appropriately pursue them. It wouldn't avoid the gnashing of teeth needed to hash out the subjective parts of our opinions in getting to those goals, but it could avoid much gnashing of teeth in the other half of the conversation: how to achieve the goals we have reached consensus on.

Eg, should a goal of bitcoin be that 50% of the world's population should require spending no more than 1% of their income to be able to run a full node? Were we to decide on something akin to that, it would at least be a question with an objective truth value. Even if we couldn't feasibly confirm with 100% certainty whether we have achieved it, we could probably confirm it with some acceptable level of certainty below 100%.
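Just to show how a goal like that could be checked in principle, here's a sketch; the income distribution, node cost, and every number below are made-up placeholders, not real measurements:

    import math

    # Sketch of checking "at least 50% of people can run a full node for
    # no more than 1% of their income" against a hypothetical lognormal
    # world income distribution. All figures are illustrative placeholders.

    def fraction_who_can_afford(node_cost_per_year: float,
                                income_budget: float,
                                median_income: float,
                                sigma: float) -> float:
        """Fraction of a lognormal income distribution for whom
        income * income_budget covers the annual node cost."""
        # Need income >= cost / budget; survival function of the lognormal
        # computed via the complementary error function.
        threshold = node_cost_per_year / income_budget
        z = (math.log(threshold) - math.log(median_income)) / (sigma * math.sqrt(2))
        return 0.5 * math.erfc(z)   # P(income >= threshold)

    # Hypothetical numbers: $200/year to run a node, 1% income budget,
    # $5,000 median world income, lognormal spread sigma = 1.0.
    frac = fraction_who_can_afford(200.0, 0.01, 5000.0, 1.0)
    print(f"fraction meeting the goal: {frac:.1%}")  # goal met if >= 50%

Real income data and node-cost estimates would carry uncertainty, which is exactly the "acceptable level of certainty below 100%" above.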
On Sat, Apr 30, 2022 at 1:14 AM ZmnSCPxj <ZmnSCPxj@protonmail.com> wrote:

> Good morning Billy,
>
> > @Zman
> > > if two people are perfectly rational and start from the same information, they *will* agree
> > I take issue with this. I view the word "rational" to mean basically logical. Someone is rational if they advocate for things that are best for them. Two humans are not the same people. They have different circumstances and as a result different goals. Two actors with different goals will inevitably have things they rationally and logically disagree about. There is no universal rationality. Even an AI from outside space and time is incredibly likely to experience at least some value drift from its peers.
>
> Note that "the goal of this thing" is part of the information both "start from" here.
>
> Even if you and I have different goals, if we both think about "given this goal, and these facts, is X the best solution available?" we will both agree, though our goals might not be the same as each other, or the same as "this goal" in the sentence.
> What is material is simply that the laws of logic are universal, and if you include the goal itself as part of the question, you will reach the same conclusion --- but refuse to act on it (and even oppose it) because the goal is not your own goal.
>
> E.g. "What is the best way to kill a person without getting caught?" will probably have us both come to the same broad conclusion, but I doubt either of us has a goal or sub-goal to kill a person.
> That is: if you are perfectly rational, you can certainly imagine a "what if" where your goal is different from your current goal and figure out what you would do ***if*** that were your goal instead.
>
> Is that better now?
>
> > > 3. Can we actually have the goals of all humans discussing this topic all laid out, *accurately*?
> > I think this would be a very useful exercise to do on a regular basis. This conversation is a good example, but conversations like this are rare. I tried to discuss some goals we might want bitcoin to have in a paper I wrote about throughput bottlenecks. Coming to a consensus around goals, or at very least identifying various competing groupings of goals, would be quite useful to streamline conversations and to more effectively share ideas.
>
> Using a futures market has the attractive property that, since money is often an instrumental sub-goal to achieve many of your REAL goals, you can get reasonably good information on the goals of people without them having to actually reveal their actual goals.
> Also, irrationality on the market tends to be punished over time, and a human who achieves better-than-human rationality can gain quite a lot of funds on the market, thus automatically re-weighting their thoughts higher.
>
> However, persistent irrationalities embedded in the design of the human mind will still be difficult to break (it is like a program attempting to escape a virtual machine).
> And an uninformed market is still going to behave pretty much randomly.
>
> Regards,
> ZmnSCPxj