From: Billy Tetrud <billy.tetrud@gmail.com>
Date: Sun, 1 May 2022 17:41:44 -0500
To: ZmnSCPxj <ZmnSCPxj@protonmail.com>
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

>  if you are perfectly rational, you can certainly imagine a "what if"
where your goal is different from your current goal and figure out what you
would do ***if*** that were your goal instead.

I see what you're saying, and I'm a lot more on board with that. I still
think "rational" can't mean "perfect" - "perfectly rational" is not the same
as "you magically get to the optimal answer" - but that line of thinking is
more pedantic than my previous contention. I will agree that for a given
specific objective goal (one that ignores other goals), there is an objective
set of answers that any logical person should eventually be able to agree on.
Of course, if there's any subjectivity in the goal, then two people
discussing the goal are really discussing slightly different goals, which
breaks the premise. So for alignment to happen, the goal in question needs to
be specific enough to remove any significant subjectivity.

> better-than-human rationality

I like to think of rationality in the following way. Any economic actor is a
being that has goals they want to maximize, and tools at their disposal to
analyze and affect their world. A rational actor is one that attempts to use
their tools to the best of their ability to maximize their goals. Perhaps
"goals" is a misleading word here, since it implies something that can be
achieved, whereas I really mean a set of weighted metrics that can
hypothetically always be improved upon. In any case, a human starts with
goals built into their genetics, which in turn build themselves into the
structure of their body. The tools a human has are also their body and their
brain. The brain is not a perfect tool, and neither is the rest of the body.
However, humans use what they have to make decisions and act on their world.
The goals a human has evolve as they have experiences in the world (which end
up physically changing their brain). In this sense, every human, and every
possible actor really, must be a rational actor. They're all doing the best
they can, even if the tools at their disposal are very suboptimal for
maximizing their underlying goals. What more can you ask of a rational actor
than to use the tools they have to achieve their goals?
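
To illustrate what I mean by weighted metrics rather than an achievable end
state, here's a toy sketch in Python (the metrics, weights, and numbers are
all made up, purely for illustration): a "rational" actor simply picks, among
whatever actions its tools make available, the one that scores best under its
own weights, however crude those tools and estimates are.

    # Toy model of an actor whose "goals" are weighted metrics rather than a
    # single achievable end state (all names and numbers are invented).
    weights = {"metric_a": 0.5, "metric_b": 0.3, "metric_c": 0.2}

    # The actor's *estimated* effect of each available action on the metrics;
    # the estimates come from imperfect tools, so they may be badly wrong.
    actions = {
        "action_1": {"metric_a": 0.0,  "metric_b": 0.0, "metric_c": 0.0},
        "action_2": {"metric_a": -0.1, "metric_b": 0.6, "metric_c": 0.2},
        "action_3": {"metric_a": 0.4,  "metric_b": 0.1, "metric_c": -0.2},
    }

    def score(effects):
        # Weighted sum: there is no "done", only higher or lower scores.
        return sum(weights[m] * v for m, v in effects.items())

    best = max(actions, key=lambda name: score(actions[name]))
    print(best)  # rational relative to *these* weights, whatever they are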

So I don't think anyone is more or less "rational" than anyone else. They
just have different goals and different levels of ability to maximize those
goals. In my definition above, the goals are completely arbitrary. They
don't have to be anything in particular. A person could have the goal of
maximizing the number of paper clips in the world, above all else. This
would almost certainly be "bad" for that person, and "bad" for the world,
but if that's really what their goals are, then that "badness" is a
subjectivity that you and I would be placing on that goal because our goals
are completely different from it. To the being with that goal, it is a
totally perfect goal.

The idea that someone can be "more rational" than someone else kind of
boils everything down to one dimension. In reality, everyone has their
different skills and proficiencies. In a futures market, you might be
better at predicting the price of salmon, but you might be quite bad at
predicting human population changes over time. Does this mean you're "more
rational" about salmon but "less rational" about how human populations
change? I would say a better way to describe this is proficiency, rather
than rationality.

</digression>

> a future market

A futures market for predictions is an interesting idea. I haven't really
heard of such a thing being done outside of small experiments. Are you
suggesting we use one to help make decisions about bitcoin? One issue is that
the questions a futures market answers have to, like my conclusion in the
paragraph above, be completely objective. So a futures market can't answer
the question "what's the best way to design covenants?", though it could
answer the question "will CTV be activated by 2024?". As a consequence, I
don't think a futures market could help much in the formulation of
appropriate goals for bitcoin. That would need to be hashed out by making a
lot of different compromises amongst everyone's various subjective opinions
about what is best.
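
To make the "completely objective" constraint concrete, here's a toy sketch
in Python (a simple parimutuel-style settlement with invented names and
stakes; not a description of any real market): payouts can only be computed
because the question resolves to an unambiguous yes or no.

    # Toy parimutuel settlement for a binary question like "will CTV be
    # activated by 2024?" (names and stakes are invented). A question like
    # "what's the best way to design covenants?" has no such resolution.
    def settle(yes_stakes, no_stakes, outcome_was_yes):
        if outcome_was_yes:
            winners, losers = yes_stakes, no_stakes
        else:
            winners, losers = no_stakes, yes_stakes
        losing_pool = sum(losers.values())
        winning_total = sum(winners.values())
        # Winners get their stake back plus a pro-rata share of the losing pool.
        return {name: stake + losing_pool * stake / winning_total
                for name, stake in winners.items()}

    yes_stakes = {"alice": 60.0, "bob": 40.0}
    no_stakes = {"carol": 50.0}
    print(settle(yes_stakes, no_stakes, outcome_was_yes=False))
    # {'carol': 150.0}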

And that's really what I'm suggesting here: that we bitcoiners discuss what
the objective goals of bitcoin should be, or at least what bounds there
should be on those goals. Once we have these objective goals, we can be
aligned on how to appropriately achieve them. It wouldn't avoid the gnashing
of teeth needed to hash out the subjective parts of our opinions in getting
to those goals, but it could avoid much gnashing of teeth in the other half
of the conversation: how to achieve the goals we have reached consensus on.

E.g., should a goal of bitcoin be that 50% of the world's population should
need to spend no more than 1% of their income to be able to run a full node?
Were we to decide on something akin to that, it would at least be a question
with an objective truth value. Even if we couldn't feasibly confirm with 100%
certainty whether we have achieved it, we could probably confirm it with some
acceptable level of certainty below 100%.
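
Just to show that a goal like this reduces to arithmetic once it's stated
precisely, here's a toy check in Python (the node cost and the income buckets
are entirely made-up placeholders, not real data):

    # Hypothetical check of "50% of the world's population can run a full
    # node for <= 1% of their income". All figures below are invented.
    NODE_COST_PER_YEAR = 150.0  # assumed USD/year (hardware + bandwidth)

    # (population in millions, typical annual income in USD) -- made-up buckets
    income_buckets = [
        (1000, 900),
        (2000, 2500),
        (2500, 7000),
        (1500, 20000),
        (1000, 45000),
    ]

    total_pop = sum(pop for pop, _ in income_buckets)
    ok_pop = sum(pop for pop, income in income_buckets
                 if NODE_COST_PER_YEAR <= 0.01 * income)
    print(f"{ok_pop / total_pop:.0%} of the modeled population meets the bar")
    # Whether the real-world share crosses 50% is then a factual question;
    # the remaining uncertainty is in the data, not in what the question means.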



On Sat, Apr 30, 2022 at 1:14 AM ZmnSCPxj <ZmnSCPxj@protonmail.com> wrote:

> Good morning Billy,
>
> > @Zman
> > > if two people are perfectly rational and start from the same
> information, they *will* agree
> > I take issue with this. I view the word "rational" to mean basically
> logical. Someone is rational if they advocate for things that are best for
> them. Two humans are not the same people. They have different circumstances
> and as a result different goals. Two actors with different goals will
> inevitably have things they rationally and logically disagree about. There
> is no universal rationality. Even an AI from outside space and time is
> incredibly likely to experience at least some value drift from its peers.
>
> Note that "the goal of this thing" is part of the information where both
> "start from" here.
>
> Even if you and I have different goals, if we both think about "given this
> goal, and these facts, is X the best solution available?" we will both
> agree, though our goals might not be the same as each other, or the same as
> "this goal" is in the sentence.
> What is material is simply that the laws of logic are universal and if you
> include the goal itself as part of the question, you will reach the same
> conclusion --- but refuse to act on it (and even oppose it) because the
> goal is not your own goal.
>
> E.g. "What is the best way to kill a person without getting caught?" will
> probably have us both come to the same broad conclusion, but I doubt either
> of us has a goal or sub-goal to kill a person.
> That is: if you are perfectly rational, you can certainly imagine a "what
> if" where your goal is different from your current goal and figure out what
> you would do ***if*** that were your goal instead.
>
> Is that better now?
>
> > > 3. Can we actually have the goals of all humans discussing this topic
> all laid out, *accurately*?
> > I think this would be a very useful exercise to do on a regular basis.
> This conversation is a good example, but conversations like this are rare.
> I tried to discuss some goals we might want bitcoin to have in a paper I
> wrote about throughput bottlenecks. Coming to a consensus around goals, or
> at very least identifying various competing groupings of goals would be
> quite useful to streamline conversations and to more effectively share
> ideas.
>
>
> Using a future market has the attractive property that, since money is
> often an instrumental sub-goal to achieve many of your REAL goals, you can
> get reasonably good information on the goals of people without them having
> to actually reveal their actual goals.
> Also, irrationality on the market tends to be punished over time, and a
> human who achieves better-than-human rationality can gain quite a lot of
> funds on the market, thus automatically re-weighing their thoughts higher.
>
> However, persistent irrationalities embedded in the design of the human
> mind will still be difficult to break (it is like a program attempting to
> escape a virtual machine).
> And an uninformed market is still going to behave pretty much randomly.
>
> Regards,
> ZmnSCPxj
>
