From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sat Dec 11 2004 - 22:24:01 MST
OK, my latest thought may actually qualify as the
weirdest argument ever to be posted on SL4 ;) I can't
see any flaw in my argument, but I just thought of it,
so I may be speaking nonsense here. Have a read and
see what you think. Obviously you need to make the
starting assumption that the many-worlds
interpretation (MWI) of quantum mechanics (QM) is true
for the argument to make sense.
Thinking about MWI of QM, it occurred to me that a
true altruist needs to consider the well-being of
sentients in all the alternative QM branches, not just
this particular branch.
Now... suppose that something bad were to happen to
leading FAI researchers like Eli and Ben. Say they
were both hit by trucks. Then I think it's fair to
say that the chances of a successful Singularity would
be somewhat reduced. But what would the situation in
the multiverse as a whole be if we lost Eli and Ben?
Well, assuming that the human brain is a classical
computer and doesn't use quantum indeterminacy (a
pretty reasonable assumption), it is likely that
the deaths of Eli and Ben would already be largely
determined by classical physics (at least in the QM
branches that diverge from this time forward). So if
we lost Eli and Ben in this QM branch, chances are
they would die in most of the other QM branches of the
multiverse as well. So if Eli and Ben were hit by
trucks here, and it was mostly classical physics at
work leading to their deaths (a likely assumption, as
mentioned), then they'd probably be dead in something
like 99% of all the other QM branches as well. It
would be bad news for the sentients in most other
branches of the multiverse.
I realized that there is a way to 're-distribute risk'
across the multiverse, so as to ensure that a minimum
fraction of alternative versions of Eli and Ben would
survive! As I mentioned, a true altruist has to
consider the well-being of sentients in all the
alternative branches. It would be bad news for most
sentients in the multiverse if leading A.I. researchers
were lost. Therefore altruist A.I. researchers should
follow my 'Quantum Insurance Policy' in order to
safeguard the alternative versions of themselves!
Here's how it works. The reason why the deaths of
leading A.I. researchers in this QM branch would cause
a problem across the multiverse (from this time
forward) is the assumption that largely classical
physics is at work in the human brain. So decisions
taken by the version of yourself here in this QM
branch are globally linked to decisions taken by all
the alternative versions of yourself across the
multiverse (in the time tracks that diverge from this
time forward). In short, given the reasonable
assumption that classical physics is largely at work
in your brain, if you do something dumb here, then most
of the alternative versions of yourself have done the
same dumb thing across the multiverse.
Here's how to safeguard some of the alternative
versions of yourself: simply base some of your
decisions on quantum random events. There are devices
that can easily generate quantum random numbers. For
instance, at the web-site
http://www.fourmilab.ch/hotbits/ you can get quantum
random numbers generated from radioactive decay in a
lab. Simply link some of your daily decisions to these
numbers. For instance, at the beginning of the day you
might draw up a table saying:
I'll take this route to work if I get a quantum
heads, and that route to work if I get a quantum
tails.
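To make this concrete, here is a minimal sketch in
Python of what linking a daily decision to a quantum
coin flip might look like. The HotBits URL and its
query parameters below are my assumption about the
interface (check the web-site for the real one), and
the route names are just placeholders:

    # Minimal sketch of the 'quantum coin flip' idea.
    # ASSUMPTION: the HotBits endpoint and its
    # 'nbytes'/'fmt' parameters are illustrative only;
    # consult http://www.fourmilab.ch/hotbits/ for the
    # actual interface.
    import urllib.request

    HOTBITS_URL = ("https://www.fourmilab.ch/cgi-bin/Hotbits"
                   "?nbytes=1&fmt=bin")

    def quantum_bit() -> int:
        """Fetch one byte of radioactively generated
        randomness and reduce it to a single bit
        (1 = 'quantum heads', 0 = 'quantum tails')."""
        with urllib.request.urlopen(HOTBITS_URL) as response:
            return response.read(1)[0] & 1

    def choose_route() -> str:
        """Heads -> one route to work, tails -> the other."""
        return "route A" if quantum_bit() else "route B"

    print("Today's route:", choose_route())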
See then what happens across the multiverse if leading
A.I. researchers start to use this strategy: the high
correlations between alternative versions of
themselves across the multiverse are broken. The
effect of this is to 're-distribute' risk across the
multiverse, which actually works to ensure that some
minimum fraction of your alternative selves are
shielded from bad things happening. For instance,
suppose Eliezer was hit by a truck while walking to
work.
Suppose he'd been linking the decision about which
route to walk to work to a 'quantum coin flip'. Then
half the alternative versions of himself would have
taken another route to work and avoided the truck. So
in 50% of QM branches he'd live on. Compare that to
the case where Eli's decision about which route to
walk to work was being made mostly according to
classical physics. If something bad happened to him,
he'd be dead in, say, 99% of QM branches. The effect of
the quantum decision making is to re-distribute risk
across the multiverse. Therefore the altruist
strategy has to be to deploy the 'quantum decisions'
scheme to break the classical physics symmetry across
the multiverse.
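Here is the arithmetic as a toy calculation (the 99%
classical correlation and the fair quantum coin are
the illustrative numbers from above, not measured
quantities):

    # Toy calculation of the 'risk re-distribution'
    # claim. Both probabilities are assumptions taken
    # from the illustration above.
    p_truck = 1.0          # the truck is on route A today
    classical_corr = 0.99  # branches making the same choice
    quantum_split = 0.50   # a fair quantum coin

    dead_classical = classical_corr * p_truck
    dead_quantum = quantum_split * p_truck

    print(f"classical: dead in ~{dead_classical:.0%} of branches")
    print(f"quantum:   dead in {dead_quantum:.0%} of branches")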
In fact, the scheme can be used to redistribute the
risk of Unfriendly A.I. across the multiverse. There
is a certain probability that leading A.I. researchers
will screw up and create Unfriendly A.I. Again, if
the human brain is largely operating off classical
physics, a dumb decision by an A.I. researcher in this
QM branch is largely correlated with the same dumb
decision by alternative versions of that researcher in
all the QM branches divergent from that time on. As
an example: let's say Ben Goertzel screwed up and
created an Unfriendly A.I. because of a dumb decision.
The same thing happens in most of the alternative
branches if his decisions were caused by classical
physics! But suppose Ben had been deploying my
'quantum insurance scheme', whereby he had been basing
some of his daily decisions on quantum random
numbers. Then there would be more variation in the
alternative versions of Ben across the Multiverse. At
least some versions of Ben would be less likely to
make that dumb decision, and there would be an assured
minimum percentage of QM branches avoiding Unfriendly
A.I.
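The 'assured minimum' generalizes: if the dangerous
decision point is split evenly over k quantum
outcomes, of which at most m lead to the dumb
decision, then at least (k - m)/k of the diverging
branches avoid Unfriendly A.I. A tiny sketch (the
function name and example numbers are mine, for
illustration):

    # Minimum fraction of branches avoiding the bad
    # outcome when a decision is split evenly over k
    # quantum outcomes, m of which are bad.
    def assured_safe_fraction(k: int, m: int) -> float:
        assert k > 0 and 0 <= m <= k
        return (k - m) / k

    # A fair quantum coin (k=2, one bad outcome)
    # guarantees half the branches dodge the decision.
    print(assured_safe_fraction(2, 1))   # 0.5
    print(assured_safe_fraction(10, 1))  # 0.9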
=====
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
-H.G.Wells
Please visit my web-sites.
Sci-Fi and Fantasy : http://www.prometheuscrack.com
Mathematics, Mind and Matter : http://www.riemannai.org