From: hal@finney.org
Date: Tue Aug 17 1999 - 18:26:13 MDT
Dan Fabulich, <daniel.fabulich@yale.edu>, writes:
[Useful info from MW FAQ]
> I have to agree with Moss, however, who notes that the presence of
> conventional detectors in Clark's quantum computer scenario would ruin the
> test, since conventional detectors would fail to completely revert when
> the quantum computer erased its memory.
I think John accidentally misstated the test; the detectors must not be
conventional, but rather must preserve quantum coherence.
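The point is easy to see in miniature. Here is a small sketch (just an illustration in numpy, not anything taken from John's or Deutsch's writeups) of a coherence-preserving "detector": a CNOT copies the system's branch into a memory qubit, which kills the interference; running the same CNOT again erases the record and the interference comes back. A conventional, irreversible detector could never be "un-run" this way.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I2 = np.eye(2)
    # CNOT with the system qubit as control and the memory qubit as target
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    system = H @ np.array([1.0, 0.0])   # system in (|0> + |1>)/sqrt(2)
    memory = np.array([1.0, 0.0])       # detector memory starts blank, |0>
    state = np.kron(system, memory)

    def p_system_0(psi):
        # interference test: Hadamard on the system, then P(system reads 0)
        psi = np.kron(H, I2) @ psi
        return abs(psi[0])**2 + abs(psi[1])**2

    recorded = CNOT @ state        # detector coherently records the branch
    print(p_system_0(recorded))    # 0.5 -- the record washes out interference
    erased = CNOT @ recorded       # run the detector backwards, erasing the record
    print(p_system_0(erased))      # 1.0 -- interference is restored

Of course the real experiment needs the whole observer, memory and all, to stay coherent through both steps, which is the hard part.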
> As for the likelihood of this test ever occurring, the odds seem extremely
> remote to me that we'll have reversible AI by 2040. Certainly we'll have
> some kind of AI in the next 50 years, but I strongly doubt that it'll be
> thermodynamically reversible.
Actually, if we get nanotech, it is likely that there will be work on
making it thermodynamically reversible. It is very striking to read in
Nanosystems how thermodynamic issues of physical entropy can become the
dominant factor in heat dissipation in certain mechanisms. Whether it
can be reversible enough to maintain quantum coherence is another issue,
though. Maybe it could work at ultra-cold temperatures.
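For a rough sense of scale, the relevant floor is the Landauer limit of kT ln 2 of heat per erased bit; the erasure rate below is an assumed number purely for illustration, not a figure from Nanosystems:

    import math

    k_B = 1.380649e-23                 # Boltzmann's constant, J/K
    T = 300.0                          # room temperature, K
    e_bit = k_B * T * math.log(2)      # Landauer limit: minimum heat per erased bit
    print(e_bit)                       # about 2.9e-21 J

    erasures_per_sec = 1e21            # assumed rate, just for illustration
    print(e_bit * erasures_per_sec)    # about 2.9 W of unavoidable heat if irreversible

Reversible logic avoids that floor by not erasing bits in the first place, which is why it matters for dense nanomechanical computers.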
Also, I think that quantum computers, if they work, would be inherently
reversible, due to the fundamental properties of QM. So if they get
quantum computers big enough to run an AI, this would probably be enough
to run Deutsch's experiment (big IF there).
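The reversibility claim follows from unitarity: any closed quantum evolution is some unitary U, and applying U-dagger runs it backwards exactly. A quick numerical sketch (again just illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    U, _ = np.linalg.qr(M)             # QR gives a random 8x8 unitary (3 qubits)

    psi = np.zeros(8, dtype=complex)
    psi[0] = 1.0                       # start in |000>
    psi_out = U @ psi                  # run the "computation" forward
    psi_back = U.conj().T @ psi_out    # apply U-dagger: run it in reverse
    print(np.allclose(psi_back, psi))  # True -- nothing was lost along the way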
> Unfortunately, without reversible AI, the proponents of the Copenhagen
> interpretation could plausibly argue that since there was no "real
> observer," no real observation took place. Such a proponent would be hard
> pressed to define what a "real observer" is, but no more so than today.
Worse, I suspect that even if this experiment were run, there might be
those who would argue that AIs aren't really conscious, so it doesn't
count. And further, they might actually use the result of this experiment
to say that the nature of AI consciousness is fundamentally different
from that of biological mechanisms. AIs can get into these funny kinds
of Schrödinger's Cat superpositions where they believe two contradictory
things at once, but people can't, and this could be used to claim that
consciousness means something different for AIs.
Hal