Eliezer writes:
> The first law of thermodynamics states that "You can't win"; you cannot
> decrease the amount of entropy in the Universe.
Actually, I think this is the second law; the first law is conservation
of energy. However, in some of your perpetual motion machines you seem
to be violating the first law more than the second.
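To be concrete (the standard textbook statements, not anything from
your post):

    dU = \delta Q - \delta W       (first law: energy is conserved)
    \Delta S_{universe} \ge 0      (second law: total entropy never decreases)

A machine that makes net work from nothing violates the first law; one
that makes net work from a single heat bath violates the second.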
> Hence, the first law of thermodynamics. A high-entropy system occupies a
> large volume of phase space. A hot piece of metal occupies a larger
> volume of phase space than a cold piece of metal, since the range of
> possible velocities for a particle is larger. A physical process which
> turned a hot piece of metal into a cold piece of metal plus electricity -
> leaving the rest of the Universe constant - would be a process that turned
> a large area of phase space into a small area, violating the theorem.
Assuming, as is reasonable, that the electricity occupies only a small
volume of phase space.
> The first type of perpetual motion machine, the negative-energy method,
> says that you can manufacture X amount of negative matter, X amount of
> positive matter, pour all your waste heat into the newly created matter,
> and then annihilate it, getting rid of the waste heat. Each time a
> negative particle and a positive particle come into existence, it changes
> the total volume of the Hamiltonian phase space - by adding entire
> dimensions, in fact.
Of course, in classical thermodynamics there is no such thing as negative
matter, and if there were, I'm not sure it would have negative entropy.
However, it would arguably have negative energy, so if you were worried
about satisfying the first (not second) law of thermodynamics, this part
could in principle work if negative matter were real.
However, I'm not sure it makes sense to explain the scheme as working
by increasing the volume of phase space through adding particles.
If all it took to violate the laws of thermodynamics were a change in
the number of particles, it would happen all the time. Chemical
reactions change the number of molecules, which is often what we count
in thermodynamics, and nuclear reactions can change the number of
subatomic particles.
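A concrete case from ordinary chemistry: burning hydrogen,

    2 H_2 + O_2 -> 2 H_2 O,

turns three molecules into two, shrinking the particle count (and with
it the dimensionality of the phase space), yet the second law survives:
the heat released raises the entropy of the surroundings by more than
the system's entropy drops.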
> When you annihilate the particles, the total volume
> shrinks again. In effect, the Type One perpetual motion machine increases
> the total volume of the Universe's Hamiltonian, which means that you can
> choose to wind up in a smaller area (relatively speaking) of that larger
> Hamiltonian. Then you shrink the Hamiltonian back down again by
> annihilating the matter, but you do so in a way which means that you end
> up in a smaller area of your own Hamiltonian. In other words, when you
> increase the size of the Hamiltonian, you perform a one-to-one
> transformation of your phase space into that Hamiltonian; then, when the
> Hamiltonian shrinks, you perform a many-to-one transformation; taken as a
> complete operation, this shrinks the size of your phase space.
I don't think this part will work. Keep in mind that systems don't
actually occupy volumes of phase space; they occupy points. Think of
a family of systems, each at a different point within the volume of
phase space. To perform a transformation which shrinks the phase
space means that some initially distinct members of that family have
now merged and become identical. That requires irreversible
transformations, and I don't see anything here which is irreversible.
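A minimal sketch of the distinction in C, on a toy eight-state "phase
space" of my own invention (nothing from the post):

#include <stdio.h>

/* A reversible map: a permutation of the eight states. Distinct
   states stay distinct, so no phase-space volume is lost. */
static int reversible(int s) { return (s + 3) % 8; }

/* A many-to-one map: distinct states merge. Any genuinely
   entropy-shrinking step would have to look like this. */
static int many_to_one(int s) { return s / 2; }

int main(void)
{
    int s;
    for (s = 0; s < 8; s++)
        printf("%d -> %d (reversible)   %d -> %d (many-to-one)\n",
               s, reversible(s), s, many_to_one(s));
    return 0;
}

The first map can be undone; the second cannot, because states 0 and 1
both land on 0. Creating and then annihilating particles, by itself,
only ever gives you maps of the first kind.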
> The second type of perpetual motion machine, the quantum, notes that
> state-vector reduction again reduces the volume of phase space. A large
> volume of phase space, describing the probability amplitude that an
> electron is present at all points of space, collapses into a single point
> which describes the electron as being present at a single point in space.
> State-vector reduction takes zillions of possible superposed Universes and
> annihilates all but one of them. Thus, it may be possible to build a
> quantum perpetual motion machine in which the amplitudes of "cold states"
> tend to add up while the amplitudes of "hot states" cancel out;
> effectively, this dumps waste heat into a superposed state that gets
> blipped out of existence when the quantum collapse occurs.
If you think of a many-worlds model, any such attempt would not actually
blip other universes out of existence; it would simply let you learn
where you are in the multiverse. Learning where you are can't change
the overall statistics of what happens. So I think this blipping-out
is a bad model, and misleading about what could happen.
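In symbols (my own gloss, not Eliezer's): if branch i carries Born
weight p_i and its waste heat has entropy S_i, the average
\sum_i p_i S_i is already fixed by the dynamics. Observing which branch
you are in selects an i but changes no p_i, so on average you gain
nothing.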
> The third type of perpetual motion machine, the temporal, says that you
> can violate the first law using a time machine. If you take a heat bath
> and watch it evolving through states for a sufficiently long time, you can
> decide to stop watching at a point where all the atoms on one side happen
> to be moving in the same direction. The volume of phase space has been
> preserved at all points within the temporal loop, but from the perspective
> of the rest of the Universe, the heat bath rejoins our world only when it
> occupies a particular volume of phase space. Pieces of a sufficiently
> large physical system will show temporary decreases in entropy due to the
> operation of normal, random mechanisms; a time machine lets you take all
> the temporary decreases, unsynchronize them temporally, and resynchronize
> them so that they all add together. In other words, the phase space
> remains constant if you evolve *all* of it for N minutes, but if you
> evolve some of it for N minutes and some of it for M minutes, the new
> volumes may overlap; the total volume may not be constant.
Maxwell's Demon can accomplish the same task, and he doesn't even have
a time machine; he just opens the door at the right time. It's the
measurement process that defeats him, and I suspect the same thing will
happen to you. Knowing when to stop time requires measuring the atoms to
know when they are all moving together. That is costly. (Technically, it
is in the erasure of the measurement records that we account for the cost.)
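To put a number on it, using Landauer's bound (a standard result, not
something from the post): erasing one bit of record dissipates at least
kT \ln 2 of heat. A demon, or a time traveler, who must record roughly
one bit per atom to recognize the right moment to stop pays at least
N kT \ln 2 in erasure for N atoms, enough to cancel the entropy decrease
he hoped to pocket.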
> In saying that a dropped glass will fall downwards, we are making a
> statement that one volume of phase space - the phase space of dropped
> glasses - will evolve into another phase space; the phase space of glasses
> lying on the floor. Both volumes of phase space are extremely large, but
> they are relatively compact, and a point in the core volume of the first
> phase space ends up in the core volume of the second phase space, *most of
> the time* - they are fuzzy, but not very fuzzy. The phase spaces are
> compact and their evolution is compact, but will never be perfectly
> compact in an imperfect Universe. Watch long enough, and you'll see
> glasses falling upwards.
It's not that the phase spaces of falling-glass and dropped-glass are
fuzzy; they may be perfectly well defined. Rather, not all of the first
volume ends up within the second. A little bit is outside, representing
dropped glasses which don't break.
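In slightly more careful language (my notation): let T be the
time-evolution map, A the dropped-glass volume, and B the
glass-on-the-floor volume. Liouville's theorem makes T volume-preserving,
so \mu(T(A)) = \mu(A); it does not make T(A) a subset of B. The sliver
T(A) \setminus B is tiny but nonzero, and that sliver is where the
upward-falling glasses live.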
> This phenomenon is what people refer to when they say that perfectly
> rational thought is impossible.
Where'd this come from? I've never heard anyone say this, at least
not in those words. A Google search for "perfectly rational thought"
finds no such claim. (Of course, most people would agree that any form
of perfection is unattainable.)
> Turing's diagonalization theorem can be expressed as follows: A mind
> cannot perfectly describe itself because there is always, inescapably,
> some part of the mind which at that moment is observing and is not itself
> being observed. The observer is always smaller than the observed, and
> thus cannot perfectly describe it.
I'm not convinced by this argument. It sounds like it could equally well
"prove" that self-reproducing automata are impossible, because they have
to contain a model of themselves, and the model is always smaller than
the total automaton, hence there must be a part that can't be modelled.
When people first try to write a self-reproducing program they run into
exactly this, and some may conclude that it is impossible if they don't
stumble onto the trick. Yet von Neumann showed, and biology confirms,
that self-reproduction is entirely possible.
I'd say the reason is that the model is isomorphic (via some mapping)
to the system minus the model, so adding the model does not increase
the information in the system, as the quine sketch further down
illustrates.
Couldn't a mind's mental model share the same property?
> Suppose
> that your version of the diagonalization argument is to ask the AI: "Will
> the AI you're simulating answer 'No' to this question?" Ve knows that, as
> soon as ve decides to answer "Yes, it will say No" or "No, it will say
> Yes", through whatever output mechanism has been provided, the AI ve's
> simulating will make the exact same answer, thus invalidating the
> response.
"AI, will you answer 'No' to this question?"
"Of course not."
I don't think this kind of trick question sheds much light; it's just
playing with words. An AI with an incomplete model is no more able to
answer the question than one with a complete model. An AI with a fully
complete model of itself, one which understands itself perfectly, can't
give a consistent or correct answer, but that doesn't indicate a lack
of understanding; the question is simply a trick. In effect, it is
being asked two questions at once: what is the next word you will say,
and will you say "No"? There is no way to answer both at once if we
stick to yes-or-no answers! The existence of unanswerable questions
fails to prove that beings can't fully understand themselves.
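To spell out the trap (my notation): the simulated AI gives the same
answer a as the simulating AI, so a truthful answer would have to
satisfy

    a = Yes  <=>  a = No,

which no value of a can do. The contradiction lives in the question
itself, not in any incompleteness of the answerer's self-model.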
The problem with the analysis is that it assumes the AI understands
itself in a very simplistic and mechanical way: on that assumption, the
only way it can answer the question is by running its own program by
brute force.
It's like the person who wants to write a self-reproducing program, who
starts out with:
main(){
putchar ('m');
putchar ('a');
putchar ('i');
putchar ('n');
putchar ('(');
putchar (')');
putchar ('{');
putchar ('\n');
putchar ('p');
putchar ('u');
putchar ('t');
...
He soon realizes that this isn't going to work. He's losing ground with
every statement he writes. The real solution takes an entirely different
approach.
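The trick, for anyone who hasn't seen it, is von Neumann's: keep one
copy of the description as data and use it twice, once as code and once
as data. A minimal C sketch along traditional lines (my own, and
comment-free only because the program must reproduce itself exactly):

#include <stdio.h>
int main(){char*s="#include <stdio.h>%cint main(){char*s=%c%s%c;printf(s,10,34,s,34,10);return 0;}%c";printf(s,10,34,s,34,10);return 0;}

The string s is the program's model of itself; it is isomorphic to the
rest of the program, so the whole thing describes itself with no
infinite regress of models-within-models.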
In the same way, a perfectly self-understanding AI could have a more subtle
model of its own workings.
This is not necessarily a violation of the halting theorem. The AI does
not claim to be capable of telling whether arbitrary programs will halt.
It is probably not even Turing complete, in the sense that it can't
necessarily run any arbitrary program "in its head" (people certainly
can't).
In truth, the notion of self-understanding is not very well defined.
To continue this further it will be necessary to have a more rigorous
definition.
> Now, someone could still argue that even "effective perfection" is
> unattainable - that a transhuman will, of necessity, make factual errors
> or insane-class mistakes at least once a month - but if so, it will have
> to be an argument on grounds of cognitive science, rather than mathematics
> or computer science.
I doubt that anyone sensible would say such a thing about transhumans
in general. Estimating the mean time between errors for any complex
system requires a detailed specification. Once that is in hand, I would
imagine the task would more likely be a matter of materials science
than of cognitive science.
Hal