Re: Computability of consciousness

David Blenkinsop (blenl@sk.sympatico.ca)
Fri, 26 Mar 1999 22:21:55 -0600

Hal <hal@rain.org> wrote:

> We have three seemingly true but incompatible statements:
>
> A. It is a matter of interpretation whether a given system can be
> described as a computer running a specified program P
>
> B. Any conscious entity can be created ("instantiated") by running the
> proper program
>
> C. It is an objective matter whether a given conscious entity exists
>
> . . .
>
>In my opinion this is the best way out of the dilemma. We must reject
>the notion that all possible computational descriptions of a system are
>equally valid. Kolmogorov complexity provides a way out.
>
>Hal

Hmmm, having never read up on the Kolmogorov complexity measure, I wonder: is it roughly comparable to *entropy*, as in maybe complexity is a more interestingly structured kind of "messiness", or something like that? That would make a certain amount of sense to me, if consciousness gets an objective definition at least in part from fulfilling a mathematical "touchstone" of minimal complexity, with the Kolmogorov measure being comparable to entropy as a measure of randomness.
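As a rough illustration of the kind of measure I'm imagining, here's a little sketch (in Python, using the standard zlib compressor; the true Kolmogorov measure is uncomputable, so a general-purpose compressor only gives an upper bound on it, and the particular strings and sizes here are just made up for illustration):

    import os
    import zlib

    def description_length(data):
        """A crude upper bound on Kolmogorov complexity: the length
        of the shortest description we can actually produce, here
        via zlib at maximum compression."""
        return len(zlib.compress(data, 9))

    # A "crystal": the same unit cell repeated over and over.
    crystal = b"NaCl" * 2500       # 10,000 bytes of perfect order

    # A "gas": bytes with no pattern the compressor can exploit.
    gas = os.urandom(10000)        # 10,000 bytes of randomness

    print("crystal:", description_length(crystal), "bytes to describe")
    print("gas:    ", description_length(gas), "bytes to describe")

The orderly string compresses down to a few dozen bytes while the random one hardly compresses at all, which is the sense in which Kolmogorov complexity tracks entropy as a measure of randomness.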

The problem with this is that if you look at entropy as such, its true definition is apparently quite dependent on the knowledge of any given observer (notwithstanding that thermodynamics texts often depict entropy as a strictly objective measure of some quantity "contained" in a system). In contrast to this common, objective idea of entropy, Chapter 4 of Drexler's _Nanosystems_ text talks about a system's entropy as depending on the knowledge that an outside observer happens to have. Basically, highly "orderly" systems, like crystals, are "orderly" just because it's so much easier for a scientist to measure and write down an accurate expression of where all the crystal's atoms are supposed to be; this is in contrast to a high-entropy system like a gas, where measurements leave a lot of uncertainty about exactly where the atoms are. In other words, entropy is not really a local, objective property at all; instead it depends on what the observer knows or is able to find out! As a fairly strong impression, doesn't it seem that Kolmogorov complexity is going to be a lot like this, and that we're not going to get anything objective out of it for defining a state of objective material consciousness?
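To make that observer-dependence concrete: a compressor that has been handed some side knowledge about a system can describe it far more briefly than one that hasn't. Here's another small sketch, again with zlib standing in for the ideal describer; the preset "dictionary" plays the role of what the observer already knows, and all the particulars are invented for the example:

    import zlib

    # One measured snapshot of the "system": a crystal's repeating
    # unit cells, written out as atom positions.
    snapshot = b"Na@(0,0,0) Cl@(0,0,1) Na@(0,1,0) Cl@(0,1,1) " * 10

    # What a knowledgeable observer already knows: the unit cell.
    prior_knowledge = b"Na@(0,0,0) Cl@(0,0,1) Na@(0,1,0) Cl@(0,1,1) "

    # Description length for an ignorant observer.
    ignorant = len(zlib.compress(snapshot, 9))

    # Description length for an informed observer, whose knowledge
    # is supplied to zlib as a preset dictionary.
    informed_compressor = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                                           zdict=prior_knowledge)
    informed = len(informed_compressor.compress(snapshot) +
                   informed_compressor.flush())

    print("ignorant observer needs:", ignorant, "bytes")
    print("informed observer needs:", informed, "bytes")

Same physical system, two different description lengths; the measure depends on what the observer brings to it, just as with Drexler's crystals and gases.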

Maybe the best approach here is just to admit that there isn't any truly objective separation between conscious matter and unconscious matter, at least nothing of an objectively *scientific* nature. Maybe it would be fair enough for each of us to have our own mystical ideas of why consciousness seems so absolutely special, the soul riding the evolving wavefront of the mind, if you like. In the meantime, there's no reason why we humans can't have our particular kinds of smart or insightful behaviors, no reason why animals can't be smart in *their* way, and no reason why artificial machines can't have some special kinds of smarts as well. In particular, if we are asking whether a computer simulation of a human is conscious, then for observers outside the computer the answer is just going to depend on whether they can interact with the simulation and judge, very sensitively, whether its output reveals quite humanlike capabilities. If the simulation is totally inaccessible for some reason, then questions about its state of consciousness are scientifically meaningless.

All in all, what I'm really saying is that Hal's "C" assumption, the one about the scientific objectivity or scientific reality of consciousness, is wrong, at least if this is taken to imply an absolute, scientifically verifiable distinction between conscious matter and any other bit of material.

David Blenkinsop <blenl@sk.sympatico.ca>