Re: TECH: Fractal Tardis Brains

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 12 1999 - 20:50:38 MDT


hal@rain.org wrote:
>
> Eliezer S. Yudkowsky, <sentience@pobox.com>, writes:

> > I entirely disagree. These things aren't limits. Laws, maybe, but not
> > limits. This century's history has been the history of people realizing
> > that so-called "limits" had been obviously bankrupt from the beginning.
> > 100 years CRNS (current-rate no-Singularity) from now, everyone will be
> > laughing at us for believing in the lightspeed limit when there was
> > General Relativity, wormholes, Warp-Tardis Drive...
>
> I am confused here whether you consider lightspeed to be a limit or a law.
> Is it something that we will transcend, or a law which we will use as
> a tool?

I consider it to be part of the structure of reality, but not
necessarily a limit. Will anyone ever step on the gas and go faster
than c? Not without magic. But that doesn't mean you can't get to
Alpha Centauri in less than four years; you can compress space, warp
space, go to your destination in 4.3 years and then loop around a Tipler
cylinder, etc., etc. Special Relativity imposes an absolute lightspeed
limit on instantaneous velocity relative to the immediately surrounding
space, but actual travel, considered on a global scale, is the domain of
General Relativity - and there the lightspeed limit holds only locally.
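As a side note, even Special Relativity alone permits a sub-four-year trip by the traveler's own clock (the Earth frame still measures more than 4.3 years, so no limit is violated). A quick sketch with the standard time-dilation formula, using an illustrative cruise speed of 0.9c:

```latex
% Ship-frame (proper) time for a trip of distance d at constant speed v:
\tau = \frac{d}{v}\sqrt{1 - \frac{v^2}{c^2}}
% Alpha Centauri: d \approx 4.37 light-years, at v = 0.9c:
\tau \approx \frac{4.37\,\text{ly}}{0.9c}\sqrt{1 - 0.81}
     \approx 4.86 \times 0.436 \approx 2.1\ \text{years}
```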

> > As for the uncertainty principle, despite the name, it isn't a limit on
> > knowledge. Not at all. It describes a very specific process known as
> > state-vector reduction which randomizes certain quantities at a certain
> > point in time. This process, in turn, has all kinds of interesting
> > potential - including an apparent FTL propagation, come to think of it.
> > Calling it a "limit" is abusing the term, if you ask me. I say it's a tool.
>
> I can see that such things as quantum uncertainty, or conservation
> of energy, or the law of gravity can be seen either as limits that
> constrain what we do, or as tools that we can exploit to accomplish
> our aims. It is a bit harder to see how FTL limitations can be used
> as a tool, but perhaps once we are able to bump up against them things
> will look different. Black hole engineering, for example, is intimately
> intertwined with the FTL limits and may be an important tool someday.

Precisely. I don't think the term "physical limit" is appropriate for
anything on which a paper has been published showing a loophole. Then
it's a "practical limit", and we know what happens to those...

> > Actually, my position that all laws are malleable isn't based on an
> > Extropian morality; it's based on my ontological belief that "laws" are
> > actually "stuff". Anything real can be modified; if laws are real, they
> > can be modified. A rigid, Turing-like distinction between "program" and
> > "content", or "rules" and "cells", starts getting you into the same
> > paradoxes that made me a noncomputationalist in the first place.
>
> This is an interesting speculation, but it is not really a philosophically
> defensible derivation IMO. You need to analyze the meaning of "real"
> very carefully. I suspect that when you do, you find that when you say
> "anything real can be modified" you mean one thing, and when you say that
> "laws are real" you mean something else by "real".

On the contrary: I mean *exactly* the same thing by "real" in both
cases. I don't believe in Platonic laws. All laws are stuff.

> When you say that you are a noncomputationalist, do you mean that it will
> be impossible to construct a working AI which is conscious?

I mean it'll take special hardware for old-fashioned qualia - probably
not very difficult hardware, either, since an ordinary brain can hack
it. Almost certainly, human-equivalent intelligence won't take any
special hardware.

> Or do you
> mean that computational worlds holding intelligent entities may exist,
> but that our own particular world is not computational, because of certain
> specific characteristics that may not be shared with other worlds?

Yes. Turing machines can be intelligent, but "intelligence" is an
observer-relative property; there's no absolute property test for
"intelligence". I think that having any sort of absolute property test
obviously requires an absolute test for "instantiation", a concept which
has no mathematical definition. In fact, my attempts to construct a
definition led me to think that instantiation is fundamentally
observer-relative. And since I do believe in an absolute test for
"reality" and "consciousness", it follows that reality and consciousness
are noncomputable.
But I don't believe in an absolute test for "intelligence", and so I see
no reason why I can't construct a transhuman AI.

I don't believe that "2 + 2 = 4" is a fundamental law, and I don't
believe that "2 + 2 = 4" is real. I can't define "instantiation" and I
don't believe it can be defined; either everything Turing-computable is
real, or nothing Turing-computable is real, and I believe "nothing". "2
+ 2 = 4" is Turing computable, so it's not real. Arithmetic is an
abstracted property, a derivative of underlying reality, that is
"proved" by induction and that I use because it's convenient. Maybe the
Turing laws and mathematics can be derived from the underlying laws in
some way; even so, it's still just an abstraction - and I don't believe
that abstractions are objectively real.
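To make "Turing computable" concrete here: a sketch of a minimal Turing machine that performs unary addition, so that "2 + 2 = 4" becomes the tape rewrite "11+11" -> "1111". The state names and rule table are my own illustrative choices, not anything from the original post.

```python
BLANK = " "

# (state, symbol) -> (write, move, next_state); move is +1 (right) or -1 (left)
RULES = {
    ("scan", "1"): ("1", +1, "scan"),      # walk over the first addend
    ("scan", "+"): ("1", +1, "seek_end"),  # turn the '+' into a '1'
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", BLANK): (BLANK, -1, "erase"),
    ("erase", "1"): (BLANK, 0, "halt"),    # erase the one surplus '1'
}

def run(tape_str):
    """Run the machine on a tape like '11+11' and return the final tape."""
    tape = dict(enumerate(tape_str))
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape.get(head, BLANK)
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape.get(i, BLANK) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip()

print(run("11+11"))  # -> "1111", i.e. 2 + 2 = 4 in unary
```

The point being illustrated is only that arithmetic facts of this kind are mechanically derivable; whether such derivable abstractions are "real" is exactly what the paragraph above denies.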

I'm extremely conservative when it comes to reality. I'm willing to
believe that quarks are objectively real. I'm willing to believe that
qualia are objectively real. I don't believe in the laws of physics,
mathematical theorems, apples, or any other abstracted properties.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way


This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:04:09 MST