From: Dan Clemmensen (dgc@shirenet.com)
Date: Sun Sep 22 1996 - 20:33:45 MDT
Robin Hanson wrote:
>
> Dan Clemmensen writes:
> > >> ... to make your scenario plausible, you need a plausible process which
> > >> creates this massive convergence to a preference with almost no weight
> > >> on long-time-scale returns.
> >
> >The SI can think so fast that on its time-scale any possible
> >extra-system return is too far into the future to be useful in
> >comparison to the forgone computational capability represented by the
> >extra-system probe's mass. I proposed that the SI would increase its
> >speed by several orders of magnitude by converting its mass into a
> >neutron star.
>
> You seem to think that there is some natural discount rate, determined
> by the computer hardware. I don't see why.
>
I'm not an economist, so I don't immediately convert the problem to
one of discount rate, and I'll try not to mangle the concept.
Basically, I'm arguing that the discount rate is very high because the
SI's ability to employ the mass of the probe for computation is so
large. The only things an extra-system probe can eventually contribute
are 1) a return of extra-system mass, or 2) a return of extra-system
information. I'm assuming that the SI will decide that it can use the
computational power represented by the probe's mass to produce the
information locally long before the information can be returned from
an extra-system source. That is, I see that one side of the discount
ratio is very large, and I don't see any equivalently large value on
the other side of the ratio.
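To make that concrete, here's a toy calculation (Python, with every
number invented purely for illustration): if the SI runs some large
factor faster than human-scale thought, even a short objective round
trip amounts to an enormous number of subjective years, and any
plausible per-subjective-year discount drives the present value of the
probe's return to essentially zero.

    import math

    # Toy comparison: launch a probe, or keep its mass as local
    # computing substrate? Every number below is invented.
    speedup  = 1e6    # assumed subjective years per objective year
    trip     = 10.0   # objective years before the probe reports back
    discount = 0.05   # assumed discount rate per *subjective* year
    payoff   = 1e12   # assumed value of the returned information

    subjective_delay = trip * speedup   # 1e7 subjective years of waiting
    # Work in log space, since (1.05)**1e7 overflows ordinary floats.
    log_pv = math.log(payoff) - subjective_delay * math.log1p(discount)
    print(log_pv)   # roughly -488,000: present value is effectively nil

The particular numbers don't matter; the asymmetry does. The
subjective delay scales with the speed-up, so the faster the SI
thinks, the more completely any immediate local use of the probe's
mass dominates its eventual return.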
> >Unfortunately, as you say, there seems to be little that a human
> >or corporation can do in the way of useful self-augmentation. I
> >contend that an SI that includes a substantial computer component is
> >very amenable to useful self-augmentation, while people and
> >organizations are not. The reason: the SI can understand itself and it
> >can reprogram itself. I contend that this is fundamentally different
> >from the process used by a human or a corporation attempting
> >self-augmentation.
>
> Why do you think an SI will understand itself any more than we
> understand ourselves? And even if it could, that doesn't mean such
> understanding will lead to much improvement.
>
Basically, I don't believe that we understand the basics of human
cognition; therefore our attempts at self-augmentation have no firm
basis. We do, however, understand the basics of machine computation:
we can design and build more powerful computer hardware and software.
Since we understand this basis already, I believe that an SI can also
understand it. I believe that an SI with a computer component will be
able to design and build ever more powerful hardware and software,
thus increasing its own capabilities. I think that this is likely to
lead not just to an improvement, but to a rapid feedback process, as
the toy model below illustrates.
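A minimal sketch of that feedback (again, every number is invented for
illustration): if each redesign multiplies capability by a factor that
itself grows with current capability, growth is faster than
exponential.

    # Toy model of the design-and-build feedback loop.
    # The parameter values are invented for illustration only.
    capability = 1.0   # combined hardware+software capability, arbitrary units
    gain = 0.1         # assumed fraction of capability turned into improvement

    for generation in range(15):
        # Each generation designs its successor; a better designer makes
        # a bigger improvement, so the multiplier grows with capability.
        capability *= 1.0 + gain * capability
        print(generation, round(capability, 2))

The first few generations look like slow compound interest, but the
multiplier keeps growing, so the sequence diverges after a handful
more steps. That runaway is what I mean by a rapid feedback process.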