Re: Darwinian Extropy

From: Dan Clemmensen (dgc@shirenet.com)
Date: Sun Sep 22 1996 - 12:09:13 MDT


Robin Hanson wrote:
>
> Dan Clemmensen writes:
> >> >... There may not be a lot of diverse SIs
> >> >in the universe. There may be only one per system, and they may all
> >> >have reached the same super-logical conclusion that star travel is
> >> >uneconomical in terms of the resources that SIs use.
> >>
> >> ... to make your scenario plausible, you need a plausible process which
> >> creates this massive convergence to a preference with almost no weight
> >> on long-time-scale returns.
> >
> >... An SI is likely to have conscious control of its internal
> >architecture, so the postulated subconscious human group-think may not
> >be relevant.
> >Please note: I'm still not arguing that my model of an SI is the
> >correct one, only that it's plausible.
>
> It seems to me that in the absence of a process pushing conformity,
> one should expect diversity, at least when we're talking about
> motivations across the entire visible universe. Yes, it's possible
> there is such a process we don't know anything about, but this simple
> statement does not make the conclusion "plausible", only "possible".
> Otherwise any not-logically-impossible conclusion would be
> "plausible".

I thought that I had presented the process in another portion of my
post: the SI can think so fast that, on its time-scale, any possible
extra-system return lies too far in the future to be worth the forgone
computational capability represented by the extra-system probe's mass.
I proposed that the SI would increase its speed by several orders of
magnitude by converting its mass into a neutron star.
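
To make the time-scale argument concrete, here is a toy present-value
sketch (purely illustrative; the speedup factor, discount rate, and
trip time are my assumptions, not figures from the argument itself):

    # Toy model: present value of an extra-system payoff to a sped-up SI.
    # All parameters are illustrative assumptions.
    speedup = 1e6          # subjective years per objective year
    trip_years = 10.0      # objective years before the probe pays off
    discount = 0.01        # SI's discount rate per subjective year

    subjective_years = trip_years * speedup
    # Value today of one unit of payoff delivered after the wait,
    # relative to one unit of computation available right now:
    present_value = (1.0 + discount) ** (-subjective_years)
    print(present_value)   # underflows to 0.0 for any plausible inputs

With any nonzero discount rate per subjective year, a payoff millions
of subjective years away is worth essentially nothing next to the
computation the probe's mass could supply immediately.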

>
> >Your [Anders'] scenario may be plausible, but I feel that my scenario
> >is more likely: the Initial SI (for example an experimenter together
> >with a workstation and a bunch of software) is capable of rapid
> >self-augmentation. Since the experimenter and the experiment are
> >likely to be oriented toward developing an SI, the self-augmentation
> >is likely to result in rapid intelligence gain.
>
> Most complex systems we know of are capable of rapid
> self-augmentation. People can change, companies can change, and
> nations can change. *Useful* rapid change is a lot harder, however,
> and you have offered no plausible argument why such useful rapid
> change is any more likely here than for other complex systems. Again,
> yes, it is logically possible. But that is hardly a plausibility
> argument.
>

Unfortunately, as you say, there seems to be little that a human or
corporation can do in the way of useful self-augmentation. I contend
that an SI that includes a substantial computer component is very
amenable to useful self-augmentation, while people and organizations
are not. The reason: the SI can understand itself and it can reprogram
itself. I contend that this is fundamentally different from the process
used by a human or a corporation attempting self-augmentation.
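
As a loose illustration of that distinction (a sketch only, in no way a
model of an SI): a software system can measure its own behavior and
swap in an improved implementation of one of its own components at
runtime, which is just what a brain or a bureaucracy cannot do to
itself:

    # Sketch: a program that benchmarks one of its own components and
    # replaces it with a rewritten version while running.
    import time

    def slow_sum(n):                # the component to be improved
        total = 0
        for i in range(n):
            total += i
        return total

    def fast_sum(n):                # a rewritten replacement
        return n * (n - 1) // 2

    component = slow_sum
    start = time.perf_counter()
    component(10**6)
    if time.perf_counter() - start > 0.001:   # self-assessment
        component = fast_sum                  # self-modification
    print(component(10**6))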


