From: Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Date: Wed May 06 1998 - 18:11:21 MDT
Paul Hughes wrote:
>
> Dan Clemmensen wrote:
>
> > IMO an SI will very likely start as an augmented human,
> > a collaboration between one human and a computer. This SI can
> > rapidly augment the computer part of itself, so there is no
> > reason to assume that a second SI will come into existence prior
> > to the first SI taking over the world, or that a de novo AI will
> > ever come into existence. The SI may choose to decouple from its
> > human component, but such a decision, and indeed any decision made
> > by the SI, is beyond analysis by humans.
> > Please see: http://www.shirenet.com/~dgc/singularity/singularity.htm
>
> I've read over your essay a couple of times now. Compelling.
>
> Are you suggesting that the singularity will essentially consist of only
> one intelligence (i.e. one entity, one ego) with, if we are lucky, an
> arbitrary number of uploaded humans as cells in its brain? A borganism?
>
This is my opinion. However, I realize that it is only one of a wide range
of possibilities. There are a whole bunch of interesting things for us to
consider, and in retrospect I feel that despite my own admonishments to the
contrary I have focused on a particular point within the space of possible
singularity initiators: I've picked the "single human" spot on the
"initiator size" axis, and the "instantaneous" point on the "rate" axis.
I've also picked the "single entity" on the "post-human population size"
axis. All of these choices are in my little paper, but none of them are
sufficiently explicit and I spend little time on explaining the alternatives.
I rejected the "zero humans" point on the initiator axis because I think a pure
AI (i.e., no human component of the SI) is harder to achieve than a collaboration.
I rejected the larger numbers of humans because I guess I thought a
multi-human collaborative component was harder than a single-human component,
but I'd now guess that this is a weaker argument. Now, I would still assign
a near-zero probability to the pure AI and a large probability to an initiator
size of one human, but I think a small group (say a workgroup at a lab) is
possible, and a larger collaboration {say an internet mailing list :-) }
is also possible.
I picked the "instant" point on the rate axis for the reasons stated in the
paper. Of course, I started with Vernor Vinge's "singularity" as part
of the title of my paper, so I guess I'm biased. This choice is easy to
attack on all kinds of perfectly reasonable grounds, mostly because the
term "singularity" isn't the correct mathematical/physical analogy. It's
better to use a phase change as the analogy: the liquid-to-solid transition
of a supercooled liquid when a seed crystal is added. The analogy is, I think,
clear: the raw available computing power on the internet represents the
supercooled liquid and some as-yet-undeveloped software represents the
crystal.
It might be nice if some of the participants from the prior round of the
">Web" discussions would add any new thoughts they may have had on this matter.
As to uploads: the SI may or may not choose to permit or compel uploads, and
may or may not choose to permit the continued existence of humans in any form.
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:04 MST