Re: Transparency and IP

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Sep 14 2000 - 02:24:24 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > Eugene Leitl wrote:
> > >
> > > Abundance mode? For how long? There aren't that many atoms in the
> > > solar system...
> >
> > With nanotech on the
> > horizon we will most likely be in abundance mode on earth for most
> > physical goods within the next 4-5 decades...
>
> Actually, Eugene's suggestion on this score makes sense. After all, running
> at a millionfold or billionfold speedup, and with some humans still wanting to
> reproduce, it's easily conceivable that all the Solar System's resources will
> be sucked up within the first few hours.

It seems to me pretty silly to simply scale up humans as they are today
and give them godlike abilities, making super-duper apes instead of
merely slightly evolved ones. You won't sell that as much progress if
that is the way it goes. Fortunately, changing ourselves is also
massively enabled. Also, reproduction is much less turned to when there
are a variety of other interesting things to do, even among mere
mortals. Lastly, physical reproduction and all physical processes of
incorporating physical materials are governed by physical speed limits,
not by how fast the AI can run. Trillions of human beings simply cannot
be produced in "a few hours". Building the tools and transport and
disposal units and processing the entire solar system simply is not the
work of "a few hours", even if you could get all of the sentient beings
involved to agree, which is extremely doubtful.

>
> My suggestion to the Sysop would be to impose a minimum resource requirement
> before someone creates a child; the child has to have at least enough
> computational resources to run at a reasonable speed with a reasonably-sized
> mind until the Big Crunch. This way, even if people try to reproduce at the
> maximum possible speed using their share of the Solar System, standards of
> living will never drop unacceptably.
>
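Before I get into my objections, let me check my reading. As I
understand it, the rule amounts to an admission check something like
the sketch below (my own rough rendering in Python; the names and the
floor value are made up for illustration, not taken from your post):

    from dataclasses import dataclass

    # Sketch of the rule as I read it: a child may only be created if
    # it can be endowed, out of the parent's own share, with at least
    # enough resources to run a reasonably-sized mind at a reasonable
    # speed until the Big Crunch, and the parent must stay above that
    # floor as well. The floor value here is arbitrary.
    RESOURCE_FLOOR = 1.0  # minimum per-mind allocation, arbitrary units

    @dataclass
    class Mind:
        allocation: float  # resources reserved for this mind

    def create_child(parent: Mind) -> Mind:
        remaining = parent.allocation - RESOURCE_FLOOR
        if remaining < RESOURCE_FLOOR:
            # Refusal is what keeps standards of living from dropping
            # below the floor no matter how fast people reproduce.
            raise PermissionError("not enough resources to endow a child")
        parent.allocation = remaining
        return Mind(allocation=RESOURCE_FLOOR)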

Your Sysop has extremely serious problems in its design. It is expected
to know how to resolve the problems and issues of other sentient beings
(us) without having ever experienced what it is to be us. If it is
trained to model us well enough to understand us, and therefore to
wisely resolve our conflicts, then in the process it will potentially
become subject to some of the same troubling issues. There is also the
problem of what gives this super-duper AI its own basic goals and
desires. Supposedly the originals come from the humans who build/train
it. It then extrapolates super-fast off of that original matrix. Hmmm.
So how are we going to know, except too late, whether that set includes
things that are very dangerous in the AI? Or whether the set is
ultimately self-defeating? Personally I think such a creature would
likely be autistic, in that it would not be able to successfully
model/understand other sentient beings, and/or catatonic, because it
does not have enough of a core to self-generate the goals and desires
that would keep it going.

Your scenario assumes we will all want to go onto the chips, to live
inside of VR space. While that looks largely attractive in many ways, I
would also expect to have the option to form external, "real-world"
bodies and extensions to do "real-world" work and gain a different
level of experience. Some other beings will not wish ever to make a
home in VR land. If the AI respects us it will need to make some room
for our choices (within reason). I am not sure how it will establish
its Pax Sysop without doing violence to at least parts of humanity.

- samantha


