Re: Gattaca on TV this weekend

From: Eugen Leitl (eugen@leitl.org)
Date: Fri Jun 21 2002 - 03:26:32 MDT


On Thu, 20 Jun 2002, Brian Atkins wrote:

> This is not a real answer... you made a fairly strong assertion.

It doesn't require a Ph.D. to realize that rapid change is less than
beneficial for slow-adapting agents. It's rather well documented that
mass extinctions (the 6th, and counting) are causally linked to rapid
changes in the environment.

It seems the only validation acceptable to you is hindsight (let's
precipitate the Singularity, and see what happens). Given the risks,
that's simply unacceptable.
 
> Economics don't really come into this picture when we have full nanotech.

This is an assertion based on pure wishful thinking. Nanotechnology and
hypereconomy do not put an end to the value of all goods. Value is never
zero, even for goods that are cheap, and it can become arbitrarily high
(in terms of current value) for rare items.

> We can probably grow an uploading machine out of the local materials outside
> every town on Earth for free. Computronium to run them is essentially free

It's not actually free, because those resources (design, planning,
deployment, energy, space) are unavailable for other, competing projects.
The bottleneck would be the design of the uploading box. The S in SI
doesn't stand for Santa Claus: the fact that it can do something doesn't
mean it is motivated to do it.

> as well. Where exactly do you see economics coming into the picture when we
> are talking SIs and full nano? The amount of "money" required to upload

Economics never even leaves the picture. Competing SIs are engaged in
activities of their own. If you're not an SI, you're a negligible player
in such markets. At best you're a minor nuisance; at worst you died when
the SIs came to power.

> everyone is practically zero compared to the wealth available at the time.
>
> And anyone who uploads can be almost instantly brought "up to date" if
> they want. No problemo most likely.

We don't even know that we can do uploads yet. Simulating all of the
biology accurately is out of the question (an issue of both model accuracy
and resource requirements), and it is not obvious that we can make
self-bootstrapping abstracted models. We clearly can't make glib
statements about the properties of such uploads, or their role and value
in the new world order.
 
> Culture is the issue that will hold many people back from taking advantage
> of such a scenario, but there's not much technology can do about that other
> than attempts to persuade them that aren't deemed to be taking advantage of
> their low intelligence. Actually I can't say that for sure since there is
> always the chance the group of uploaders may decide that forcibly uploading
> everyone is preferable for some reason I can't envision right now. At any

I can think of a reason: preventing their impending extinction. Like
hauling away live frogs in buckets from a habitat about to be bulldozed.

> rate, if this does cause any lack of participation or anger on the part of
> people "left behind" they have no one to blame but themselves. I don't see
> this as an important reason to postpone a potential Singularity. If we had to
> wait until everyone was comfy with the idea millions of people will die in
> the meantime.

Which is why we need validation and deployment of life extension and
radical life extension technologies on a global scale.
 
> Well we already have nanotech guidelines from Foresight and AI guidelines
> from SIAI including ideas on how to carefully proceed in developing a

The SIAI "guidelines" and Foresight guidelines are simply not in the same
league. (I'm being polite here).

I'm not aware of a single concise list of don'ts in AI development that
is accepted or peer-reviewed. Right now, most serious AI researchers would
claim that such a list is premature, considering the state of the art.

> seed AI as well as ideas on how to test it before release. These ideas
> will be improved as time goes on (especially if more people give us really
> good criticism!). Isn't this good enough? What exactly do you need to see
> before you feel it would be safe to allow real AI development?

Human-competitive general AI is never safe, especially if it can profit
from positive feedback in self-enhancement. We humans cannot currently do
this, so we would be facing a new competitor with a key advantage.

It is crucial that we humans address those shortcomings as soon as
possible, or else our continued sustainable survival is at stake.


