RE: Singularity?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Sep 01 1999 - 23:39:04 MDT


Billy Brown writes:

> The determining factor on this issue is how hard AI and intelligence
> enhancement turn out to be. If intelligence is staggeringly complex, and

I think intelligence enhancement (i.e. beyond the trivial, as in
augmented reality/wearables) will be very, very hard. AI, provided the
hardware moves in the right direction (people have recently started
buying into memory-on-die/ultra-wide-buses/massively-parallel piles of
chips, and have started looking into molecular electronics), will be
comparatively easy, if you do it the ALife way. Contrary to what
Eliezer profusely professes, human-coded AI will never become
relevant.

Providing boundary conditions for emergence, yes. Coding it all by
hand, never.

> requires opaque data structures that are generally inscrutable to beings of
> human intelligence, we get one kind of future (nanotech and human

You don't need to understand the physics of your mental processes to
be intelligent. Evolution is not sentient, yet it apparently produces
sentience. Providing a good evolvable nucleus in a dramatically
sped-up coevolution sure sounds like a winner to me. Once you get this
started, you don't have to worry about optimizing the initial
conditions, because they don't matter for the final result.
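
To make that concrete, here is a toy mutate-and-select loop (only an
illustration; the bit-string genome and fitness function are made up,
not anyone's actual ALife design). The point is that the loop improves
candidates without any model of why they work:

    import random

    GENOME_LEN = 64
    POP_SIZE = 50

    def fitness(genome):
        # Stand-in objective: count of 1-bits. The loop never
        # "understands" this; it only sees scores.
        return sum(genome)

    def mutate(genome, rate=0.02):
        # Flip each bit independently with a small probability.
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(200):
        # Rank by fitness, keep the better half, refill with mutated
        # copies of random survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(POP_SIZE - len(survivors))]

    print("best fitness:", fitness(max(population, key=fitness)))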

> enhancement come online relatively slowly, AI creeps along at a modest rate,
> and enhanced humans have a good shot at staying in charge). If intelligence

Uh, I don't think so. The threshold for enhancing humans is terribly
high: essentially you'll need to be able to do uploads. Anything else
is comparatively insignificant. AI might be here much sooner than
uploads. Being nonhuman and subject to positive autofeedback makes it
a very dangerous thing to build indeed. Maybe we need another
Kaczynski...

> can be achieved using relatively comprehensible programming techniques, such
> that a sentient AI can understand its own operation, we get a very different
> kind of future (very fast AI progress leading to a rapid Singularity, with
> essentially no chance for humanity to keep up). Either way, the kind of
> future we end up in has absolutely nothing to do with the decisions we make.
>
> Personally, I feel that the first scenario is somewhat more likely than the
> second. However, I can't know for sure until we get a lot closer to
> actually having sentient AI. It therefore pays to take out some insurance
> by doing what I can to make sure that if we do end up in the second kind of
> future we won't screw things up. So far, the best way I can see to
> influence events is to try to end up being one of the people doing the work
> (although I'm working on being one of the funding sources, which could
> potentially be a better angle).

I think one of the best projects for funding is brain vitrification,
which does not require fractal cooling-channel plumbing in vivo.


