From: Joe E. Dees (jdees0@students.uwf.edu)
Date: Fri Sep 11 1998 - 20:15:38 MDT
Date sent: Fri, 11 Sep 1998 18:26:09 -0700
From: Hal Finney <hal@rain.org>
To: extropians@extropy.com
Subject: Re: Singularity: Human AI to superhuman
Send reply to: extropians@extropy.com
> One thing I'm wondering about is whether the goal of superhuman
> intelligence is well defined.
>
> Eliezer lays out a scenario in which a machine designs a better machine,
> and that one designs a still better one, and so on. But better at what?
> It has to be more than just designing the next machine, because otherwise
> the improvement is meaningless. There must be some grounding, some
> objective criterion that can be used to determine whether one design
> is better than another.
>
> In the initial stages, conventional IQ tests would probably be useful
> as a metric. Machines which could score ever higher on such tests would
> probably be able to better design new versions of themselves.
>
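
To make "better" concrete: here is a minimal sketch, in Python, of the
generational loop under discussion, assuming (purely hypothetically)
a fixed scoring function such as an IQ-style benchmark. All names are
invented for illustration, not real APIs.

    # Hypothetical sketch: generational improvement against a fixed
    # benchmark. 'benchmark' stands in for an IQ-style test;
    # 'design_successor' is the ability each machine applies to
    # produce the next design.
    def improve(initial_design, benchmark, generations=10):
        design = initial_design
        best_score = benchmark(design)
        for _ in range(generations):
            candidate = design.design_successor()
            score = benchmark(candidate)
            if score <= best_score:
                break  # no measurable progress; the grounding ran out
            design, best_score = candidate, score
        return design

The grounding problem is visible in the code: "better" is defined only
by whatever benchmark measures, and nothing inside the loop can tell
us whether that is the right thing to measure.
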
> But this might not be the fastest path towards optimization. Consider the
> "Browns" of Niven and Pournelle's Motie stories, idiot savant engineers,
> able to design and build things with lightning efficiency. They might
> not do well on IQ tests, but they would be ideal for designing new AIs
> with even more extreme talents. How do the AIs (and we ourselves, in
> the early stages) decide whether to take this route or to emphasize more
> general abilities? Which mental skills should they retain and enhance,
> assuming they have a range of designs available to them?
>
> Even if we want to stick with IQ tests, these run out of steam at some
> point as the AIs become too smart. Then the AIs have to start creating
> new IQ tests which will challenge the next generation. Can this be done
> in a reliable and meaningful way? How can I judge an intelligence which
> is greater than my own?
>
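
The bootstrap problem can be made explicit. If each generation must
also author the test for the next, the loop looks something like the
sketch below (all names again hypothetical). The circularity is right
there in the code: each new test is written by a mind that, by
construction, is weaker than the mind it must judge.

    # Hypothetical sketch: each generation writes its successor's
    # test. 'make_harder_test' is exactly the step in question: can a
    # mind construct a reliable, meaningful test for a greater mind?
    def bootstrap(design, test, generations=10):
        for _ in range(generations):
            successor = design.design_successor()
            if test(successor) <= test(design):
                break  # no progress on the current test
            test = design.make_harder_test()  # creator grades successor
            design = successor
        return design
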
> I suppose one possibility is to take problems that I can solve in a week
> and ask for an intelligence which can solve them in a day. But is greater
> speed all we seek in superhuman AI? It seems to me that more intelligent
> people are not only able to solve problems faster but also to exhibit
> a greater depth of understanding, an ability to deal intuitively with
> some problems that less intelligent people can't handle at all.
> How do you evaluate an intelligence which has abilities that you can't
> understand?
>
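
Speed, at least, is straightforward to measure. A minimal sketch of
the week-to-a-day criterion, with invented names ('solve' is assumed
to be any solver we can time on a shared problem set):

    import time

    # Hypothetical sketch: a pure-speed criterion. A week-to-a-day
    # improvement shows up as a ratio near 7.0; depth of
    # understanding, as noted above, does not show up at all.
    def speedup(old_solve, new_solve, problems):
        def total_time(solve):
            start = time.time()
            for p in problems:
                solve(p)
            return time.time() - start
        return total_time(old_solve) / total_time(new_solve)
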
> A couple of other minor points. First, these are not self-improving
> machines. Each generation designs a new machine which is different
> from itself. Chances are that its identity can't carry over to the new
> design, assuming there is substantial architectural change. So what
> we have is a series of generations of new machines and new identities.
> Eliezer's point about goal drift becomes more relevant when each new
> machine is a new individual, one only poorly understood by its creators.
> It would be a shame if some machine along the path developed a hobby
> and bent the skills of later machines into the superhuman equivalent of
> inventing anagrams.
>
> It may also be that the whole idea of ever-increasing superintelligence
> is incoherent. Intelligence may turn out to be just a matter of searching
> through a solution space. Brains are only moderately good at this,
> with more intelligent people having more efficient search capabilities.
> If so, then we may hit a ceiling in intelligence per unit of computer
> power. We could move beyond human genius levels but quickly reach
> limits beyond which search problems explode exponentially. The result
> would be that a tenfold increase in computer power brings only a
> modest improvement in problem-solving ability.
>
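
This ceiling argument can be made quantitative. If a solution space
branches by a factor b at each step, searching to depth d costs about
b**d evaluations, so the depth reachable with a compute budget C is
only log(C)/log(b). A small sketch of the arithmetic (the branching
factor of 10 is an arbitrary illustrative assumption):

    import math

    # Depth reachable by exhaustive search with a given node budget,
    # assuming cost b**d to search to depth d.
    def reachable_depth(budget, branching=10):
        return math.log(budget) / math.log(branching)

    for budget in (1e9, 1e10, 1e11):
        print("%.0e nodes -> depth %.1f"
              % (budget, reachable_depth(budget)))
    # 1e+09 nodes -> depth 9.0
    # 1e+10 nodes -> depth 10.0
    # 1e+11 nodes -> depth 11.0

Each tenfold increase in compute buys exactly one more level of search
depth: the "modest improvement" described above.
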
> Hal
What about the construction of ever more complete and more accurate
(i.e., more reliable) internal models of our surroundings
(cosmology), and of the most efficient ways those surroundings, and
the others who share them with us, can be manipulated or influenced
(physics, psychology)? This is, after all, what gives our brains and
senses their usefulness and survival value; ultimately, it is the
engine that has driven cognitive development and given us our
evolutionary edge. The only problem I can see with such a criterion
is our lack of a standard (a perfect model) against which to evaluate
the models our candidates produce.
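
One possible answer to the missing standard: we never need the
perfect model itself, only its observable consequences, so candidate
models can be ranked by predictive accuracy against shared data. A
minimal sketch, with all names invented:

    # Hypothetical sketch: rank candidate world-models by squared
    # prediction error on observed (input, outcome) pairs, rather
    # than by distance from an unavailable perfect model.
    def rank_models(models, observations):
        def error(model):
            return sum((model.predict(x) - y) ** 2
                       for x, y in observations)
        return sorted(models, key=error)  # lowest error first

This sidesteps the need for a perfect standard, though it inherits an
obvious limit: the candidates are tested only where we can observe.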