From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 27 2002 - 10:05:59 MST
Yes, I agree that my breakdown into possibilities A, B and C is a crude
categorization, and that there are many, many sub-possibilities with
significant qualitative differences between them.
A mathematical formalization of the different varieties of self-modification
would be possible, but I'm not going to undertake that project today. Maybe
next month!
There are, however, qualitative differences among (here I go again with
crude categorizations):
1) self-modification like that of the human brain, in which the low-level
learning algorithms and knowledge representation mechanisms are fixed, but
high-level learning algorithms and knowledge representation techniques are
learned and modified (see the toy sketch after this list)
2) self-modification like that of an uber-brain, in which new
neurotransmitters could be inserted, modifications to the conductive
properties of neurons could be made, and so forth.
3) self-modification like that of a brain that could rebuild itself at the
molecular level, replacing neurons, synapses and glia with entirely
different sorts of structures.
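To make type 1 concrete, here is a minimal sketch (mine, for illustration
only, not actual Novamente code; all names are hypothetical). The low-level
learner is fixed compiled code; the high-level rules it operates on are
data the system itself rewrites:

// Minimal sketch of type-1 self-modification (hypothetical names, not
// Novamente code): the learner below is fixed compiled code, but the
// high-level rules it learns live in data the system can rewrite.
#include <iostream>
#include <string>
#include <vector>

struct Rule {                // high-level knowledge, stored as data
    std::string pattern;     // condition the rule matches
    std::string action;      // behavior the rule produces
    double strength;         // learned utility estimate
};

// The fixed low-level learner: it can only reweight existing rules and
// spawn variants of strong ones. The C++ implementing it never changes.
void learn(std::vector<Rule>& rules, const std::string& seen, double reward) {
    for (Rule& r : rules)
        if (r.pattern == seen)
            r.strength += reward;  // fixed update rule
    // "Self-modification" at this level means rewriting the rule set:
    for (std::size_t i = 0, n = rules.size(); i < n; ++i)
        if (rules[i].strength > 1.0)
            rules.push_back({rules[i].pattern + "*", rules[i].action, 0.1});
}

int main() {
    std::vector<Rule> rules{{"see-food", "approach", 1.0}};
    learn(rules, "see-food", 0.5);
    for (const Rule& r : rules)
        std::cout << r.pattern << " -> " << r.action
                  << " (" << r.strength << ")\n";
}

The point is only the division of labor: learn() is frozen, while the Rule
vector is not.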
Novamente version 1 will have type 1 self-modification. This is required
for general intelligence, in my view.
Novamente version 2 will have type 2 self-modification. This is, in the
Novamente world, free-ranging "schema modification".
Type 3 self-modification, in a digital context, is sort of like having the
system rewrite its own C++ source, or maybe even the C++ language or the OS
itself (the metaphor gets fuzzy).
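For flavor, here is a toy version of that picture (a deliberately crude
sketch; the source path, the g++ invocation, and the stopping rule are all
assumptions, and a real system would of course have to validate the new
code before trusting it):

// Toy sketch of type-3 self-modification: a program that rewrites its own
// C++ source, recompiles it, and replaces itself with the result. The file
// name "self.cpp" and the compiler command are assumptions. /*GENERATION*/
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>
#include <unistd.h>   // execl (POSIX)

int main(int argc, char**) {
    if (argc > 1) return 0;  // the rewritten child stops here (no endless loop)

    std::ifstream in("self.cpp");      // read our own source
    std::stringstream buf;
    buf << in.rdbuf();
    std::string src = buf.str();

    // A trivial textual tweak stands in for whatever deep rewriting a
    // real system would perform on its own code.
    std::string marker = "/*GENERATION*/";
    std::size_t pos = src.find(marker);
    if (pos != std::string::npos)
        src.insert(pos + marker.size(), " /*rewritten*/");

    std::ofstream("self.cpp") << src;  // write the modified source back

    // Recompile and hand control to the new binary.
    if (std::system("g++ -o self_next self.cpp") == 0)
        execl("./self_next", "self_next", "child",
              static_cast<char*>(nullptr));
    return 1;  // reached only if the compile or exec failed
}

Even this toy shows why the metaphor gets fuzzy: the "self" being modified
spans a source file, a compiler, and a running process all at once.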
Even without a mathematical formalization of these distinctions, I think
the qualitative nature of the distinctions should be at least *fairly*
clear.
-- Ben G
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Christian Szegedy
> Sent: Wednesday, February 27, 2002 9:38 AM
> To: sl4@sysopmind.com
> Subject: Re: Seed AI milestones (was: Microsoft aflare)
>
>
> Ben Goertzel wrote:
>
> >A) paths that begin with unintelligent self-modification
> >B) paths that begin with purposeful intelligent non-self-modifying
> >behavior
> >C) paths that begin with a mixture of self-modification and purposeful
> >intelligent behavior
> >
> >Eli and I, at this point, seem to share the intuition that B is the
> >right approach. I have been clear on this for a while, but Eli's recent
> >e-mail is the first time I've heard him clearly agree with me on this.
> >
> >Eugene, if your intuition is A, that's fine. In this case something like
> >Tierra (which demonstrates robust self-modification, not leading to
> >instant death) may be viewed as a step toward seed AI. However, the case
> >of Tierra is a mild counterargument to the A route, because its robust
> >self-modification seems to be inadequately generative -- i.e. like all
> >other Alife systems so far, it yields a certain amount of complexity and
> >then develops no further.
> >
> Let me just ask: what do you mean by self-modification in the first place?
>
> An implementation has many levels: if you have a single executable which
> operates on some data, then the source code is constant, but the data
> changes. If you change or recompile the executable, then the operating
> system is constant. If you recompile the operating system, then the
> instruction set of the processor remains constant. If you have an FPGA
> and restructure it, then the base structure of the FPGA is constant. If
> you reengineer your complete hardware, then the laws of physics stay
> constant. Of course, finer distinctions are also possible.
>
> I guess you argued that the machine-code representation of the
> executable won't change in the first phase of the development of AI. I
> agree, but I still think that an intelligent entity must have a high
> degree of flexibility. So the question is not really whether
> self-modification is needed, but what type of self-modification is
> needed. I also agree with Eugene that a somewhat intelligent AI must
> possess a high degree of freedom (room for improvement, ability to
> learn, whatever you call it) and robustness at the same time. Balancing
> these two will probably be an essential aspect of constructing an AI,
> and one of the most delicate tasks of all. (But I don't claim that it is
> the only task.)
> And this is, most probably, not a static issue: you can't say that some
> given balance is optimal for all AIs. It is imaginable that in the first
> phase you'll have to "add" more and more flexibility to your AI as ve
> gets increasingly intelligent. Perhaps you will have a lot of steps that
> increase ver freedom without allowing ver to modify ver machine-code
> executable. The last steps in increasing ver freedom, from a constant
> executable to complete self-reengineering within the laws of physics,
> will take much less time.
>
> It seems similar to the development of a child: at first he has no
> freedoms at all, because he would harm himself, but you can give him
> more and more freedoms as he learns to survive them.
>
> Of course this is merely philosophy. I just wanted to point out that we
> don't have possibilities A, B and C, but possibility a0.0, ...,
> possibility a0.2332, ..., possibility a1.0 (probably on a much
> higher-dimensional scale).
>
> Best regards, Christian
>