From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Tue Apr 29 2008 - 12:05:17 MDT
On Tue, Apr 29, 2008 at 11:05 AM, Stathis Papaioannou <stathisp@gmail.com>
wrote:
> The drug addict could reason thus:
> That's for a start, and if I find that my new
> programming leads me to behaviour that my higher order mind tells me
> is undesirable, I will adjust it accordingly.
What if, in the process, we modify our higher-order mind? It's not a trivial
thought: while I think there would be emergent similarities in all
intelligent life, there are two things to keep in mind: 1) these are
*emergent* similarities, still dependent on the overall architecture of the
organism; 2) the desire to stay alive, as an individual or as a species, is
with a very high degree of probability a genetic predisposition. If we start
modifying things, we may find that we don't particularly care about
continuation as an individual.
I guess, though, that this brings up a big question: is a brain (or a
computer) capable of understanding itself in its full complexity, or is the
intelligence that emerges from a system only capable of understanding a
subset of that system? (Think Gödel: a formal system powerful enough to
describe arithmetic cannot prove every truth about itself.) If the former, we
should have no problem modifying ourselves. If the latter, self-modification
by any intelligent system is potentially dangerous in the long term, and the
only 'stable' means by which intelligence can increase is either something
analogous to evolution or very, very slow incremental change. One question we
could ask to get at this: "is an intelligent system capable of perfect
simulation of itself?" I don't know if there's a really good answer. The only
solution I can think of, using the metaphor of current technology, is an OS
emulating a copy of itself within the OS. But then, of course, the emulated
OS is not itself emulating an OS, so the simulation is incomplete; if it
*were* complete, each copy would have to contain another copy, and you run
into an infinite regress.
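To make the regress concrete, here's a toy Python sketch (my own
illustration, nothing from the thread; the function name and framing are
made up). It treats a "perfect" self-simulation as a step function that must
also step the copy of the simulator it contains:

    def simulate(depth=0):
        # To be *perfect*, a simulation of the system must include the
        # simulator itself, so simulating one step of the system means
        # simulating the nested copy's step too: an unbounded regress.
        return 1 + simulate(depth + 1)

    try:
        simulate()
    except RecursionError:
        print("Perfect self-simulation never bottoms out;")
        print("each level demands another full level inside it.")

A real emulator escapes this only by cheating: the guest OS shares the
host's hardware instead of containing a full copy of it, which is exactly
why the emulated OS isn't emulating another OS in turn.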
Ross