From: Ben Goertzel (ben@goertzel.org)
Date: Tue Apr 30 2002 - 08:36:33 MDT
> Basically the message can be summarised into two sentences.
> 1) Self-modifying/evolving software where the goal can somehow be
> modified is damn tricky.
> 2) Your programs are not omniscient; they may accidentally modify
> themselves so that they no longer follow their goals.
>
>
> Will
1) Creating self-modifying, evolving software with a self-modifying goal
is tricky, but not THAT tricky. The hard part is putting intelligence in the
mix, and getting it to work where the goal directly or indirectly involves
increasing intelligence. I assume this is what you meant though ;)
2) I tend to agree with you on this, Will. I imagine that in any
intelligent self-modifying system there may be significant "goal drift",
similar to genetic drift in population biology. Not because we will
necessarily have an evolving population of digital minds (though we might),
but because in my view cognitive dynamics itself is highly evolutionary in
nature. Eliezer's view of cognition is a bit different, which partially
explains his different intuitive estimate of the probability of significant
goal drift.
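
A toy sketch of the intuition, nothing more (the vector representation of a
goal, the noise level, and the step count are all arbitrary illustrative
assumptions, not a model of any real AI design): treat the goal as a unit
vector, and let each self-modification cycle perturb it by small, unbiased
noise. Like genetic drift, no single step is directed anywhere, yet the
accumulated steps carry the goal steadily away from where it started.

    import math
    import random

    def drift(dimensions=10, steps=10000, noise=0.01, seed=0):
        rng = random.Random(seed)
        # Start from a random unit "goal" vector.
        original = [rng.gauss(0, 1) for _ in range(dimensions)]
        norm = math.sqrt(sum(x * x for x in original))
        original = [x / norm for x in original]
        goal = list(original)
        for step in range(1, steps + 1):
            # Each "self-modification" nudges the goal by small unbiased noise,
            # then renormalizes.
            goal = [g + rng.gauss(0, noise) for g in goal]
            norm = math.sqrt(sum(g * g for g in goal))
            goal = [g / norm for g in goal]
            if step % 2000 == 0:
                alignment = sum(a * b for a, b in zip(original, goal))
                print(f"step {step:6d}: cosine similarity to original goal = {alignment:.3f}")

    if __name__ == "__main__":
        drift()

Run it and the cosine similarity to the original goal decays toward zero,
even though no individual modification was "hostile" to the original goal.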
I also think that a transhuman AI is likely to transcend our human
categories of good/bad, friendly/unfriendly in ways we can't foresee now.
Much more profoundly than a human transcends a cat's notion of
good-cat/bad-cat, friendly-cat/unfriendly-cat...
-- Ben G