Eugene Leitl wrote:
>
> I agree this was a bit too verbose, but at least for me the debate was
> fruitful. The logic of it all drove me to adopt a position I formerly
> wouldn't have dreamed of holding. Strange, strange world.
Well, at least you're moving forward.
> No, I'm more interested in nanotechnology, specifically molecular
> circuitry.
Didn't you just say those labs should be nuked?
> Meanwhile, I would ask you and Eliezer to reevaluate your project,
> specifically to reassess whether what you're trying to build is indeed
> what you will wind up with, especially if you decide to use
> evolutionary algorithms as part of the seed technology.
Part of my point, when I asked whether you were willing to second-guess the
researchers, was that - whatever our relative knowledge now - my scenario
doesn't call for me to make judgements in these matters with zero experience.
By the time we're ready to go for the Big One, we'll have had the opportunity
to thoroughly observe the (current) behavior of goals in the AI, and see for
ourselves whether the AI tends to misinterpret or twist our suggestions.
If the AI exhibits a genuine spirit of friendliness (note the small 'f'); if
the AI successfully comes up with reasonable and friendly answers for
ambiguous and underspecified use cases; and if the AI goes through at least
one unaided change of personal philosophy or cognitive architecture while
preserving Friendliness - if it gets to the point where the AI is clearly more
altruistic than the programmers *and* smarter about what constitutes altruism
- then why not go for it?
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence