From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Mar 06 2002 - 03:51:00 MST
On Wed, 6 Mar 2002, Colin Hales wrote:
> This, I feel, is more likely how our AI progeny will treat us if we
> design them correctly (one of Banks' themes involves just an errant
> ship, I think, where the beastie is a little on the sociopath side of
> things). So many SF novels. They all blur into each other after a while.
> Poor little human me.

Co-evolution of strategies implies unpredictability. Hence the
requirements of docility and power are mutually exclusive. TANSTAAFL:
there's no power without a price.

It follows immediately that if you've built a powerful AI, you've
built a dangerous one.
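A toy picture of that first point, in Python: replicator dynamics for
rock-paper-scissors, about the simplest co-evolving strategy game there
is. Each strategy's population share grows when it beats the current
mix, and the mix never settles down; it keeps orbiting. This is only a
minimal sketch; the payoff matrix, initial mix, and step size are
arbitrary illustrative choices.

    # Replicator dynamics for zero-sum rock-paper-scissors.
    # A[i][j] is the payoff to strategy i playing against strategy j.
    A = [[ 0, -1,  1],
         [ 1,  0, -1],
         [-1,  1,  0]]

    x = [0.5, 0.3, 0.2]   # initial shares of rock, paper, scissors
    dt = 0.01             # Euler integration step

    for step in range(20000):
        # fitness of each strategy against the current population mix
        fit = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        avg = sum(x[i] * fit[i] for i in range(3))
        # shares grow or shrink with fitness relative to the average
        x = [x[i] + dt * x[i] * (fit[i] - avg) for i in range(3)]
        if step % 2500 == 0:
            print("t=%6.1f  mix = %.3f %.3f %.3f"
                  % (step * dt, x[0], x[1], x[2]))

Run it and the printed mix cycles indefinitely instead of converging:
against a co-adapting opponent there is no stable strategy to predict.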
-- Eugen* Leitl
______________________________________________________________
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3