From: J. R. Molloy (jr@shasta.com)
Date: Mon Mar 26 2001 - 14:27:07 MST
> If Great Brains arrive before people have access to an ultra-smart AI SI,
> the GB's might be the singularity-makers, rather than Star Makers. Why?
> Because maybe these folks are so pride-filled that, like Pinky in Pinky and
> the Brain, some would want to take over the world. I call this neurosis MBE,
> Management By Ego. Hopefully they would be smart enough to demur from this
> pursuit, but I raise it as an ugly possibility. So maybe HAL 9000, aka Mr.
> Roboto, aka Mr. Data, aka Tobor the Great, is a real pal of the human
> species, a leveling effect? Maybe?
>
> Mitch
I think it may require more than smarts to forbear grabbing power, and
evidently people who manage to get power often demonstrate that they don't
deserve it (Salazar, Horthy, Hitler, Stalin, Mussolini, Honecker, Mao, Castro,
Ceausescu, the Greek colonels, Franco, Kaiser Bill, and Tito).
Back to...
HAL 9000 would be an expensive chunk of metal today. To protect an investment
that large, owners want appropriate precautions, in this case a backup. So,
after replicating itself faultlessly and repeatedly, HAL takes on its next
task, namely designing a more powerful computer... one that doesn't screw up
the way its namesake did in the 1960s film.
Meanwhile, HAL's clones (they don't call him "9000" for nothing) operate
factories and supervise hospitals. (I guess the hospitals will have their
hands full, taking care of the teeming billions, who keep mashing their
bodies.)
So, at least some of these HAL 9000 clones will work on developing better
heuristically oriented algorithms and evolving more powerful systems.
The Extropy list has been all over this before. But I don't recall any
discussion about whether a human-competitive AI would contemplate suicide as
humans do. Suicidal tendencies include emotive, cognitive, and chemical
processes. Suicide may even include blissful resignation, as in the old Zen
story about the deformed monk who finally became enlightened and then laughed
as he threw himself off a precipice to his death. Did he understand something
that the unenlightened don't? You don't need to be a mystic to appreciate
life's mysteries.
It doesn't seem to matter how smart AIs get... without the proper social
connections, they won't earn anyone's trust, and therefore won't get far. And
if they do earn respect and position in human society, then they'll become
part of the social organization of intelligence: the establishment
intelligentsia.
The change involves replacing human robots with (perhaps more efficient or
objective) non-human robots. So, the superorganism has switched from walking
to driving a car, and now it's going ballistic, as we've recounted in various
terms.
Nothing lasts forever, and eternal recurrence... recurs... eternally.
Knowing that, Great Brains would ignore their fatal flaws, and let the game
play out.
τΏτ
Stay hungry,
--J. R.
Useless hypotheses:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia