Re: Human intelligences' motivation (Was: Superintelligences' motivation)

From: Max M (maxmcorp@inet.uni-c.dk)
Date: Wed Jan 29 1997 - 16:59:20 MST


----------
> From: N.BOSTROM@lse.ac.uk

> Thus, in order to predict the long term development of the most
> interesting aspects of the world, the most relevant considerations
> will be (1) the fundamental physical constraints; and (2) the
> higher-order desires of the agents that have the most power at the
> time when technologies become available for choosing our
> first-order desires.

Good posting, but there's another motivation issue, which I actually
think is the most dangerous thing about extropy, transhumanism, nanotech,
replicators etc. And that is human goals and motivations.
We don't need advanced AI or IA, just plain simple exponential growth, to
give extremists and minorities access to weapons of mass destruction.
Somehow we need to change the goals and motivations of these dangerous
minorities. It's a problem that is often passed over lightly when "we" talk
about our rosy future, the singularity etc. But it only takes one madman
with the recipe for Gray Goo to destroy the world.
Currently there are enough of them to go around. :-(

In a future where there's a risk that a techno elite holds the power,
there's a big chance of unsatisfied masses instead of minorities, with an
abundance of unsatisfied "losers" willing to press the button in the hope
of some kind of change.

We need to address this!

There's no worse enemy than an enemy with nothing to lose.

MAX M Rasmussen
New Media Director

Private: maxmcorp@inet.uni-c.dk
         http://inet.uni-c.dk/~maxmcorp

Work: maxm@novavision.dk
         http://www.novavision.dk/

This is my way cool signature message!!


