From: Anders Sandberg (asa@nada.kth.se)
Date: Mon Nov 24 1997 - 16:10:44 MST
Brent Allsop <allsop@swttools.fc.hp.com> writes:
> Needed for what? What is it that is "over" in 50 years? Is
> there some God or machine or something that will kill us because we
> are not "needed"? Why? This all seems bassed on completely backwards
> assumptions and absurdities. We are the ones that have need aren't
> we? Not the other way around or not something that needs us!?
>
> In a world of plenty there is plenty to fulfill all needs.
> There will be Gods and machines just waiting to do whatever it is each
> and every one of *US* needs, whatever we are and whatever we freely
> decide that we need.
Your vision sounds nice, but can it be guaranteed? We live in a world
where things do go wrong, where unexpected things happen despite
precautions and where not everything is done with positive
consequences in mind. Our development is linked with dangers, and
the future tends to turn out vastly different from our predictions.
I don't believe in the "robots peeling grapes for us" scenario, nor in
the "humans red queened and darwinnowed" scenario. But we had better make
sure the future turns out reasonably well (or better, unreasonably well) -
we have a responsibility for it. And that includes analysing possible
risks and taking steps to minimize or circumvent them.
There is a lot of unreasonable fear about SI, even among
transhumanists (we have had this debate often enough on this list),
and it is quite natural that people worry about the future of the
human species or its close descendants. While we can of course try to
develop AI and other autonomous technology that serves us (say by
being created already in love with humans or similar stuff), things
can go wrong and nasty systems could be built for a variety of
reasons. Even in the grape scenario it may turn out that the humans
turn into traditional pets for the posthuman powers, and that doesn't
appeal to our sense of dignity. So it is IMHO worthwhile to see if we
can make sure humans remain an important and useful part of the future
infra/ultra structure, whatever it is.
My personal suggestion is to concentrate on intelligence amplification
in various forms; it both has practical uses right away, and may be a
good way to bootstrap superintelligence that has humans and human
memetic complexes as integral parts. In the long run the human aspect
may become insignificant compared to the rest of the entity, but it is
still there after an exponential but continuous development. How IA is
implemented is another question; we already have plenty of tools:
hypertext, agents, cognitive psychology, nootropics and expert
systems; in the future we may add bionic interfaces and uploading -
there is a broad research front here with a lot of worthwhile stuff
both for the present and the uncertain future.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y