Thoughts, elite, future morality (was "ye are gods")

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Sep 29 2000 - 18:38:17 MDT


Emlyn wrote (a very good set of ideas I'm finally getting back to):
>
>Samantha wrote:
> > On average, as a point of fact, there is not a person on this list who
> > doesn't "know better" than the vast majority of the world's population
> > about a great many things. I am sorry if it breaks local taboos to
> > point this out. Intelligence is not distributed evenly.
>
> ...and most of those people probably know at least one thing better than
> every person on this list. Scott Adams says, in the Dilbert Principle, that
> people are idiots. Even the smart people are only smart at certain times and
> in limited ways, and are for the most part fools. Maybe that's a stupid
> reference; I must be an idiot.

If you insist. But I don't think you are an idiot. Perhaps a little
overly cynical, but not an idiot. I do not agree with the statement
that even smart people are only smart at certain times and in limited
ways. Generally, people who are smarter than average are, well, smarter
than average and on more than a few limited things. Also it depends on
how you use "smarter". People who actually work at being rational
and balanced and who have developed the habit of questioning and testing
even deeply cherished beliefs are "smarter" than those who have not and
do not. I would much rather have those people have a large vote on
important matters than folks who might believe more exactly what they
have been taught or spend every free minute drinking beer and watching
TV or the equivalent.

When it comes to technology-related decisions the choice is even
clearer. We cannot afford for people who know nothing of technology,
couldn't care less, or are even anti-technology to control technology and
thus our futures.

> >
> > Do you actually ask for broad consensus for your own work, for every
> > design and implementation decision? No? Is that only because what you
> > are doing doesn't affect many people or is it also because most people
> > wouldn't have any idea what they were talking about if they tried to
> > advise you on your work?
>
> No, I don't ask for consensus under such conditions. Mostly because it
> doesn't affect many people, at least not in any way that they don't have
> control over; like, if I build some or all of a business website, the
> algorithm for importing log files into a database doesn't really affect a
> whole lot of people. Possibly such choices might affect people through the
> time it will take to implement them; if so, I'll go talk about it.
>
> Eventually this work will affect some people, particularly at a critical
> point; when it comes time to make it part of the production system. The main
> people it affects will be the system owners, and the effects possibly are
> quite important. So in that case, I certainly do seek consensus.
>

Really? Even from people who do not have the skills necessary to
evaluate the decision? If so, how does that help anyone?

 
> >
> > Should the human race only be allowed to advance in steps that were all
> > approved by the broad consensus? Should a poll have been taken before
> > we allowed that the sun is the central body of the solar system rather
> > than earth? Oh, you say, we don't need to get a consensus for facts.
> > But then why do you need a consensus to bring major advances into play
> > that 98% of the world's people never will understand well and that the
> > majority are singularly unqualified to pass judgement upon?
> >
>
> You're right; the existence of this ignorant 98% is obviously anathema to
> the advance of humanity as a whole.
>

I didn't say that. I do say that letting the ignorant drive decisions
they have no means to understand is grossly wrong and tremendously
dangerous.
 
> > I ask these things to open conversation rather than to say "You are
> > wrong." I actually sympathize with some of your concern. But I don't
> > see how waiting for consensus is a sign of proper diligence or will
> > actually help humanity at all. Call it elitist if it makes you feel
> > better, but I believe recognizing the paucity of intelligence is simple
> > honesty.
> >
>
> People piss me off too. Still, we all live here (in the universe); it's good
> when we can get along.
>

And when we can't? Who decides?
 
> Obviously I don't subscribe to this idea that the alleged 98% bozo factor
> should be led by the natural leaders in the top 2% (amongst which I imagine
> you would count yourself). Possibly it's because I am worried that I'm more
> borderline, and might not make the cut! I wouldn't like that much, and I can
> empathise with others who feel the same way.
>

You can be quite bright and still not be a good leader. I am not
talking about a dictatorship of the elite or some such nonsense. I am
talking about not letting the uneducated and unqualified make decisions
just because they are there and we have some notion that everyone should
decide everything. I think it would be quite irresponsible to let this
notion determine our future. There isn't a cutoff on IQ or anything like
that. I brought this up because I thought I was hearing from you that
"everyone" should decide major technological questions. I am attempting
to point out that isn't necessarily a good idea even if you could
implement it.
 
> > I also believe that failing to acknowledge one's intelligence and
> > ability to help choose and produce the future can be a false modesty
> > that keeps one from being fully and responsibly engaged. Stepping
> > up and doing what you can, taking full responsibility, is required of a
> > great number of us if human beings are to have a viable, much less
> > joyously abundant, future.
> >
>
> I'm not sending us out to the fields to be farmers.
>

Good! Now why was that the response? (I'm not quite getting it.)
 
> >
> > > This is a time of unparalleled change, and will look like a walk in the
> > > park next to the times to come. It is a time for humility in our approach,
> > > and special concern for the other beings that inhabit the planet; as it
> > > becomes easier for the few to ignore the wishes of the many, it becomes no
> > > more tolerable to do so.
> >
> > It is precisely because of HUGE concern for the needs of all humanity
> > that many of us became scientists and technologists and it is out of
> > that concern that we dream large dreams and see to what extent they can
> > become reality. We would be irresponsible, having been gifted with or having
> > acquired such ability, if we did not use it.
>
> I'm not sending us out to the fields to be farmers.
>

Good...
 
> >
> > The cutting edge of any species is the edge. It is not the consensus
> > masses. Why condemn the edge for being the edge? It is there that
> > advance will happen that lifts the whole.
> >
>
> Too cool. I'm all for that. Count me in! I'm not sending us out to the
> fields to be farmers.
>

OK.
 
> >
> > >
> > > Also, we are playing with fire - well, actually fire is a baby's toy
> > > compared to the stuff we are messing with now. It's not a good time to get
> > > complacent and arrogant - "we are as gods, ha ha ha!". It's time to be more
> > > humble than ever, to be open-system, to take in information from our
> > > environment. It's been discussed on the list just how dangerous some of the
> > > coming technologies are (AI, nanotech, etc.), and if you go over the posts,
> > > you'll see that most of the danger is attributed to use of that technology
> > > by humans infected with the God meme. People who think that they know better
> > > than everyone else, who feel justified in producing externalities (like grey
> > > goo).
> > >
> >
> > Humility taken so far is for people who will deny their own strength AND
> > their huge responsibility. It is not a strategy that helps anyone.
> > Seeing the huge potentials for change is not something that makes me the
> > least bit complacent. It scares the heebie-jeebies out of me quite
> > often. But it is where the power and future of this species lie. Those
> > of us who are the forerunners, the intellectual scouts, the builders of
> > bridges between today and tomorrow, including those of us who
> > cross-check that we are keeping our wits about us, certainly cannot
> > afford to be complacent. But that doesn't mean we should stand aside or
> > be frightened to look and to attempt to chart a path forward that makes
> > the most sense and enables the best outcomes. After all, if we don't
> > make the attempt, can we expect the default to take care of us? If
> > not us, who?
>
> Humility is not fear. It's probably the opposite.
> >
> > Yes, we need as many voices and viewpoints as can be fruitfully
> > employed. Yes, we are talking about some of the most serious things
> > anyone has ever contemplated and our euphoria should be tempered with
> > quite a bit of sobriety.
> >
> > One thing that worries me is that we are quite good at coming up with
> > technology. We are not nearly so good at creating a unifying vision (or
> > sets of visions) that will more likely shape the use and unfolding of
> > the technology for the maximum good. If we don't create a vision or set
> > of positive visions to guide us then the technology will more than
> > likely greatly magnify all the good and bad tendencies in the world
> > today. I doubt that that is survivable.
> >
>
> On first reading, that sounded suspiciously like consensus seeking. But I
> think what you mean, is that someone needs to create a future vision for
> people to rally behind, so that we have some framework for moving into the
> future to full potential, whilst identifying potential problems and avoiding
> traps. I think that's probably the motivation for Transhumanism, in a
> nutshell.
>

Let's see if I can make clearer what I was trying to say. There is a
subtle difference between consensus seeking, which is relatively passive,
and consensus building, which takes pulling people's dreams and ideas out
and weaving them into a tapestry that a consensus can get
excited by and work for. It is the second that is most woefully needed
today. The vision does not come exclusively from on high, handed down to
the masses. It includes the major dreams and leitmotifs needed to enroll many
different kinds of people.
   
> I support this. I support transhumanism. I'd even call myself a
> transhumanist. "Emlyn, you're a transhumanist. Nyah nyah!"
>
> What I'm talking about, in all these ravings and ramblings about humility,
> is the essence of transhumanism. As a vision, it is optimistic, it's forward
> thinking, it's possibly individualistic. Also, it's about people, and
> humanity, even as it seeks to leave humanity behind.
>

AMEN. I agree 100% and I thank you for pointing it out.

> I'm not talking about "banning" technologies; who's going to decide what to
> ban? I am interested in respecting the rest of humanity when we make
> changes.
>

Agreed. But what does that respect consist of, and what does it not
consist of? That is an interesting (and difficult) question.

> What does that mean in concrete terms, applied to technology? It means that
> while scientific & technological progress is good, we must be careful about
> applications of technology.
>
> For instance, nanotech is a good idea, and ought to be developed. Being able
> to heal ourselves, repair ourselves, modify ourselves, through nanotech, is
> also good (excellent!). Being able to modify other people, well, that's not
> so good. Releasing self replicators into the general environment without
> safeguards; not so good. Modifying the environment that others inhabit; not
> good, without some mechanism for consensus. That doesn't mean that we'd
> better not develop the technology. It means that we have to develop and
> promote a morality to go with it, about respecting other people.
>

I agree that we need to develop the morality. Very, very much so. But
I am not sure in practice what respecting other people's rights includes
and does not include. Does it include the right to believe the world is
not as it is, to out and out deny reality and to force others to obey
decisions based on that denial and subsequent fantasies? For it should
go both ways. If we can't force our notions on them they should not be
able to force theirs on us.

A thought experiment. I am not seriously advocating this but I think it
bears thinking about. What if 98% of the world (or some sizeable
fraction of humanity) is seriously broken in some physical/psychological
way and the technology exists to fix this and is even free? What if
most of these folks, in their brokenness, refuse this technology? What
if their broken condition is actually doing serious injury to
themselves, to the environment, and to others? Do we have the right to
put a mickey in the water or nanobots that will perform the fix in the
environment? I would say no. But it is not a simple situation.

Here is another. We are at the Singularity or close enough that the
transhuman folks are radically different from and vastly more capable
than the "mere" humans. In most fields mere humans are simply unable
to compete. This is causing considerable friction. We transhumans have
more than adequate resources to take care of the mere humans. But many
of them hate and despise us and actively seek our destruction. Many
feel our very existence is such an affront to their sensibilities that
we cannot be tolerated. They are a danger to themselves and one
another, if not to us. Do we:

a) pop them into a VR tank where they can live in whatever fantasy they
wish for as long as they wish and change their minds whenever they want?

Voluntary or not?

Would we put them in involuntarily if that were the only way they could
survive without being forced by reality to change in ways that are
utterly repugnant to them?

b) ignore them and simply defend ourselves against assaults?

c) leave them on Earth and head off elsewhere?

 
> Also, AI is good; intelligence augmentation is fantastic. Creating a self
> modifying super intelligence is good. Turning it on, and plugging it into
> the world's computer systems, without asking everyone else if that'd be ok
> by them, that's not good. That's very naughty. That could be called
> sociopathic.
>

So, do you think the masses of people would ever agree to let this loose
under any circumstances? Or would the vast majority call for its
immediate destruction? To educate a fully self-learning AI of great
power requires huge amounts of input. If you don't get it from the Net
where will you get it? I agree it is questionable to plug it in. It is
also questionable to try to make the decision a case of majority vote.
And it is questionable not to plug it in at all.

> Working on GM food is good. Great! Designing crops to feed more people,
> excellent. Designing crops with terminator genes, dodgy, but still ok,
> maybe. Releasing self replicators into the general environment without
> safeguards, not so good.
>

Actually, GM food is absolutely essential. Correct me if I'm wrong, but
some of those terminator genes were actually necessary for the integrity
of certain types of GM itself. Some of the termination gene work was to
prevent the mutant seeds FROM being full self-replicators. Granted, some
of the motives behind that were more about greed.

On the other hand, self-replicators are created and released into the
environment by natural processes all the time. Many of them are quite
dangerous. We should definitely think long and hard about our own
designed ones, but I see no reason why self-replication as such should be
banned or regarded as automatically bad. Some types of problems and some goals
we all find pretty wonderful cannot be addressed without
self-replicating entities.
    
> I could be very wrong about the technologies I've outlined above. That's the
> point. I'm not about to appoint myself the arbiter of human morality for the
> 21st century (although apparently I do have an opinion). I'd appreciate it
> if others afforded the same respect to me.
>

People who are concerned and capable should make decisions in this
area. It is not going to be easy. Decisions must be made, though, that
pull in as many people as possible who can contribute. The future
will not be what we want simply by default and happenstance.

> As an aside, I think it's funny that transhumanism, which purports to being
> about moving away from humanity, is actually more about what it is,
> fundamentally, to be human, than any other belief system/philosophy/vision
> that I can think of. It's about us merging more fully with our tools,
> believing more strongly in our ability to reshape the universe, believing in
> ourselves rather than some unknowable greater force(s). What is more
> essentially human, than the intimate relationship we have with our tools,
> and our desire to reshape our environment using them? To lose that focus,
> and that ability, makes us less human; so to strengthen it, makes us more
> so. The posthumans of our vision, supposedly having left humanity behind,
> will be, paradoxically, maximally human. Possibly not organic, but 100%
> natural, certainly.
>

Very well said!

- samantha


