From: Samantha Atkins (samantha@objectent.com)
Date: Sun Mar 24 2002 - 13:44:59 MST
Robert J. Bradbury wrote:
> (I'm somewhat playing devil's advocate here...)
>
> On Sun, 24 Mar 2002, Amara Graps wrote:
>
>
>>The reason I don't like this ('adapt people to technology')
>>perspective is that it can easily skew the humans' value system to
>>one of: technology has a higher value than humans.
>>
>
> When you realize what an absolute kludge humans are (speaking
> from the standpoint of a programmer) this isn't very hard.
When you realize what an absolute kludge most computer programs
are (speaking from the standpoint of a world-class programmer
who happens to be human), it isn't very hard to see both in
perspective as having very serious flaws. :-)
>
>
>>I think that _humans_ are first and foremost and from here,
>>we seek means of survival (*and growth*) on this planet/outwards
>>working, as best we can, with the complex interaction between
>>the Earth's biology, sociology, chemistry, physics, politics,
>>
>
> Are we? If you accept that humans are sub-optimal, then doesn't
> growth [really more importantly -- evolution] become much more
> important than survival?
>
Neither individuals nor the species can evolve/grow/get better
if the individual or the species no longer survives. Any
question of "better" or "higher value" must answer "better for
whom/what". There is no intrinsic "better". Many believe
that some forms of technology will be much "better" in terms of
some things we consider of great value like intelligence. But
intelligence alone is not necessarily "better" nor is it yet
certain that we will see this type of higher intelligence any
time soon. Do you now rank humans in such a way as to consider
those of higher IQ better than others of lesser IQ? Would you do
so if the range were much wider? Or do you recognize that there
is a range of values involved and that there is also worth and
value in human beings qua human beings? If you do not, then the
great danger is eventually siding with those who someday may
call for removing the excessive and relatively useless (to them)
inferior semi-sentient biomass. Avoiding such a dystopia
requires getting our values and their underpinnings well sorted
out long before such a time.
>>I really think that we can't emphasize enough the value of
>>humans/transhumans. It is from this position of our high value, that
>>we will/can address the problems that technology can bring.
>>
>
> No argument. But as we have discussed many times we don't have
> a good solution for when "The needs of the gods outweigh the
> needs of the demi-gods, or the non-gods".
>
Perhaps there will often be such abundance, and so little
overlap of needs and interests, that the problem will rarely
come up. But there is a great need for recognition of the
rights of various types of sentient beings to exist and have
room to grow. Without this recognition there is only nature,
greatly empowered by technology, "red in tooth and claw". Is
this the best we can come up with?
> There was a good (repeat) episode about this on Stargate SG-1
> last night. A race of relatively primitive people (X) has been
> relocated to planet 'paradise'. Planet paradise is the only
> known planet with a Stargate that has the right atmospheric
> conditions for X to live. Alien spaceship shows up holding
> the remnants of civilization Y, a 10,000 year old civilization
> that has been wiped out. It starts terraforming planet paradise
> such that it can support the resurrection of civilization Y.
> Of course the terraforming resources on the ship can only be used once
> and planet paradise is the only planet out of millions examined
> capable of supporting Y. The terraforming process is completely
> deadly to people X and there isn't enough time to evacuate
> them even if they could find a suitable alternate planet.
>
> The dilemma for the Stargate team is whether to sacrifice
> people X or attempt to destroy the spaceship, rendering
> civilization Y permanently extinct. They discover an interesting
> solution to the problem, which I will not disclose here.
>
> I don't think these types of moral dilemmas are going to go
> away anytime soon. I think one has to come up with better
> arguments than saying 'a priori' the value of humans or
> even the rights of humans must be preserved indefinitely.
> Anyone who has read any science fiction can presumably
> come up with more than a few cases where that simply
> doesn't seem to be justifiable.
>
The rights of humans are simply the requirements of their
survival and well-being. If you value humans you recognize
these rights. They are not separable. I don't think the
question of "justification of human existence" even comes up
unless human existence at minimum prevents the existence of
something else that is somehow "better". That scenario is quite
questionable. Who judges "better", and who judges when the
conflict does not have a solution where all groups survive and
thrive? One thing is certain: if we can't find good reasons
why an "inferior" species must survive when it is in the way of a
"superior" one, then we certainly can't complain when a
supposedly superior species (natural or artificial) attempts to
destroy us.
Historically this sort of thinking has occurred between
different groups and "races" of humans. Do we think this type
of idea is more palatable and reasonable if the range of
difference between the groups of beings involved is
significantly larger than the relatively paltry difference
between groups of humans?
>
>>What values, ethics, politics enable that?
>>
>
> The same values and ethics (I'll leave politics out of the equation)
> that can rationally recognize when oneself (as an individual
> or a group or a species) is no longer relevant in the grand
> scheme of the universe.
>
Who the f**k says there is "a grand scheme to the universe"?
- samantha