Re: news spin on cryonics

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 13 2002 - 09:22:07 MDT


Harvey Newstrom wrote:
>
> I often wonder this myself. I know that we hope the masses will provide
> public opinion support and monetary support for our goals, but I don't
> really envision this happening. There seems to be a mindset that is
> very engrained in people to accept or not accept our ideas. I really
> don't know if people can be convinced if they aren't already.

Meaning no offense to you or anyone else on the list, I think that if
you are wondering how to persuade "the masses" then you are not likely
to do so. I am wondering how to show the rest of humanity what's really
going on so that they can help out, even though I accept that not all
people may choose to do so. That is part of what makes me a crusader.
I wouldn't necessarily call myself a competent one. But what you need,
at this point, is a competent crusader, and crusaders identify with
Everyone Else, what you're calling "the masses". (By nature - because
that's how we think - not because we're trying to be persuasive.)

> Existing scientists are trying to figure out how to make nanotubules
> longer than a few millimeters. We are trying to figure out how to finance
> a space mission to our nearest neighbors. We are trying to figure out
> what the next step in computing will be now that we've revamped the
> Pentium chip a dozen times without getting new designs acceptable to the
> public. We can't even get our laptops to stop crashing every day, yet
> we are often distracted on the "pressing" problems of how to keep an AI
> friendly, how to govern space colonies, and how society will react to
> duplicates.

I agree with all of this except the part about Friendly AI. If you
don't have your thoughts organized in advance on that one, you shouldn't
be in the business. I mean this very seriously. If you don't know what
you're going to do about that, you shouldn't proceed in the hope that
things will just sort themselves out for you. Of course, the people who
leave things up in the air like that are also likely to press ahead with
inadequately defined ideas of AI, thus making this less of a problem than
it might appear. Nonetheless, I say quite seriously that if you're
planning to go ahead in AI, you should know in advance what you plan to
do about Friendliness.

I don't see why there's any reason for despair that "existing
scientists" are doing neat stuff. Good for them. It decreases the
amount of stuff that we need to do. Humanity is moving forward; hooray.
I would like to see the last mile to the Singularity (whether in human
enhancement or AI creation) carried out by people who get the
Singularity and who get the moral responsibilities involved, but for
everything short of that last mile, I say more power to whoever's doing it.

There *is* a place among humanity for pure dreaming. And there's also a
place for translating it into action. But that doesn't mean the dreaming
can't go on.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
