From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 05 2000 - 12:31:06 MDT
Samantha Atkins wrote:
>
> gabriel C wrote:
> >
> > I do not believe that we as singularitarians have the option of being "of
> > two minds". It is wise to have a perspective on all things you encounter in
> > your life, but this is far beyond the pale of ordinary things. Plus, we
> > can't afford the time or divisive mindset this might bring about. If we
> > doubt the work in front of us, the Singularity might not be reached before
> > we as a race destroy ourselves.
>
> We need to second-guess ourselves quite a
> lot, I would think, about whether what we are doing is the right thing,
> about whether our methods lead to the right results, about our abilities,
> and about our reasons for being on this quest.
I also have to disagree, at least with Gabriel's phrasing. That what we do is
so incredibly important does not excuse us from critical thinking; rather, it
makes self-questioning mandatory. But only on the level of beliefs, not of
actions. We cannot afford to be certain, or to blunt our own perceptions, but
we cannot afford to be hesitant either. We certainly cannot afford to
ostentatiously pretend to be more uncertain than we really are, for the sake
of transhumanist correctness.
I find that it is quite possible to blend fanaticism and uncertainty - the
fact that the probability of an assertion is not 100% doesn't automatically
translate into lessened enthusiasm, so long as the particular course of
action is still rationally the best available.
> Another possible way is to condition, evangelize, and teach enough of
> humanity, at least in enough of the key positions, a different way of
> seeing life and what they are doing such that the threat of
> self-inflicted global catastrophe goes down.
This is improbable to the vanishing point. At least AI is an engineering
project. You're talking about bringing about a basic change in the nature of
all humanity, enough so that NOBODY starts a nanowar, and doing it without
ultratechnology and without violating ethics. I just don't see this
happening. It looks to me like if you started a massive project with a
hundred billion dollars' worth of funding, intended to educate humanity, you
might become a big media sensation but you would not accomplish enough to save
the world. Not even close.
> It just goes against my programming, it seems a bit like walking
> away from the human race to do the alchemistic thing of huddling
> together with some like-minded folks in front of our supercomputers
> attempting to give birth to a god.
Yes, this is pretty much what we're doing. Humanity can sort itself out
afterwards, after we're no longer tap-dancing across a minefield and we have
some decent tools.
The Universe would be a great place if there were a human way to solve this
human problem. There isn't, and we have to resort to AI. That's pretty much
all there is to it.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence