From: paul@i2.to
Date: Tue Aug 03 1999 - 14:00:20 MDT
On Tue, 03 August 1999, "Eliezer S. Yudkowsky" wrote:
> Let me get this straight. You're using the lousy behavior of humans as
> an argument *against* AI?
No, I'm using the lousy behavior and lack of ethics among many strong AI
researchers as an argument against AI. Now if Minsky wants to build
a goal system around somebody like Mahatma Gandhi, then I'll start
listening again.
> > The
> > assholes can barely behave themselves in the most controlled
> > social settings, and you want me to hedge my bets and my
> > life in the hands of this lot? Gimme a break!
>
> I think you are perhaps overgeneralizing just a tad. In fact, you've
> just insulted... let's see, Douglas Hofstadter, Douglas Lenat, and...
> well, actually that exhausts my store of people I'd challenge you to a
> duel over, but I'm still a touch offended.
You shouldn't be offended, because I'm only speaking of the people
I have personally met. And I have never met Hofstadter or Lenat.
> > Eli, you want to create an SI before nanotech destroys
> > everything. People like Den Otter, Max More and myself want
> > to IA ourselves to singularity before the SI's destroy us!
>
> Why are you including Max More on the list? I have him down as a
> benevolent-Powers neutral in the IA-vs-AI debate, unless he's changed
> his mind lately.
Uhm, he quite clearly stated in an earlier post that, regardless of the
benevolence or hostility of SI's, *we* had better deal with them
from a position of strength either way. I asked him to clarify what
this position of strength was, but he has not responded. Now unless
he knows of another trick up his sleeve, this position of strength must
come from our intelligence enhancement. Until he responds otherwise,
he is clearly arguing for, at minimum, IA == AI.
> As for you and den Otter... won't you feel like *such* idiots if the
> Powers would have been benevolent but you manage to kill yourself with
> grey goo instead?
No I *won't* feel like an idiot! See, that is where you and I differ in
the extreme. I actually believe in my own ability to make the right
decisions. The very reason I am extropian in the first place is a strong
belief in evolving my *own* mind to become a power. I have esteem,
self-confidence and faith in my own abilities. You appear to have little
belief in yourself or others, which is why you seem obsessed with
having an SI carry on the torch of evolution instead. There is no evidence
that destroying ourselves with goo is any more likely than SI's
destroying us with goo. Therefore, until proven otherwise,
I favor my own ethics and goals over the goals and ethics of anyone
else, human or SI.
> Anyway, a touch of IA and you guys will be following me into Externalist-land.
Huh?
> Actually, it's 2015 vs. 2020, although Drexler's been talking about
> 2012, so I'd figured on 2010 for an attempted target date and 2005 for
> the earliest possible target date.
>
> Am I less of a fool? Yes, I'm ten years less of a fool. I'm the
> absolute minimum of fool that I can possibly manage.
You have yet to prove any of this. These dates are
still conjecture. The only place we agree is that, unless there
are breakthroughs in unforeseen directions, the sequence of technological
possibilities is the same. Unlike many extropians, I don't believe
fervently in the dictum that if we can do something we should do
something - and that includes conscious AI's. If it is even remotely
possible to evolve our own intelligence at the same speed as an AI,
then we should do so. Bottom line, I know and trust my goals. They
exist today and are tangible. If you want a larger audience, I suggest you
start studying ethics! Until I see a strong ethical basis among AI researchers,
I will remain steadfastly opposed to them.
Paul Hughes