From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Aug 27 2001 - 14:47:29 MDT
Jimmy Wales wrote:
>
> Eliezer S. Yudkowsky wrote:
> >
> > Neither I nor Dani Eder will be able to offer experimental support for the
> > position "fame has independent hardware" or "fame is internally
> > represented as a subgoal" because neither neuroimaging nor neuropathology
> > has reached that level of granularity.
>
> Right. So it's interesting, but... :-)
My point is that it is perfectly possible to operate in the absence of
experimental support or definite proof. It is simply more dangerous. If
you're casting around for something that makes a good grant proposal and
is easy to defend in front of a committee, you can afford to stay in
territory that is mostly but not completely known. When I need to know
something, I need to know it right away, whether or not neurology has
caught up with me yet, and I can't just go investigate something else
instead.
In this case, whether fame is an independent drive is important mostly to
internal rationality and not to AI-related matters. But if it were
important to AI-related matters, and neurology hadn't come up with a
definite answer yet, then I would have to make my best guess based on
prior knowledge. There's nothing unscientific about that. The rules
don't say that you aren't allowed to guess, or that guesses are
worthless. The rules just say that guesses have to be based on what you
do know, and that the guess must yield to experimental evidence when the
evidence becomes available.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence