From: Olie Lamb (neomorphy@gmail.com)
Date: Sun Sep 10 2006 - 20:53:06 MDT
On 9/11/06, Robin Hanson <rhanson@gmu.edu> wrote:
>
> There is an obvious selection effect working here - those who see AI as
> being easier (than uploading) are more likely to see it as coming sooner,
> coming more suddenly, and all else equal being more powerful and important.
Borrowing a leaf from a renowned war-profiteer, I would put it this way:
With human-brain modelling, we have a fairly clear idea of what we don't
know.
Provided there aren't any big surprises, such as Penrose-quantum-magic
effects, modelling a human brain is simply a matter of mapping and
simulating. Difficult, but fairly straightforward.
With ground-up intelligence, we don't know how complex the "solution" needs
to be.
Some might suspect that there's a "silver bullet" for ground-up AGI, but we
can't say for sure what is required until we have one up and running. The
unknown unknowns might be tricky but relatively simple (in which case AGI
could arrive soon), or tricky _and_ relatively difficult.
Hence, any "guess" about when ground-up AGI might arrive is _necessarily_
predicated on a "guess" about how difficult it is. You can't correct for
that.
Analogy: a historical mathematician attempting to predict (A) when we will
know the first 1000 perfect numbers, versus (B) when someone will prove
Fermat's last theorem.
--Olie