From: Warrl kyree Tale'sedrin (warrl@mail.blarg.net)
Date: Sun May 03 1998 - 06:27:28 MDT
> From: Paul Hughes <planetp@aci.net>
> Dan Fabulich wrote:
>
> Moore's Law continues unabated, probabilistically yielding human-level AI
> within 20 years.
Moore's Law will *never* (pragmatically never) yield human-level AI.
It will probably yield the processing power necessary to support such
an AI; but no matter how many times the price/performance ratio of
microcircuitry doubles, that doubling produces no software.
I suppose that in theory if you left a sufficient number of computers
capable (hardware-wise) of human-level AI lying around long enough,
eventually one of them would be programmed into such an AI by the
random influence of radiation; but I don't consider this an
acceptably efficient programming technique.
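To put a rough number on the hardware side (a back-of-the-envelope sketch in
Python; the 18-month doubling period is an illustrative assumption on my part,
not anything claimed above):

  # Rough estimate of raw compute growth over 20 years of doubling.
  # Assumption (for illustration only): price/performance doubles every 18 months.
  months = 20 * 12
  doubling_period_months = 18
  doublings = months / doubling_period_months        # ~13.3 doublings
  multiplier = 2 ** doublings                         # ~10,000x
  print(f"{doublings:.1f} doublings -> about {multiplier:,.0f}x the compute per dollar")
  # Every bit of that increase is hardware; none of it writes the AI's software.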
> Because of scalability it is also reasonable to expect
> human-level AI before circuit density reaches levels equivalent to the human
> brain. It's also reasonable that transhumans will gain access to substantial
> improvements in neuro-enhancement and neuro-interface technology over the same
> time period.
>
> Like cells in a multi-cellular organism, the internet is already allowing
> multiple groups of humans to coordinate across geographic boundaries at a level
> of coherence and complexity never before possible. As networks, interface
> software, virtual worlds and bandwidth improve, this trend can only continue.
> When you add in the slow but steady improvement of neuro-enhancement
> technologies, forthcoming 3rd and 4th generation smart drugs, wearable
> computers, implantable interface technologies, personality software
> agents/avatars, the human becomes transhuman on a scale never before
> achievable. More importantly, these transhumans will be able to coordinate and
> collectively act in multi-faceted spontaneous networks mimicking a collective,
> synergistic intelligence much greater than any individual transhuman.
>
> As this trend continues, computer intelligence will be continually increasing.
> Up until human-level AI is achieved, there is no reason why transhumans
> cannot integrate these quasi-sentient AI's into their own intelligence
> networks.
>
> At some point human-level AI's are built. Let's assume that they immediately
> organize themselves around the sole purpose of taking over the world. At first
> they will be small in number. Certainly not near the number of their
> human/transhuman counterparts also attempting to rule the world. Their goal
> of course will be two-fold: to increase their own intelligence and to create
> as many copies of themselves as possible. But to increase their own
> intelligence they will need to do more than simply re-write their software.
> They will also have to improve their hardware substrate.
>
> At some point both transhumans and Hyper-AI's will have to utilize
> nanotechnology in their evolution towards greater complexity and intelligence.
>
> The question is who will reach which phases, and when? And will the combined
> forces of networked, enhanced transhumans be able to maintain a greater degree
> of collective intelligence over networked AI's until uploading is reached?
>
> I think the answer to this question is far from settled.
>
> Assuming transhumans can become post-human in similar nanotech substrates at or
> before super-AI's do, the war between Hyper-AI's and Transhumans becomes moot,
> because at that point they will be us and we will be them - we will be made of
> the same underlying nanotechnology.
>
> Comments, critiques?
>
>
> Paul Hughes
> planetp@aci.net
> http://www.aci.net/planetp
>
US$500 fee for receipt of unsolicited commercial email. USC 47.5.II.227