From: Paul Hughes (planetp@aci.net)
Date: Sun May 03 1998 - 03:30:56 MDT
Dan Fabulich wrote:
> You said: "I also think that the question of whether humans should give
> rights to machines is moot, the question of whether machines will give
> rights to humans is not." Why do you think the question of whether
> machines will give rights to humans is an interesting question, if we can't
> stop the creation of hyper-intelligent robots, and stand no chance of
> controlling them once we do? If they do decide to give us rights, then
> they will. If they don't, then they won't. We have no say in the matter
> and not enough intelligence to predict what a hyper-intelligent being would
> actually do. So why ISN'T this question moot?
I think the underlying assumptions of this argument are suspect. The whole
issue hinges on the notion of hyper-AIs having greater intelligence (and
political?) leverage than transhumans. I think this conclusion is premature:
it rests on an ambiguous dichotomy and ignores several competing trends.
Moore's Law continues unabated, making human-level AI within 20 years a
reasonable probabilistic bet. Because computing power can be scaled across many
machines, it is also reasonable to expect human-level AI before circuit density
reaches levels equivalent to the human brain. It's equally reasonable to expect
that transhumans will gain access to substantial improvements in
neuro-enhancement and neuro-interface technology over the same time period.
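As a rough sanity check on that extrapolation, here is a back-of-the-envelope
sketch in Python. Every figure in it is an assumption chosen for illustration,
not a measurement: an 18-month doubling time, roughly 10^9 ops/sec for a 1998
desktop machine, and Moravec's oft-cited ~10^14 ops/sec estimate for the raw
computing throughput of the human brain.

    DOUBLING_TIME_YEARS = 1.5   # assumed Moore's Law doubling period
    OPS_1998 = 1e9              # assumed throughput of a 1998 desktop (ops/sec)
    BRAIN_OPS = 1e14            # assumed human-brain throughput (Moravec's estimate)

    ops, year = OPS_1998, 1998.0
    while ops < BRAIN_OPS:      # double until raw compute matches the brain
        ops *= 2
        year += DOUBLING_TIME_YEARS

    doublings = round((year - 1998.0) / DOUBLING_TIME_YEARS)
    print(f"Raw brain-equivalent compute around {year:.0f} "
          f"({doublings} doublings, ~{ops:.1e} ops/sec)")

Under these assumed figures, raw brain-equivalent hardware arrives about 17
doublings out, in the mid-2020s; the scalability point above is precisely that
better software and parallelism could deliver human-level AI well before that
raw-hardware crossover.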
Like cells in a multi-cellular organism, the internet is already allowing
multiple groups of humans to coordinate across geographic boundaries at a level
of coherence and complexity never before possible. As networks, interface
software, virtual worlds and bandwidth improve, this trend can only continue.
When you add in the slow but steady improvement of neuro-enhancement
technologies, forthcoming 3rd- and 4th-generation smart drugs, wearable
computers, implantable interface technologies, and personality software
agents/avatars, the human becomes transhuman on a scale never before
achievable. More importantly, these transhumans will be able to coordinate and
act collectively in multi-faceted, spontaneous networks, forming a collective,
synergistic intelligence much greater than that of any individual transhuman.
As this trend continues, computer intelligence will be continually increasing.
Until human-level AI is achieved, there is no reason why transhumans
cannot integrate these quasi-sentient AIs into their own intelligence
networks.
At some point, human-level AIs are built. Let's assume that they immediately
organize themselves around the sole purpose of taking over the world. At first
they will be small in number, certainly nowhere near the number of their
human/transhuman counterparts also attempting to rule the world. Their goal,
of course, will be two-fold: to increase their own intelligence and to create
as many copies of themselves as possible. But to increase their own
intelligence they will need to do more than simply re-write their software.
They will also have to improve their hardware substrate.
At some point both transhumans and hyper-AIs will have to utilize
nanotechnology in their evolution towards greater complexity and intelligence.
The question is: who will reach which phase, and when? And will the combined
forces of networked, enhanced transhumans be able to maintain a greater degree
of collective intelligence than networked AIs until uploading is reached?
I think this question is far from settled.
Assuming transhumans can become post-human in similar nanotech substrates at or
before super-AIs do, the war between hyper-AIs and transhumans becomes moot,
because at that point they will be us and we will be them - we will be made of
the same underlying nanotechnology.
Comments, critiques?
Paul Hughes
planetp@aci.net
http://www.aci.net/planetp