From: Brian Atkins (brian@posthuman.com)
Date: Sun May 18 2003 - 18:40:16 MDT
Bill Hibbard wrote:
> On Sat, 17 May 2003, Brian Atkins wrote:
>>Bill Hibbard wrote:
>>>The danger of outlaws will increase as the technology for
>>>intelligent artifacts becomes easier. But as time passes we
>>>will also have safe AIs to help detect and
>>>inspect other AIs.
>>>
>>
>>Even in fictional books such as Neuromancer, we see that such Turing
>>Police do not function well enough to stop a determined superior
>>intelligence. Realistically, such a police force will have any real
>>chance of success only if we have a very transparent society... it
>>would require societal changes on a very grand scale, and not just in
>>one country. It all seems rather unlikely... I think we need to focus on
>>solutions that have a chance at actual implementation.
>
>
> I never said that safe AI is a sure thing. It will require
> a broad political movement that is successful in electoral
> politics. It will require whatever commitment and resources
> are needed to regulate AIs. It will require the patience to
> not rush.
Bill, I'll just come out and state my opinion that what you are
describing is a pipe dream. I see no way that the things you speak of
have any chance of happening within the next few decades. Governments
won't even spend money on properly tracking potential asteroid threats,
yet you honestly believe they will commit the VAST amounts of
political willpower and real-world resources required to
implement an AI detection and inspection system that has even a low
percentage shot at actually accomplishing anything?
And that is not even getting into the fact that, by your design, the "good
AIs" will be crippled: allowed only very slow intelligence/power
increases under a massive, stifling, human-speed
design/inspection/control regime... they will have zero chance to
scale or keep up as computing power spreads further and enables vastly more
powerful uncontrolled UFAIs to begin popping up. The result is seemingly
a virtual guarantee that eventually a UFAI will get out of control (as
you state, your plan is not a "sure thing") and easily "win" over the
other regulated AIs in existence. So what does it accomplish in the end,
other than eliminating any chance that a "regulated AI" could "win"?
Finally, how does your human-centric regulation and design system cope
with AIs that need to grow smarter than humans? Are you proposing
to keep them limited indefinitely to human-level intelligence,
or will the "trusted" AIs themselves eventually take over the process of
writing design specs and inspecting each other?
>
> By pointing out all these difficulties you are helping
> me make my case about the flaws in the SIAI friendliness
> analysis, which simply dismisses the importance of
> politics and regulation in eliminating unsafe AI.
>
This is a rather nonsensical mantra... everyone is pointing out the
obvious flaws in your system; this does not support your idea that politics
and regulation are important pieces of the solution to this problem.
Tip: drop the mantras, and actually come up with some plausible answers
to the objections being raised.
SIAI's analysis, as Eliezer has already explained, is not attempting
to completely eliminate the possibility of UFAI. As he said, we
don't expect to have any control over someone who sets out to
deliberately construct such a UFAI, and we admit this reality rather
than concoct world-spanning pipe dreams.
P.S. You completely missed my point about nanotech... I was suggesting
that a smart enough UFAI could secretly develop working nanotech long
before humans have even figured out how to do such things. There would
be no human nanotech defense system. Or, even if you believe that the
sequence of technology development will give humans molecular nanotech
before AI, my point still stands: a smart enough UFAI will ALWAYS be
able to do something that we have not prepared for. The only way to
defend against a malevolent superior intelligence in the wild is to be
(or have working for you) an even more superior intelligence yourself.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/