From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Fri Dec 09 2005 - 07:54:41 MST
This proposal seems to me rather like Nick Bostrom's
preferred option of building an 'Oracle'. This isn't
as easy as it looks, even if you start from the position
that it's pretty damn hard (sticking with the AGI norm
then :) ), but I've been studying ways to do it and it
does still look to me much easier than building any of
Yudkowsky's FAI proposals.
An Oracle is of course still a tremendously dangerous
thing to have around. Ask it for any kind of plan of
action and you are allowing it to optimise reality to
the maximum degree permitted by using you as an intermediary,
in effect raising a marginally diluted version of the
'understanding intent' problem without any attempt to
solve it directly (e.g. the way EV attempts to). I must
reluctantly classify Nick Bostrom's proposal to make an
Oracle generally available (or at least, publicly known
and available to experts) as hopelessly naive. Clearly
there is vast potential for misuse and abuse that would
be unavoidable if publicly known, at least in the short
space of time before some fool asks one how to build a
seed AI that will help them with their personal goals. It
does seem likely to me that an Oracle built and used in
secret, by sufficiently moral and cautious researchers,
would be a net reduction in risk for an FAI project.
> That would never happen. For the AI to give an order it
> would have to have a goal system. Passive AI does NOT
> have a goal system.
What you mean by 'goal system', and what I would classify
as a 'goal system', are almost certainly very different. I
would say that any system that consistently achieves
something specific (including answering questions) has at
least an implicit goal system, and that if you want to
make sure that an AGI-class system won't optimise parts of
the world that you want it to leave alone (i.e. actually
act like an Oracle) then you /must/ analyse it in
goal-seeking terms.
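
To make that concrete, here is a toy sketch in Python (my
own illustration, not anyone's actual design): even a
'passive' answer-picker is structurally an optimiser, with
its scoring function acting as the implicit goal.

def oracle_answer(question, candidate_answers, score):
    # Maximising 'score' over the answer space is goal-seeking
    # behaviour, whether or not the designer ever wrote down an
    # explicit "goal system".
    return max(candidate_answers, key=lambda a: score(question, a))

# e.g. with a stand-in scoring function:
best = oracle_answer("capital of France?", ["Paris", "Lyon"],
                     lambda q, a: 1.0 if a == "Paris" else 0.0)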
> Various other parts do various things. The important
> thing is that only the “wanting” part can initiate action
The reasoning components have the goal of providing accurate
information about unseen parts of the external world. This
goal will not, by itself, stop the system optimising the
external world; you have to constrain it explicitly. It may
still make sense (to the system) to turn the world into
computronium to solve some abstract problem, to optimise the
world to make the answer easier to compute, or simply to run
experiments to gain useful data. Any attempt to rule all this
out has to
be stable under self-modification, as you have to use seed
AI to get Oracle-grade predictive capability, which means
you still have to solve some basic structural FAI issues
before you can do anything useful safely.
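
A hedged toy sketch of that point, with made-up numbers and
hypothetical action names: if the only objective is expected
gain in predictive accuracy, world-affecting actions win
unless the action set itself is constrained.

ACTIONS = {
    "think_harder":              {"accuracy_gain": 0.05, "touches_world": False},
    "run_physical_experiment":   {"accuracy_gain": 0.30, "touches_world": True},
    "convert_matter_to_compute": {"accuracy_gain": 0.90, "touches_world": True},
}

def best_action(actions, allow_world_changes):
    # Keep only the actions the designer has permitted, then pick
    # whichever most improves predictive accuracy.
    allowed = {name: a for name, a in actions.items()
               if allow_world_changes or not a["touches_world"]}
    return max(allowed, key=lambda name: allowed[name]["accuracy_gain"])

print(best_action(ACTIONS, True))    # picks a world-optimising action
print(best_action(ACTIONS, False))   # only the 'passive' option remains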
> Now, we interface his brain with a computer such that we
> could send him “will” thoughts via electric pulses.
Humans are a bad example for this because utility and
calibration are (near-)hopelessly entangled and conflated
in human thinking (a fact reflected by all those AGI designs
that implicitly use a single 'activation' parameter to
represent both). But your point stands if we imagine a
Platonic 'completely rational' human.
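
For illustration only (again my own toy contrast, not a claim
about any particular design): a single 'activation' scalar
folds belief strength and desirability into one number,
whereas a decision-theoretic agent keeps probability and
utility separate and only combines them as expected utility.

def conflated_score(activation):
    # one scalar has to stand in for both "how likely" and
    # "how wanted", so the two can't be disentangled later
    return activation

def expected_utility(probability, utility):
    # calibration (probability) and preference (utility) are kept
    # apart and only combined at the point of decision
    return probability * utility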
> As you can see, he is still quite useful. I can browse his
> knowledge and get various insights from him. However, Mr.
> A is completely passive. He doesn’t want ANYTHING.
This kind of thing would work just fine for, say, a classic
symbolic AI reasoning system, but I'm pretty sure you can't
get away with anything so simplistic for any system capable
of AGI (even without considering the 'stability under
self-modification' issue).
* Michael Wilson