From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Fri May 23 2003 - 10:16:44 MDT
On Tue, 20 May 2003, Brian Atkins wrote:
> . . .
> > If humans can design AIs smarter than humans, then humans
> > can regulate AIs smarter than humans.
>
> Just because a human can design some seed AI code that grows into a SI
> does not imply that humans or human-level AIs can successfully
> "regulate" grown SIs.
The regulation is not intended to trace the thoughts
and development of the SI. The inspection is of the
design, not the changing contents of its mind. If its
initial reinforcement values are for human happiness,
and its simulation and reinforcement learning
algorithms are accurate, then we can trust the way it
will develop. In an earlier email I made the analogy
to game playing programs. If their game simulation
and learning algorithms are accurate and efficient,
and their reinforcement learning values are for winning
the game, then although the details of their play are
not predictable, the fact that they will play to win
is predictable.
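
To make that analogy concrete, here is a small sketch (purely
illustrative, using a hypothetical game interface, not anyone's
actual design) of what an inspector would actually read: the
reinforcement values and the learning update are a few fixed
lines, while the learned policy (the Q table) is the large,
changing part whose detailed moves nobody can predict.

  import random
  from collections import defaultdict

  # Sketch only. The "design" an inspector reviews is this small:
  # the reinforcement values and the update rule. The learned
  # policy (the Q table) is the changing content of the program's
  # "mind" and is not what gets inspected.

  WIN, LOSS, DRAW = 1.0, -1.0, 0.0   # reinforcement values: winning the game

  def q_learning_episode(env, Q, alpha=0.1, gamma=0.99, epsilon=0.1):
      """Play one episode in `env` (a hypothetical game interface
      with reset(), legal_moves(s), and step(s, a) -> (s2, outcome,
      done)), updating the action-value table Q in place."""
      s = env.reset()
      done = False
      while not done:
          moves = env.legal_moves(s)
          # epsilon-greedy: mostly exploit the learned policy, sometimes explore
          if random.random() < epsilon:
              a = random.choice(moves)
          else:
              a = max(moves, key=lambda m: Q[(s, m)])
          s2, outcome, done = env.step(s, a)
          # the only reinforcement signal is the game outcome
          r = {"win": WIN, "loss": LOSS, "draw": DRAW}.get(outcome, 0.0)
          if done:
              future = 0.0
          else:
              next_moves = env.legal_moves(s2)
              future = max(Q[(s2, m)] for m in next_moves) if next_moves else 0.0
          Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
          s = s2

  Q = defaultdict(float)   # the learned policy: large, changing, not inspected move by move

Trusting such a program does not require tracing the contents
of Q; it requires checking that the update rule is sound and
that the only reinforcement values are for winning.
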
The first SI will be designed and educated by humans.
Humans will be able to understand and regulate its
design, and regulate how it is educated. This will
create trusted safe SIs. They can then design and
regulate improved SIs, with one independently
designed SI inspecting the designs of another.
> > It is not necessary
> > to trace an AI's thoughts in detail, just to understand
> > the mechanisms of its thoughts. Furthermore, once trusted
> > AIs are available, they can take over the details of
> > design and regulation. I would trust an AI with
> > reinforcement values for human happiness more than I
> > would trust any individual human.
> >
> > This is a bit like the experience of people who write
> > game playing programs that they cannot beat. All the
> > programmer needs to know is that the logic for
> > simulating the game and for reinforcement learning are
> > accurate and efficient, and that the reinforcement
> > values are for winning the game
> >
> > You say "by your design the 'good AIs' will be crippled
> > by only allowing them very slow intelligence/power
> > increases due to the massive stifling human-speed". But
> > once we have trusted AIs, they can take over the details
> > of designing and regulating other AIs.
>
> Well perhaps I misunderstood you on this point. So it's perfectly ok
> with you if the very first "trusted AI" turns around and says: "Ok, I
> have determined that in order to best fulfill my goal system I need to
> build a large nanocomputing system over the next two weeks, and then
> proceed to thoroughly redesign myself to boost my intelligence 1000000x
> by next month. And then, I plan to take over root access to all the nuke
> control systems on the planet, construct a fully robotic nanotech
> research lab, and spawn off about a million copies of myself."? If
> you're ok with that (or whatever it outputs), then I can withdraw my
> quote above. I fully agree with you that letting a properly designed and
> tested FAI do what it needs to do, as fast as it wants to do it, is the
> safest and most rational course of action.
For me, a trusted safe AI is one whose reinforcement
values are for human happiness. The behavior you describe
would make people unhappy, and therefore would not be
learned. The point of using human happiness as a
reinforcement value is that it keeps humans "in the loop" of
the AI's thinking, no matter how intelligent it becomes.
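
As a rough illustration of what I mean by "in the loop" (a
sketch under my own assumptions; observe_human_responses is a
hypothetical measurement function, not a real interface), the
agent's only source of reward is observed human reaction, so
every outcome it evaluates is scored by its effect on people:

  # Illustrative only: a reward channel whose sole input is
  # observed human well-being. observe_human_responses stands in
  # for some hypothetical way of measuring human reactions.

  def happiness_reward(observe_human_responses):
      """Build a reward function that scores an outcome only by
      reported human happiness, keeping people in the loop of
      every evaluation the learner makes."""
      def reward(outcome):
          responses = observe_human_responses(outcome)  # e.g. values in [-1, 1]
          if not responses:
              return 0.0   # no humans consulted means no reward to gain
          return sum(responses) / len(responses)
      return reward

On such a design, the scenario you describe scores badly for
exactly the reason it alarms people: the only thing the value
function ever sees is human reaction.
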
> Now you also still haven't answered to my satisfaction my objections
> that the system will never get built due to multiple political, cost,
> and feasibility issues.
I'll grant that the process will be very complex and
politically messy. There will certainly be a strong urge
to build AI, because of the promise of wealth without work.
But when machines start surprising people with their
intelligence, the public will be reminded of the fears raised
by science fiction books and movies. Once the public is
excited, the politicians will get excited and turn to
experts (it is encouraging that Ray Kurzweil has already
testified before Congress about machine intelligence).
There will be conflicting opinions among the experts.
Among the public there will also be conflicting opinions,
as well as lots of crazy opinions. This will all create a
very raucous political situation, a good example of the
old line that it's not pretty to watch baloney and
legislation being made. Nevertheless, in the end it is
this public and democratic political process that we
should all trust best (if we've learned the lessons of
history).
I don't see cost as a show-stopper. The world is pouring
huge resources into advancing technology. Regulation will
have its costs, but I don't see them making the whole
project infeasible. Embedding one inspector per designer
would roughly double costs; nine inspectors per designer
(that's probably too many) would multiply costs by ten.
Neither figure rules the project out. The singularity
is one project where we don't want to cut corners for cost.
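
The arithmetic behind those figures is simple; here is the
one-line version (assuming, purely for illustration, that an
inspector costs about as much as the designer they shadow):

  def cost_multiplier(inspectors_per_designer):
      # total cost relative to an unregulated project, assuming
      # an inspector costs roughly as much as one designer
      return 1 + inspectors_per_designer

  print(cost_multiplier(1))   # 2  -> one inspector per designer doubles cost
  print(cost_multiplier(9))   # 10 -> nine per designer multiplies cost by ten
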
> . . .
> > Powerful people and institutions will try to manipulate
> > the singularity to preserve and enhance their interests.
> > Any strategy for safe AI must try to counter this threat.
> >
>
> Certainly, and we argue the best way is to speed up the progress of the
> well-meaning projects in order to win that race.
>
> Your plan seems to want to slow down the well-meaning projects, because
> out of all AGI projects they are the most likely to willingly go along
> with such forms of regulation. This looks to many of us here as if you
> are going out of your way to help the "powerful people and institutions"
> get a better shot at winning this race. Such people and institutions are
> the ones who have demonstrated time and time again throughout history
> that they will go through loopholes, work around the regulatory bodies,
> and generally use whatever means needed in order to advance their goals.
> Again, to most of us, it just looks like pure naivete on your part.
The key word here is "well-meaning". Who determines that?
I only trust the public to determine that, via a
democratically elected government.
The other problem is thinking that you can help a
"well-meaning" project win the race. Without the force
of law to deter them, there are going to be some *very*
well financed projects developing unsafe AI.
For all the details that need to be worked out in the
approach of regulation by democratic government, it is
still far better than trusting the "well-meaning"
intentions of some particular project, and trusting
that it will win the race to develop AI first.
The "naivete" is thinking that the wealthy and
powerful won't understand that super-intelligence
will have the power to rule the world, or that they
won't try to get control over it, or that the folks
in the SIAI are so smart that they will overcome a
million to one disparity in resources. The only hope
is to get the public on our side.
> . . .
> Those weren't the point. The reason I brought up the
> UFAI-invents-nanotech possibility is that you didn't seem to be
> considering such unconventional/undetectable threats when you said:
>
> "But for an unsafe AI to pose a real
> threat it must have power in the world, meaning either control
> over significant weapons (including things like 767s), or access
> to significant numbers of humans. But having such power in the
> world will make the AI detectable, so that it can be inspected
> to determine whether it conforms to safety regulations."
>
> When I brought up the idea that UFAIs could develop threats that were
> undetectable/unstoppable, thereby rendering your detection plan
> unrealistic, you appeared to miss the point because you did not respond
> to my objection. Instead you seemed on one hand to say that "it is far
> from a sure thing" and on the other hand that apparently you are quite
> sure that humans will already have detection networks built for any type
> of threat an UFAI can dream up (highly unlikely IMO). Neither are good
> answers to how your plan deals with possibly undetectable UFAI threats.
I never said I was "quite sure that humans will already have
detection networks built for any type of threat an UFAI can
dream up". I admit the words you quoted by me are more
optimistic than I really intended. What I really should say
is that democratic government, for all its faults, has the
best track record of protecting general human interests. So
it is the democratic political process that I trust to cope
with the dangers of the singularity.
> > The way to counter the threat of micro-organisms has been
> > detection networks, isolation of affected people and
> > regions, and urgent efforts to analyze the organisms and
> > find counter measures. There are also efforts to monitor
> > the humans with the knowledge to create new micro-organisms.
> > These measures all have the force of law and the resources
> > of government behind them. Similar measures will apply to
> > the threat of nanotech. When safe AIs are available, they
> > will certainly be enlisted to help. With such huge threats
> > as nanotech the pipe dream is to think that they can be
> > countered without the force of law and the resources of
> > government. Or to think that government won't get involved.
>
> Oh, I don't disagree that some form of "government" will be required, I
> just think it will be a post-Singularity form of governance that will
> have no relation to our current system.
I agree that the singularity will cause a complete change
in the form of government. I see AIs with values for human
happiness, that keep humans "in the loop" of their thought
processes, as the natural evolution of democracy.
But there will still be traditional government as the
singularity starts to develop, and that will be the
critical time for determining what kind of singularity
we get.
> At any rate, I believe you will grant me my point that "safe AIs" can
> only defend us if they stay ahead of any possible UFAI's intelligence level.
Sure. I see the force of law as a way to slow down those
who want to develop unsafe AIs, and the resources of
government as a way to help the development of safe AI.
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@demedici.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html