From: Michael Anissimov (michaelanissimov@gmail.com)
Date: Thu Aug 10 2006 - 15:41:55 MDT
Jef,
> The concept of "the human utility function" is increasingly invalid as
> individual human agents and their environment become increasingly
> complex. While we can usefully refer to certain "universal human
> values", such as the strong desire to protect and advance one's
> offspring, even those are contingent. More fundamental principles of
> synergetic cooperation and growth provide a more effective and
> persistent basis for future "moral" decision-making.
The phrase "human utility function" simply implies, in a broad sense,
that our desires are not random. Anything that isn't purely random
has some kind of statistical structure, as our utility function does.
It doesn't mean that we're trying to reduce all human desires to a set
of equations and hand them to an AI that monolithically enforces them
for all eternity. Contingencies, synergetic cooperation, growth,
etc., are all forms of moral information content that we want to pass
on to our species-children (AI). We have to pass *something* on, and
we have to pass it on in the form of algorithms, although these
algorithms can be very open-ended and evolve over time.
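To make the sense in which I'm using the term concrete, here is a minimal sketch (toy Python; every name and weight is hypothetical, purely for illustration) of a utility function as an algorithm - structured rather than random, and revisable over time rather than frozen:

    # Toy sketch (hypothetical names and weights) of what I mean by a
    # "utility function": a structured, non-random scoring of outcomes,
    # expressed as an algorithm that can itself be revised over time.

    class EvolvingUtility:
        def __init__(self, weights):
            # weights: how much we currently care about each named value
            self.weights = dict(weights)

        def evaluate(self, outcome):
            # Score an outcome (feature -> degree) by our current values.
            return sum(self.weights.get(k, 0.0) * v for k, v in outcome.items())

        def revise(self, feature, new_weight):
            # Open-endedness: the value system itself can be updated.
            self.weights[feature] = new_weight

    u = EvolvingUtility({"protect_offspring": 0.6, "cooperation": 0.4})
    print(u.evaluate({"protect_offspring": 1.0, "cooperation": 0.2}))  # 0.68
    u.revise("cooperation", 0.7)  # the moral content evolves; the formalism still applies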
> Your statement "some we might want a superintelligence to maximize..."
> obscures the problem of promoting human values with the presumption
> that "a superintelligence" is a necessary part of the solution. It
> would be clearer and more conducive to an accurate description of the
> problem to say "some values we may wish to maximize, others to
> satisfice."
There is no "final solution" to the problem of morality, most likely.
If there is, we may discover it in millions of years. A
superintelligence isn't meant to be a "solution to morality" per se.
It's just that, with computers and AI theory being what they are, we
will eventually have to face superintelligence, so whatever solution
we come up with for having a Nice Place to Live after superintelligence
arrives will inevitably depend on the initial conditions we decide to
put into it.
I'm not sure we want to maximize any values, because then by
definition they would override all others, unless their maximization
still leaves many degrees of freedom. In general, when something is
maximized under a truckload of optimization pressure, the possible
degrees of freedom for future states are reduced. In a certain sense,
we *want* this reduction - "degrees of freedom" in the thermodynamic
sense is radically different from "degrees of freedom" in the
sociopolitical sense.
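To illustrate with toy numbers (entirely hypothetical), strict maximization of one value collapses the set of acceptable future states to a single point, while satisficing it leaves the other dimensions as open degrees of freedom:

    # Toy sketch (hypothetical states and numbers): maximizing one value
    # leaves a single acceptable future state; satisficing it leaves the
    # other dimensions open.

    candidate_states = [
        {"safety": 0.95, "diversity": 0.80, "growth": 0.60},
        {"safety": 0.99, "diversity": 0.10, "growth": 0.90},
        {"safety": 0.90, "diversity": 0.70, "growth": 0.85},
    ]

    # Maximize: only the single best state on "safety" survives.
    maximized = [max(candidate_states, key=lambda s: s["safety"])]

    # Satisfice: every state clearing a threshold survives.
    satisficed = [s for s in candidate_states if s["safety"] >= 0.90]

    print(len(maximized), len(satisficed))  # 1 vs. 3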
> If we were to further abstract the problem statement, we might arrive
> at something like recognition of every agent's desire to promote its
> (evolving) values over increasing scope. This subsumes the preceding
> dichotomy between maximizing and satisficing with the realization that
> each mode is effective within its particular limited context toward
> promoting growth in the larger context.
Sure, maximizing or satisficing subgoals can be useful towards
maximizing some supergoal. Don't be misled by the term "supergoal"
here - it can be an exabytes-large utility function that encourages
growth wholeheartedly while discouraging destruction, mayhem, torture,
or what have you.
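As a toy sketch of that structure (all terms hypothetical, and obviously nothing like exabytes large), a supergoal can combine subgoals, treating some as constraints to satisfice and others as quantities to maximize:

    # Toy sketch (hypothetical terms): a supergoal combining subgoals,
    # some satisficed as hard constraints, others maximized continuously.

    def supergoal_score(state):
        # Satisficed subgoals act as constraints: violate one and the
        # state is ruled out entirely.
        if state["torture"] > 0 or state["destruction"] > 0.1:
            return float("-inf")
        # Maximized subgoals contribute continuously.
        return 0.7 * state["growth"] + 0.3 * state["diversity"]

    print(supergoal_score({"torture": 0, "destruction": 0.0,
                           "growth": 0.8, "diversity": 0.5}))  # 0.71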
> Given the preceding problem statement, it becomes obvious that the
> solution requires two fundamental components: (1) increasing
> awareness of our values (those which are increasingly shared because
> they work, passing the test of competition within a coevolutionary
> environment), and (2) increasing awareness of principles of action
> that effectively promote our values (this is our increasingly
> subjective scientific/instrumental knowledge.)
Sure. These are things that would flow naturally from the correct
implementation of a CEV model. Our smarter selves would want us to
increase awareness of our values and principles of action that
effectively promote those values.
> Note that this approach is inherently evolutionary. There is no
> static solution to the moral problem within a coevolutionary scenario.
> But there are increasingly effective principles of what works to
> maximize the growth of what we increasingly see as increasingly good.
No one ever suggested a static solution. Only the Three Laws are
static. The CFAI model is not static (have you read it?), and the CEV
model is by no means static either. The post-CEV theorizing that goes
on today is not static. (Not to imply that CEV is being ditched, just
that more theorizing has gone on since the document was originally
penned.)
> Back to the presumption of "a superintelligence." This phrasing
> implies an independent entity and reflects the common assumption that
> we must turn to an intelligence greater than us, and separate from us,
> to save us from our critical problems. Such a concept resonates
> deeply within us and our culture but the concept is flawed. We are
> conditioned to expect that a greater entity (our parents?, our god?)
> will know what is best and act in our interests.
We are all operating on the assumption that when we build a
smarter-than-human intelligence, it will be capable of rapidly
bootstrapping to superintelligence, no matter what we do. This is
because of the smartness effect, the ability to integrate new
cognitive hardware, run cognitive processes very quickly, etc. If you
believe that we will build an AI and it will remain in the same
general intelligence/ability range as ourselves, then this entire
discussion is moot.
We aren't reaching for a higher intelligence to solve our problems
because we want our daddies. We are phrasing these discussions in
terms of superintelligence because we see the abrupt emergence of
superintelligence as *inevitable*. Again, if you don't, this whole
discussion is close to pointless. Sure, if you don't believe that
superintelligence is around the corner, you will put everything
towards working with what we have today - humans.
> It's time for humanity to grow up and begin taking full responsibility
> for ourselves and our way forward. We can and will do that when we
> are ready to implement a framework for #1 and #2 above. It will
> likely begin as a platform for social decision-making optimizing
> objective outcomes based on subjective values in the entertainment
> domain, and then when people begin to recognize its effectiveness, it
> may extend to address what we currently think of as political issues.
We can do all this, and then someone builds a superintelligence that
maximizes for paperclips, and we all die. Looks like we should've
been working on goal content for that first seed AI, huh?
> You were right to refer to a superintelligence, but that
> superintelligence will not be one separate from humanity. It will be
> a superintelligence made up of humanity.
Yes, Global Brain, metaman, etc. This is all well and good, but a
community of chimps networked together with the best computing
technology and decision systems does not make a human. We are talking
about building something godlike, so it doesn't make sense to refer to
it in the same way we refer to humans, any more than it makes sense to
talk about chimps in the same way we talk about humans.
Singularitarians believe in the technological feasibility of something
- building a recursively self-improving optimization process that
starts from a buildable software program. Our arguments for the speed
and intensity of the self-improvement process come from cognitive
science and comparisons of the relative advantages of humans and AIs.
(http://www.acceleratingfuture.com/articles/relativeadvantages.htm)
You obviously don't believe that the self-improvement process in AIs
will play out at this speed; otherwise we would be on the same page.
We are talking about a superior intelligence capable of developing
nanotechnology and whatever comes after that, with the resources to
rip apart this planet in seconds, minutes, hours, whatever. That's
what we consider to be the natural consequence once a seed AI really
gets going. The challenge is specifying its initial motivations in
such a way that it values us specifically and contributes to what we
see as growth rather than what it sees as growth, which could be
anything if we don't program it right.
Do you agree with the point of view and points argued in this article?
http://www.nickbostrom.com/ethics/ai.html
If not, we are talking past each other, in this context.
--
Michael Anissimov
Lifeboat Foundation
http://lifeboat.com
http://acceleratingfuture.com/michael/blog