From: Samantha Atkins (samantha@objectent.com)
Date: Sat May 22 2004 - 02:06:56 MDT
On May 21, 2004, at 1:48 PM, Eliezer Yudkowsky wrote:
> Samantha Atkins wrote:
>> On May 19, 2004, at 3:56 PM, Eliezer S. Yudkowsky wrote:
>>> Similarly, FAI doesn't require that I understand an existing
>>> biological system, or that I understand an arbitrarily selected
>>> nonhuman system, but that I build a system with the property of
>>> understandability. Or to be more precise, that I build an
>>> understandable system with the property of predictable
>>> niceness/Friendliness, for a well-specified abstract predicate
>>> thereof. Just *any* system that's understandable wouldn't be
>>> enough.
>> You propose to give this system, constrained to be understandable by
>> yourself, the power to control the immediate space-time area in
>> service of its understandable goals? That is a lot of power to hand
>> something that is not really a mind, or particularly self-aware or
>> reflective.
>
> Completely reflective, and not self-aware in the sense of that which
> we refer to as "conscious experience". (Remember, this may look like
> a mysterious question, but there is no such thing as a mysterious
> answer.)
You need to be clearer on the ways this FAI is not a "minded" or
"conscious" being. You say it isn't one, and then imply you have been
misunderstood when anyone asks why such a thing should
rule/control/guide human space (and perhaps more). You seem to say
you are optimizing it for only a few purposes (much more
tractable, I agree) and letting your acceptance/shaping criteria trim
away much that looks like anything else. That sounds like a rather
constrained intellect (if we may call it that).
Since the goal is Friendliness, I don't see how it could safely be too
constrained. It at least requires enough degrees of freedom to
understand human longings and behavior in order to be Friendly.
Doesn't it?
>
>> If I understand you correctly I am not at all sure I can support such
>> a project. It smacks of a glorified all-powerful mindless coercion
>> for "our own good".
>
> Yes, I understand the danger here. But Samantha, I'm not sure I'm
> ready to be a father. I think I know how to redirect futures, deploy
> huge amounts of what I would consider to be intelligence and what I
> would cautiously call "optimization pressures" for the sake of
> avoiding conversational ambiguity. But I'm still fathoming the
> reasons why humans think they have conscious experiences, and the
> foundations of fun, and the answers to the moral questions implicit in
> myself. I feel myself lacking in the knowledge, and the surety of
> knowledge, needed to create a new sentient species. And I wistfully
> wish that all humankind should have a voice in such a decision, the
> creation of humanity's first child. And I wonder if it is a thing we
> would regard as a loss of destiny, to be rescued from our present
> crisis by a true sentient mind vastly superior to ourselves in both
> intelligence and morality, rather than a powerful optimization process
> bound to the collective volition of humankind. There's a difference
> between manifesting the superposed extrapolation of the
> decisions humankind would prefer given sufficient intelligence, and
> being rescued by an actual parent.
Yes. So perhaps we should all grow up first. Perhaps we need to
augment humanity a step at a time into being more intelligent,
rational, compassionate, and self-aware. Human nature and capability
are a good starting point, quite a ways ahead of any AI we have in
hand or can have in the near term in very important respects. If you
want something that humanity has a voice in and that fulfills
humanity's purpose most efficiently, then why not start with and build
upon humanity itself? Let our AI skills go, in the beginning, to
directly extending human abilities and reach.
Perhaps the only thing more insanely terrifying than contemplating
becoming a god is contemplating building one from scratch.
>
> If I can, Samantha, I would resolve this present crisis without
> creating a child, and leave that to the future. I fear making a
> mistake that would be terrible even if remediable, and I fear
> exercising too much personal control over humankind's destiny.
Removing humanity's ability to do itself in, and giving it a much
better chance of surviving the Singularity, is of course a wonderful
goal. But even if you call the FAI an "optimization process" or some
such, it will still be a solution from outside humanity, rather than
humanity growing into being enough to take care of its own problems.
Whether the FAI is a "parent" or not, it will be an alien "gift" to
fix what humanity cannot. Why not have humanity itself recursively
self-improve? Why force on humans a gift that shouts out that they are
inferior to an FAI that is not even a sentient mind? To be an
extension of humanity, it must grow out of humanity.
Whatever we can do can be a huge mistake. But I think we both agree
that not acting would be the biggest mistake of all.
> Perhaps it is not possible even in principle, to build a nonsentient
> process that can extrapolate the volitions of sentient beings without
> ever actually simulating sentient beings to such a degree that we
> would see helpless minds trapped inside a computer. It is more
> difficult, when one considers that constraint. One cannot brute-force
> the problem with a training set and a hypothesis search, for one must
> understand enough about sentience to rule out "hypotheses" that are
> actual sentient beings. The added constraint forces me to understand
> the problem on a deeper level, and work out the exact nature of things
> that are difficult to understand. That is a good thing, broadly
> speaking. I find that much of life as a Friendly AI programmer
> consists in forcing your mind to get to grips with difficult problems,
> instead of finding excuses not to confront them.
I think it is not possible to extrapolate the volitions of sentient
beings without being (or becoming) a sentient being.
> I am going to at least try to pursue the difficult questions and do
> this in the way that I see as the best possible, and if I find it is
> too difficult *then* I will go back to my original plan of becoming a
> father. But I have learned to fear any clever strategy for cheating a
> spectacularly difficult question. Do not tell me that my standards
> are too high, or that the clock is ticking; past experience says that
> cheating is extremely dangerous, and I should try HARD before giving
> up.
>
This sounds really good. I am also fearful of "cheating a
spectacularly difficult question". But the clock is indeed ticking,
and any standards that disallow finding a workable solution are indeed
"too high", or at least not appropriate to the constraints of the
problem.
- samantha