From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Mon Jun 14 2004 - 22:46:13 MDT
Jef Allbright wrote:
> Eliezer Yudkowsky wrote:
>>
>> I see no reason why I should care about genes or memes except insofar
>> as they play a role in individuals built by genes who are running
>> memes. What exerts the largest causal influence is not necessarily
>> relevant to deciding what is the *important* aspect of humanity; that
>> is a moral decision. I do not need to make that moral decision
>> directly. I do not even need to directly specify an algorithm for
>> making moral decisions. I do need to tell an FAI, in a well-specified
>> way, where to look for an algorithm and how to extract it; and I am
>> saying that the FAI should look inside humans. There is much
>> objection to this, for it seems that humans are foolish. Well, hence
>> that whole "knew more, thought faster etc." business. Is there
>> somewhere else I should look, or some other transformation I should
>> specify?
>>
> You're asking good questions, and the process of asking these
> increasingly accurate questions will lead to increasingly accurate
> solutions.
>
> The vector sum of current human volition is not wisdom. It's not even
> an early approximation of wisdom. In fact, it's currently badly skewed.
There is no such thing as the vector sum of "current human volition". You
are speaking of the vector sum of "current human *decision*" which, I quite
agree, would be disastrous to hook up to an SI (or RPOP). A volition is
extrapolated beyond the self of this moment; a *collective* volition
extrapolates a planet beyond the selves of this moment, and cannot be
regarded as a vector sum of individual volitions.
> The vector sum of current human volition does not represent wisdom, let
> alone embody wisdom. Acting as if it did would invite disaster, given
> the current state of human development.
I agree, provided that you are talking about the vector sum of current
human decision.
> The answers you seek do not
> exist yet, no matter how deeply and widely one might be able to probe
> the collective human psyche.
You cannot read out the volition from an LED display on the back of
someone's neck; it has to be extrapolated.
Decision is not volition.
Volition is not decision.
> There are no pointers, maps, or
> transformations of this collective data that could be directly applied
> to the solution you (we all) seek. The current data set is strongly
> skewed toward short term, local scope thinking.
If I had any doubt before that you were speaking of "decision" rather than
"volition", it would be gone now.
> The answers do not
> exist in the current data set but we can expect they will emerge only as
> part and result of the process.
...the process which a "collective volition" attempts to extrapolate, yes; that is
the whole idea of collective volition: getting an extrapolated, approximate,
satisficing advance answer to the process.
> Yes, wisdom is present within the collective landscape, but most of
> humanity perceives and considers only a small portion of the whole, and
> the answers you seek within it require a broader scope of human
> intelligence. The seeds exist but they have not yet grown, and it is
> impossible to see the tree without planting, nurturing, and waiting for
> the seeds to grow.
And you know this... how? It sounds very wise and I suspect it simply
isn't true.
> To model the collective volition of humanity is a worthy goal, not to
> extract from it the ideal human volition, or even a starting
> approximation of the ideal, but for the purpose of better understanding
> and contributing to the process that will get us wherever we will be in
> the future. There is much work that can be done to improve the process
> of humanity getting closer to its evolving goals, but progress will be
> made by building upon the foundations of morality rather than futilely
> preparing to prune a unique and yet unknown tree when it is still a seed.
A collective volition doesn't prune. It extrapolates and superposes
uncertainties.
> What is moral, in the minds of people of disparate backgrounds, tends to
> converge as their understanding and interests broaden.
This is an assertion that the *collective volition* tends to cohere more
than the vector sum of decisions.
> As the scope
> expands to include broader space of interaction, broader range of
> interacting parties, and broader time period being considered, "what is
> moral", tends to converge into an ever clearer sense of shared
> direction. You can only get there by performing the interactions -- a
> model of sufficient accuracy would take just as long to run the
> simulation as the reality
Again, I think this sounds wise and simply isn't true. Here you are,
making all sorts of abstract predictions about the collective volition
without running the simulation!
> -- but you can extract principles of
> successful interactions along the way and apply these principles toward
> "promoting the good."
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence