RE: BOUNCE extropians@extropy.org: Message in HTML

From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Wed Mar 20 2002 - 16:47:02 MST


> It took me a long time to answer Eliezer's post; work pressure has been
> unrelenting.
>
> Me:
>
> > ### Do you think that the feedback loops focusing the AI on a particular
> > problem might (in any sufficiently highly organized AI) give rise to
> > qualia analogous to our feelings of "intellectual unease", and "rapture
> > of an insight"?
>
> Eliezer:
>
> I do not pretend to understand qualia. But what focuses the AI on a
> particular problem is not a low-level feedback loop, but a deliberately
> implemented feedback loop. The AI controls the feedback. The feedback
> doesn't control the AI. Unless an FAI deems it necessary to shift to the
> human pleasure-pain architecture to stay Friendly, I can't see vis mental
> state ever becoming that closely analogous to human emotions.
>
> #### I tend to think that qualia might be the unavoidable
> accompaniment of information processing. This is in a way equivalent to
> saying that consciousness is a feature of every information processor,
> albeit to a varying degree. My PC has the amount of consciousness (if you
> excuse such a vague expression) perhaps equivalent to a beetle. An SAI
> would have much more of this precious quality. Whether there would be an
> analogy to human emotions would depend on the amount of similarity between
> the panhuman template and the SAI's motivational mechanisms, but I do think
> that qualia of some sort would be present.
>
> Eliezer:
>
> > > Another academically popular theory is that all people are blank
> > > slates, or that all altruism is a child goal of selfishness -
> > > evolutionary psychologists know better, but some of the social sciences
> > > have managed to totally insulate themselves from the rest of cognitive
> > > science, and there are still AI people who are getting their psychology
> > > from the social sciences.
> > Me:
> > ### Is altruism something other than a child goal of selfishness?
>
> Eliezer:
>
> Within a given human, altruism is an adaptation, not a subgoal. This is
> in the strict sense used in CFAI, i.e. Tooby and Cosmides's "Individual
> organisms are best thought of as adaptation-executers rather than as
> fitness-maximizers."
>
> Me:
>
> What is an adaptation if not an implementation of a subgoal of a
> goal-directed process? Whether you view a behavior as an adaptation or
> a goal in itself depends on your value set. Since I tend to see my survival
> as the one foundation of my goal system, all other elements of my
> personality are either adaptations (e.g. controlled altruism allows me to
> participate better in society) or obstacles (e.g. laziness). Other
> thinkers might adopt the (hypothetical) viewpoint of "Nature", "The Human
> Being", etc., yet independent, absolute criteria can only indirectly force a
> choice of any of these outlooks - outlooks incompatible with long-term
> survival are likely to disappear from the pool of participants in the
> discussion.
>
> Eliezer:
>
> > > total altruists ninety-nine point nine percent of the time. Thus, even
> > > though our "carnal" desires are almost entirely observer-centered, and
> > > our social desires are about evenly split between the personal and the
> > > altruistic, the adaptations that control our moral justifications have
> > > strong biases toward moral symmetry, fairness, truth, altruism, working
> > > for the public benefit, and so on.
> Me:
> > ### In my very personal outlook, the "moral justifications" are the
> > results of advanced information processing applied in the service of
> > "carnal" desires, supplemented by innate, evolved biases.
> Eliezer:
>
> By computation in the service of "carnal" desires, do you mean computation
> in the service of evolution's goals, or computation that has been skewed
> by rationalization effects toward outcomes that the thinker finds
> attractive? In either case the effective parent goals are not limited to
> "carnal" desires.
>
> Me:
>
> I think that it would be counterproductive to say that evolution has
> goals. While evolution leads to some future state, this state is not a
> goal. You need to have a representation of a state within a structure
> capable of controlled behavior (human brain, thermostat) for this state
> to be called a goal.
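> A toy sketch of the distinction (Python and the particular setpoint are
> nothing more than my arbitrary choices): the thermostat carries an
> explicit representation of the target state and acts to close the gap,
> which is exactly what evolution lacks.
>
>   class Thermostat:
>       def __init__(self, setpoint_c):
>           # The goal is an explicitly represented target state.
>           self.setpoint_c = setpoint_c
>
>       def act(self, measured_c):
>           # Controlled behavior: compare the measured state to the
>           # represented one and act to reduce the difference.
>           if measured_c < self.setpoint_c - 0.5:
>               return "heat on"
>           if measured_c > self.setpoint_c + 0.5:
>               return "heat off"
>           return "hold"
>
>   t = Thermostat(setpoint_c=21.0)
>   print(t.act(18.0))   # heat on
>   print(t.act(23.0))   # heat off
>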
> Did you say "carnal" in the strict sense of "related to the bodily,
> especially sexual, needs", or in the looser meaning of "related to
> subjective, self-oriented needs"? My analysis was based on the latter
> interpretation.
>
> Me:
>
> > The initial supergoals are analyzed, their implications for action under
> > various conditions are explored, and the usual normative human comes to
> > recognize the superior effectiveness of fairness, truth, etc., for
> > survival in a social situation.
>
> Eliezer:
>
> I think this is a common misconception from the "Age of Game Theory" in
> EP. (By the "Age of Game Theory" I mean the age when a game-theoretical
> explanation was thought to be the final step of an analysis; we still use
> game theory today, of course.) Only a modern-day human, armed with
> declarative knowledge about Axelrod and Hamilton's results for the
> iterated Prisoner's Dilemma, would employ altruism as a strict subgoal.
>
> Me:
>
> I disagree. Even ancient philosophers used imaginary
> scenarios (like the ring of Gyges) to derive a condemnation of selfishness
> as incompatible with orderly social life. Game theory gave us a more
> rigorous foundation for such reasoning, but it is not absolutely necessary
> for reaching the conclusion.
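> To make the Axelrod point concrete, here is a minimal sketch of the
> iterated game (Python and the specific payoffs 5/3/1/0 are just my
> illustrative choices, not Axelrod's actual tournament): reciprocal
> cooperation does better against itself than defection does, which is the
> conclusion one can reach with or without the formal apparatus.
>
>   # Payoffs as (row, column): C = cooperate, D = defect.
>   PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
>             ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
>
>   def tit_for_tat(opponent_moves):
>       # Cooperate first, then copy the opponent's last move.
>       return "C" if not opponent_moves else opponent_moves[-1]
>
>   def always_defect(opponent_moves):
>       return "D"
>
>   def play(strat_a, strat_b, rounds=200):
>       score_a = score_b = 0
>       seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
>       for _ in range(rounds):
>           move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
>           pay_a, pay_b = PAYOFF[(move_a, move_b)]
>           score_a, score_b = score_a + pay_a, score_b + pay_b
>           seen_by_a.append(move_b)
>           seen_by_b.append(move_a)
>       return score_a, score_b
>
>   print(play(tit_for_tat, tit_for_tat))      # (600, 600)
>   print(play(tit_for_tat, always_defect))    # (199, 204)
>   print(play(always_defect, always_defect))  # (200, 200)
>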
>
> Eliezer:
>
> And even then the results would be suboptimal because people instinctively
> mistrust agents who employ altruism as a subgoal rather than "for its own
> sake"... but that's a separate issue.
>
> Me:
> Yes. We want to deal with humans who are prevented
> from dangerous actions by strong, hardware-level injunctions, rather than
> flimsy, situation-dependent calculations of cost and benefit.
>
> Eliezer:
> A human in an ancestral environment may come to see virtue rewarded and
> wickedness punished, or more likely, witness the selective reporting of
> virtuous rewards and wicked follies. However, this memetic effect only
> reinforces an innate altruism instinct. It does not construct a cultural
> altruism strategy from scratch.
>
> Me:
>
> Yes, the moral instincts are very useful, even after
> cultural development supplements them with memes and social institutions.
>
End of part 1

Rafal


