From: Jef Allbright (jef@jefallbright.net)
Date: Wed Dec 14 2005 - 10:33:52 MST
We should all keep in mind that while there is a ratcheting forward of
knowledge and capability in service of our values, this is no
guarantee that we are not moving into an evolutionary cul de sac.
- Jef
On 12/14/05, micah glasser <micahglasser@gmail.com> wrote:
> I agree with Jef on the importance of having a framework of shared
> values/goals. I don't mean anything fancy shmancy when I posit the good as
> something objective. What I have in mind is precisely what evolution
> programmed us for. I believe that human evolution leads inexorably toward
> more efficient societies of humans that are more and more interconnected
> through their information technologies. The good is merely human
> flourishing, as the Greeks put it. So in my opinion, if the term 'benevolent
> AI' has any meaning whatsoever, then it must mean that it either in no way
> obstructs human flourishing (the good) or, preferably, actually aids and
> facilitates this flourishing. What better way to ensure this state than to
> program AI to recognize human flourishing as the greatest state of affairs
> and to welcome the AI into human society as a fellow, though different,
> member? One more thing I must clarify: I believe (for a plethora of reasons)
> that all rational agents will necessarily have increasing the state of
> freedom as a super goal, of both the individual and society. If I am
> correct in this (and I am), then it will not be possible to program a truly
> rational agent without including achieving greater freedom (power/knowledge)
> as a super goal.
>
>
> On 12/14/05, Jef Allbright <jef@jefallbright.net> wrote:
> > On 12/14/05, David Picon Alvarez <eleuteri@myrealbox.com> wrote:
> > > From: "micah glasser" <micahglasser@gmail.com>
> > > Intelligence cannot help you select for the good. The Good must be
> > > programmed into the AI. Once the AI knows what the Good is, then its
> > > intelligence will surpass any human intelligence in figuring out how to
> > > bring about the Good. If the Good fails to be programmed into the
> > > machine as its super-goal, then it will certainly be malevolent. Super
> > > intelligence is not a god. It's merely a tool.
> > >
> > >
> > > Were you programmed with the good? Are you certainly malevolent? What
> > > distinguishes you from an AI: evolution? Evolution doesn't bring about the
> > > good; it brings about what works in evolutionary environments, far from the
> > > good. If the good is objectively existent, a super AI can find it; if not,
> > > then there's no point in talking about "the good", and we'd rather talk
> > > about what we want instead.
> > >
> >
> > David makes good points here, but interestingly, as we subjective
> > agents move through an objectively described world, we tend to ratchet
> > forward in the direction we see as (subjectively) good. Since we are
> > not alone but share values in common with other agents (this can be
> > extended to non-human agents of varying capabilities), there is a
> > tendency toward progressively increasing the measure of subjective
> > good.
> >
> > Appreciating and understanding the principles that describe this
> > positive-sum growth would lead us to create frameworks to facilitate
> > the process of (1) increasing awareness of shared values, and (2)
> > increasing awareness of instrumental methods for achieving our goals.
> >
> > This paradigm would supersede earlier concepts of morality, politics
> > and government.
> >
> > In my humble opinion. ;-)
> >
> > - Jef
> >
>
>
>
> --
> I swear upon the altar of God, eternal hostility to every form of tyranny
> over the mind of man. - Thomas Jefferson