Re: Vegetarianism and Ethics

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sat Dec 14 1996 - 22:06:03 MST


> I'm not sure what you see as the distinction between 'justification' and
> 'value'.

In Lenat's AM, a task's justification slot held a reason like "because
primes are interesting right now", while its value slot held a plain
number, say 500.
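
(A minimal sketch of the distinction in Python rather than AM's actual
Lisp - the names Task, justifications, and value here are illustrative,
not Lenat's. The point is that the justification slot carries reasons
while the value slot carries a separate number used to rank the agenda.)

    # Illustrative AM-style agenda task; hypothetical names, not Lenat's code.
    from dataclasses import dataclass

    @dataclass
    class Task:
        action: str
        justifications: list[str]  # the "why": reasons the task seems worthwhile
        value: int                 # the "how much": numeric worth for ranking

    agenda = [
        Task("fill in examples of PRIMES",
             ["because primes are interesting right now"], 500),
        Task("check examples of SETS", ["routine bookkeeping"], 150),
    ]

    # An AM-style loop repeatedly picks the highest-valued task.
    next_task = max(agenda, key=lambda t: t.value)
    print(next_task.action, "--", next_task.justifications[0])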

> What does negative value mean for goals?

A goal with negative value is a state to be avoided rather than pursued.
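
(Continuing the sketch above, with the same caveat that it is only
illustrative: a negative value is just a number below zero in the value
slot, so a value-ranked agenda steers away from that task.)

    # Hypothetical: a negatively valued task is one the system avoids.
    avoid = Task("explore DIVIDE-BY-ZERO", ["leads nowhere interesting"], -500)
    agenda.append(avoid)
    assert max(agenda, key=lambda t: t.value) is not avoid  # never picked first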

> How is "Power-perfect reasoning" different from human reasoning?

A Power can rebuild its own architecture if it gets in the way or
becomes too limited. Humans might never find a Meaning because we
aren't built to think about self-justifying things - our continuing
failure to find the inherently obvious First Cause comes to mind - and
our emotional architectures mess up our goal-evaluation systems.

> Can you really have a goal-less architecture?

Sure. A spreadsheet program comes to mind. It would be pretty
difficult to build a thinking being with no goals, but I suppose one
could try.

> Of course, there is no guarantee that the system will ever find an
> Ultimate Meaning. Halting problem...

Frankly, the main problem on my mind was whether there *was* one, not
whether the system would find it. If there is one, I think we can rely
on a Power seeing it as obvious once the right cognitive architecture is
in place, just like the First Cause.

(As always, I'd like to remind my audience that the First Cause is
obvious to Nothingness and should therefore be equally obvious to any
mind as complex as the basic substrate of reality, whatever it is.)

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

