From: Billy Brown (bbrown@conemsco.com)
Date: Tue Jan 12 1999 - 12:17:34 MST
Samael wrote:
> This would seem reasonable, if it were not for the fact that searching
> for morals logically is like asking a computer for the meaning of life (an
> idea which Douglas Adams quite successfully took the piss out of in
> Hitch-hikers Guide to the Galaxy). The problem is that we don't know what
> the question means.
Who ever said IE was a simple logic-enhancement? IMO, the biggest problem
with any search for answers is the fact that the question won't fit in a
human mind. You start searching down ever-more-abstract chains of
reasoning, and eventually find yourself chasing your tail. If you want to
be an SI-philosopher, the first thing you need is an enhanced ability to
deal with complex systems of abstract, self-referential ideas.
> Asking what is moral effectively boils down to asking "What should I do in
> situation X?". The answer is that the question is meaningless. A
> syntactically similar question that is meaningful is "What should I do in
> situation X to achieve Y?" If you do not know where you are going, you
> cannot ask for directions. (Aaaah! Too much Zen! Must resist!).
>
> The main problem with most philosophical enquiry at a low/uneducated level
> is incorrect usage of the word 'why'. Frequently if you rephrase the
> question without using 'why' in it, you realize that the answer is easier
> than it appears. For instance "Why are we here" can be rephrased as 'what
> sequence of events caused us to be here." which can be analyzed
> scientifically, or it can be reduced to "What were the purposes of the
> people in coming here?" which can also be solved through enquiry. Most of
> the problems that people have in philosophy simply come from asking
> questions which don't actually make sense.
Maybe, maybe not. All we can really say is that *we* can't see any way of
resolving the problem - but I don't see any way to make a Grand Unified
Theory work, either. That doesn't mean someone else can't do it. Now, the
last few millennia of philosophical 'thought' give us good reason to think
unenhanced humans will never crack the problem, but that still doesn't mean
an SI can't.
Of course, it might be that there truly is no possible way of grounding a
moral system in anything substantial. In that case an SI could probably
demonstrate that there is no objective morality, and you'd get to say "See,
I told you so." :-)
Even then, IMO it is important to make the effort to find the answer.
Billy Brown, MCSE+I
bbrown@conemsco.com
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:02:47 MST