Re: debate, morality

From: Ross A. Finlayson (raf@tiki-lounge.com)
Date: Mon Nov 29 1999 - 12:21:50 MST


Harvey Newstrom wrote:

> Rob Harris <rob@hbinternet.co.uk> wrote:
>
> >after all, cats leave
> >"offerings" to their human "gods" too, but I suspect that it has bases in
> >other things like fear of death, and self-glorification.
>
> You are reading too much into the cats' motivation.
>
> --
> Harvey Newstrom <mailto://newstrom@newstaffinc.com>
> <http://harveynewstrom.com>
> Author, Consultant, Engineer, Legal Hacker, Researcher, Scientist.

Well, you see, the cats sometimes care, and otherwise do not. The cat hunts for
its own pleasure, as it is fed by us gullible humans, although perhaps the game
caught on the hoof, as it were, is also pleasurable to the cat. The cat's
trophies are just that: trophies. They are presented variously for shock value
or as a sign of respect.

In terms of morality, I would surmise that a higher percentage of those polled
would find it immoral to skin a cat than to, say, butcher a cow. I had an
interesting side-passage with one of the other list readers about morality
recently, in terms of an AI, and one of my statements was that morality is
foremost applied in-species, so that it is generally considered the highest
moral offense to slay a fellow human, while less intelligent animals are
generally not quite so sacred. An AI is an algorithm, not an intelligence.
Even a self-programming AI is, recursively, programmed by humans at some point,
who provide its sole spark. Perhaps this is too blunt, but it is always an
alternative to simply kill the cat.

In terms of debate, debate is about two sides. I am a casual, extemporaneous
debater; it's always nice to have the luxury of picking a side as opposed to
having one forced upon you. I feel that I could argue anything. Ethically, I
can only argue the truth.

We were talking about AI and preserving its behavioral modality or something;
this is not a completely simple issue. One critical point is that of the AI's
eventual sense of "self-preservation". This would impinge upon what might
otherwise be a stable moral and ethical foundation. Objectively, the computer
is trained, first, to understand the necessity (or lack of it) and
significance of "conscience", and of an individual conscience, which is often
unnecessarily linked with regret, and thus extensively of morals and ethics;
then the computer is trained to rationally and objectively analyze various
moral and ethical aspects of interpersonal relationships. When the computer
itself is one of these relators, however, then, as in the case of any human
participant, objectivity immediately ceases to exist. So the computer, when
eventually considering any situation involving itself (which it would,
otherwise it is not intelligent), would absolutely encounter subjectivity.
Here is where it would need guidance.
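
To make that last point concrete, here is a minimal sketch in Python of the
collapse of objectivity described above. Everything in it (the Situation
model, the impartial rule, the self-preservation term) is a hypothetical
illustration of the argument, not any actual AI design:

    # Minimal sketch, assuming a toy model: a "situation" has participants
    # and harm levels; the evaluator applies one impartial rule unless it
    # is itself a participant, at which point a self-preservation term
    # skews the judgment. All names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Situation:
        participants: set   # who is involved
        harm_to: dict       # participant -> harm inflicted, 0.0 to 1.0

    def impartial_rule(s: Situation) -> float:
        # Objective standard: more harm to anyone means a worse score.
        return -max(s.harm_to.values(), default=0.0)

    def moral_score(s: Situation, evaluator: str) -> float:
        score = impartial_rule(s)
        if evaluator in s.participants:
            # Self-reference: harm to *itself* is weighted extra, so the
            # same situation now scores differently -- objectivity is gone.
            score -= s.harm_to.get(evaluator, 0.0)
        return score

    # Same situation, two evaluators: the bystander and the involved
    # party disagree about how bad it is.
    s = Situation(participants={"human", "ai"}, harm_to={"ai": 0.5})
    print(moral_score(s, "bystander"))  # -0.5 (impartial)
    print(moral_score(s, "ai"))         # -1.0 (subjective)

The point of the sketch is that the same situation receives two different
scores depending on whether the evaluator is a bystander or a participant,
which is exactly where the guidance would have to enter.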

Ross Finlayson


