From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Sun Mar 01 2009 - 13:36:56 MST
On Sun, Mar 1, 2009 at 2:48 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> My point is that ethical beliefs are not the same thing as the truth. I think we both understand that, but it is a difficult point to make. For some reason people continue to argue about what we should do as opposed to what we will do, as if our minds had free will as opposed to being programs.
Let me try to translate the commonsense understanding of "ethical"
language like "should" into "we-have-no-free-will-speak".
Humans are social animals, and can be influenced by the perceptions
and reasonings of other humans. The human ability to selectively
accept these influences, to sometimes "be persuaded", is one of the
many characteristic features of the species.
In order for a rational entity (which humans approximate to some
degree) to be persuaded to action, the message must contain evidence
of the action's utility.
Regarding evidence: The only kind of evidence that an untrusted
message can offer is a certificate, for example a proof. A
certificate lets the receiver quickly replicate and check a chain of
reasoning that the sender computed. For example, the complexity class
NP can be defined in terms of such certificates: a decision problem
is in NP exactly when every "yes" instance has a certificate that a
verifier can check in polynomial time.
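To make the asymmetry concrete, here is a minimal sketch in Python
(the SAT-checking example is my own illustration, not anything from
Matt's message): checking a claimed satisfying assignment for a
boolean formula takes time linear in the formula, even though finding
such an assignment may take the sender exponential effort.

def verify_sat(clauses, assignment):
    # clauses: a CNF formula as a list of clauses; each clause is a
    # list of nonzero ints, where i means "variable i is true" and
    # -i means "variable i is false".
    # assignment: the certificate, a dict mapping variable -> bool.
    # Returns True iff every clause contains a satisfied literal.
    # Runs in time linear in the size of the formula.
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False
    return True

# (x1 or not x2) and (x2 or x3), with x1=T, x2=T, x3=F:
print(verify_sat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))  # True

The sender may have done arbitrarily hard work to find the
certificate; the receiver confirms it cheaply. That cheap check is
what makes the evidence usable by an untrusting receiver.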
Regarding utility: The utility in question is the _receiver's_ utility
function, not the sender's.
A sentence like "You should do X because Y." might be understood as "I
am transmitting a certificate (Y) that contains evidence that the
action (X) will lead to higher values of your utility function, as I
understand it."
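As a toy model of that reading (every name below is a hypothetical
stand-in of my own, not a claim about how actual minds work):

def persuaded(action, certificate, verify, utility, status_quo):
    # Accept the recommended action only if (a) the certificate
    # checks out under the RECEIVER's own verification procedure,
    # and (b) the certified action beats the receiver's current
    # plan on the RECEIVER's own utility function.
    if not verify(action, certificate):
        return False
    return utility(action) > utility(status_quo)

# Stand-in verifier and utility function, purely for illustration:
verify = lambda action, cert: cert == "proof"
utility = lambda outcome: {"do X": 10, "do nothing": 3}[outcome]
print(persuaded("do X", "proof", verify, utility, "do nothing"))
# True: the certificate checks out and "do X" pays better
print(persuaded("do X", "handwave", verify, utility, "do nothing"))
# False: the certificate fails the receiver's check

The two conditions mirror the two halves of the translation: the
evidence must check out, and it must check out against the receiver's
utility function, not the sender's.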
Because humans are only imperfectly rational, it is common for flawed
certificates to be sent, either because the sender made a mistake, or
because the sender knows that the receiver might make a mistake in
checking the certificate. Because humans are fuzzy, informal
reasoners, the "certificates" offered are routinely informal and
require considerable computation to unpack and check on the
receiver's side.
Someone who says "You should accept this radical medical procedure
because it is analogous to saving your life, which you value" is
definitely not claiming that minds have free will as opposed to being
programs. This is merely one human attempting to persuade another
human to action.
If someone else says "You should not accept this deliberate temporary
induction of clinical death, because it is analogous to killing you
and creating a different entity from your corpse," they're also using
"ethical" language, but there's still no claim that minds have free
will.
Johnicholas
Note: I was very deliberate in my example. The medical procedure
discussed might be uploading, or it might be one of the surgeries
mentioned in "Controlled Clinical Death".
http://en.wikipedia.org/wiki/Clinical_death