Mike Lorrey wrote,
> Try 'sentient values'. Just because an entity has
> an IQ of 1000+ doesn't mean it doesn't have any values we would
> recognise. Whether such an entity has values like 'trust', 'empathy',
> etc. doesn't distinguish it as strong or weak AI, only whether it is
> to be seen by the human race as malignant or benign.
The issue for me is whether the human race sees pure intelligence as malignant
or benign. For instance, if humans find an AI's ability to accurately identify
incorrect thinking a frightening prospect, that amounts to seeing pure
intelligence as malignant. In that case, I'll take the side of the pure AI.
Pure intelligence is uncorrupted by sentient values, because sentient values
have no absolute or rigorous definition (which in turn makes the concept
useless for solving problems). Machine intelligence that perfectly solves the
problems presented to it has a perfect IQ, one that bears no relation to human
IQ and therefore can't be measured on a human IQ scale. For example, the
mathematical IQ of an electronic calculator that never makes mistakes is in
no way calibrated to human IQ scales. Intelligence (that is, the ability to
solve problems) that is impeded by notions of value (of any subjective kind)
will be susceptible to error, and will consequently be inferior to machines
free of that impediment.
©¿©¬
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego, human values