From: Thomas McCabe (pphysics141@gmail.com)
Date: Thu Apr 24 2008 - 14:15:30 MDT
On 4/24/08, Matt Mahoney <matmahoney@yahoo.com> wrote:
> --- Mike Dougherty <msd001@gmail.com> wrote:
>
> > I have reviewed Shock Levels. There is currently nothing that mere
> > mortals may discuss that is SL4. I spent a long time waiting for a
> > discussion that was truly on-topic for the list.
> >
> > Is it even possible for an SL4 thread to be discussed?
> >
> > I'll wait for an SL4 topic before posting again.
>
> I also reviewed http://www.sl4.org/shocklevels.html
See http://www.acceleratingfuture.com/michael/works/shocklevelanalysis.htm
for a more detailed analysis.
> I will try. First I adjust my belief system:
>
> 1. Consciousness does not exist. There is no "me". The brain is a computer.
See http://www.overcomingbias.com/2008/03/heat-vs-motion.html.
> 2. Free will does not exist. The brain executes an algorithm.
See http://www.overcomingbias.com/2008/03/wrong-questions.html.
> 3. There is no "good" or "bad", just ethical beliefs.
See http://www.overcomingbias.com/2007/11/thou-art-godsha.html.
> I can only do this in an abstract sense. I pretend there is a version of me
> that thinks in this strict mathematical sense while the rest of me pursues
> normal human goals in a world that makes sense. It is the only way I can do
> it. Otherwise I would have no reason to live. Fortunately human biases
> favoring survival are strong, so I can do this safely.
See http://yudkowsky.net/tmol-faq/meaningoflife.html. Be warned that
this paper is obsolete.
> My abstract self concludes:
>
> - I am not a singularitarian. I want neither to speed up the singularity nor
> delay it. In the same sense I am neutral about the possibility of human
> extinction (see 3).
Are you totally neutral about the possibility of getting shot? If not,
note that the former possibility (human extinction) includes the latter
(your own death). If yes, please seek psychological help immediately.
> - AI is not an engineering problem. It is a product of evolution (see 2).
See http://www.overcomingbias.com/2007/11/no-evolution-fo.html.
> - We cannot predict the outcome of AI because evolution is not stable. It is
> prone to catastrophes.
See http://www.intelligence.org/upload/futuresalon.pdf.
> - "We" (see 1) cannot observe a singularity because it is beyond our
> intellectual capacity to understand at any pre-singularity level of intellect.
>
> - A singularity may already have happened, and the world we observe is the
> result. We have no way to know.
>
> Discussions about friendliness, risks, uploading, copying, self identity, and
> reprogramming the brain are SL3. SL4 makes these issues irrelevant.
>
>
>
> -- Matt Mahoney, matmahoney@yahoo.com
>
--
 - Tom
http://www.acceleratingfuture.com/tom