Re: The dumb SAI and the Semiautomatic Singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 08 2002 - 15:14:18 MDT


Mike & Donna Deering wrote:
>
> Would it be possible to change the design slightly to avoid volition,
> ego, and self-consciousness, while still maintaining the capabilities
> of complex problem solving, self-improvement, and superintelligence?

As I pointed out earlier, no matter how impossible something seems to
you, it may still be possible for an entity of slightly higher
intelligence, even one within your own species.

But I don't know how to do it.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
