From: Tony Belding (tlbelding@htcomp.net)
Date: Mon Apr 27 1998 - 20:09:32 MDT
Anders Sandberg <asa@nada.kth.se> wrote:
AS> I realize that their complexity is less than the chemical networks of
AS> many bacteria, so I don't feel too bad about them. But this may become
AS> a real problem in the future - the other graduate students here (me
AS> included) are quite interested in creating a sentient system if
AS> possible, and once we start to get close to that, then we are going to
AS> need to think much more about ethics.
So, what do you think about rule-based AI projects, like Cyc?
AS> But sentient beings are ends in themselves in some sense.
Yeah, but an end that must be very carefully considered.
I've thought about what kind of philosophical /purpose/ or role such artificial
sentient beings could fill. They would, in practical terms, be a branch of
humanity. They would be products of our culture, our civilization --
essentially humans in different form. No identity of their own.
When I was young I read lots of SF stories about aliens. Aliens were cool!
There's something to be said for the idea of humanity not being alone in the
universe. But, it now seems unlikely that our galaxy is teeming with aliens
just waiting for us to come and meet them. We need to create our own aliens.
I would like to find an Earth-like planet somewhere, or maybe terraform a
planet, and create a race of beings to inhabit it. Drop the first generation
on the planet's surface with nothing: no language, no tools, no experience.
Then sit back and watch as they build their own civilization from nothing: a
slow and painful process, no doubt. But, everything they had would be theirs,
not something borrowed from us. They could develop a complete identity of
their own.
AS> Just you wait until the first AI is interviewed on CNN and starts to
AS> quote Martin Luther King... :-)
I shudder. What if that AI turns out to be one of the non-sentient ones,
under my definition? The bush robot I described earlier could certainly put
on that sort of performance, if its owner told it to.
What happens when a highly sophisticated, non-sentient AI can pass the Turing
Test? How do you convince most people that a machine is non-sentient when it
can so convincingly /pretend/ to be?
-- Tony Belding http://hamilton.htcomp.net/tbelding/
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:00 MST