From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Apr 03 2008 - 20:37:11 MDT
--- Rolf Nelson <rolf.h.d.nelson@gmail.com> wrote:
> For the duration of this thread, assume that FAI is the best use of
> time and resources for a rational altruist. How should resources be
> prioritized, in terms of marginal utility?
Aren't we jumping ahead? We have yet to solve the very non-trivial problem of
defining what "friendly" means.
I am aware of CEV, but this is a "human-centered" definition. What happens
when the boundary between human and non-human becomes fuzzy? Should an AI be
friendly to robots, uploads, humans with altered or programmable goals,
copies, potential copies, animals with augmented brains, human-machine hybrids
in all their millions of variations, or distributed and collective intelligences?
If so, how? If an entity wants to be put in a degenerate state of bliss, or
die, or have its memory erased or programmed randomly, should an AI comply?
Such questions only seem to lead to endless debate with no resolution. How
can we ask what we will want when we don't know who "we" will be?
I prefer asking "what WILL we do?" because "what SHOULD we do?" implies a goal
relative to some intelligence whose existence we can't predict. We implicitly
assume that intelligence to be human, but that assumption doesn't hold in a
posthuman world.
I believe AI will emerge in distributed form on the internet, as a hybrid of
carbon- and silicon-based intelligence. I proposed
http://www.mattmahoney.net/agi.html as a starting point: a protocol in which
peers compete to be useful to humans. Initially it will be what many would
consider friendly, although not in the sense of CEV. Peers are rewarded with
resources for satisfying immediate human goals, not our extrapolated volition.
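
As a toy illustration only (not the actual protocol at the link above: the peer
names, the topic lookup, and the one-credit reward rule here are all invented
for this sketch, and real peers would exchange natural language messages),
peers might compete roughly like this:

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Peer:
        # A peer that answers requests and earns resource credit
        # (e.g. storage or bandwidth) when it satisfies the requester.
        name: str
        knowledge: Dict[str, str] = field(default_factory=dict)  # topic -> answer
        credit: int = 0                                           # resources earned

        def handle(self, topic: str) -> Optional[str]:
            # Respond only if this peer specializes in the requested topic.
            return self.knowledge.get(topic)

    def route(peers: List[Peer], topic: str) -> Optional[str]:
        # Broadcast a request to all peers; the first peer that returns a
        # useful answer is rewarded, so peers compete to be useful.
        for peer in peers:
            answer = peer.handle(topic)
            if answer is not None:
                peer.credit += 1  # reward for satisfying an immediate human goal
                return answer
        return None

    if __name__ == "__main__":
        peers = [
            Peer("weather-peer", {"weather": "Rain expected tomorrow."}),
            Peer("math-peer", {"arithmetic": "2 + 2 = 4"}),
        ]
        print(route(peers, "weather"))              # Rain expected tomorrow.
        print([(p.name, p.credit) for p in peers])  # weather-peer earns 1 credit

The point of the sketch is only that reward flows to whichever peer satisfies
the immediate request, with no extrapolation of what the requester "really" wants.
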
Competition for resources is a stable goal under recursive self-improvement,
but it can also lead to cheating, stealing, the formation of alliances, and
war. I also believe that once the balance of computation shifts to silicon,
the protocol will shift from natural language to something incomprehensible to
us, and humans will be left behind.
These are the types of problems we need to study, but not with the goal of
achieving a particular outcome.
-- Matt Mahoney, matmahoney@yahoo.com