From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 23 2001 - 00:42:26 MDT
(Originally sent as a private email to Damien; Damien asked that I forward
it on to the list. Slightly edited/augmented.)
==
My take on minimum guaranteed income (MGI):
1) To the extent that the disruption you postulate is *not* being
produced by ultraefficient AIs, it may not be a good investment for me to
form an opinion. Often there *are* no good answers to pre-Singularity
moral questions, and I tend to view pre-Singularity capitalism as a means
to an end. A lot of MGI debate seems to turn on whose "fault" it is. I
don't *care* whose fault it is, and since I'm trying to solve the problem
by means other than proposing new social structures for humans, I have no
*need* to decide whose fault it is. This is a moral question that will
make no sense post-Singularity, and my social contribution consists not of
rendering a judgement on the moral question and helping to produce social
conformity, but of working to turn the moral question into nonsense.
2) To the extent that the disruption you postulate is being produced by
efficient but not hard-takeoff AIs, a rapidly producing Friendly AI would
not care about green pieces of paper, or even vis own welfare,
except as means to an end, and would be expected to approach philanthropy
rather differently than humans do. This is possibly one of the few plausible
paths to a minimum guaranteed income without the need for government.
This scenario is highly implausible due to the expected Singularity
dynamics, and I mention it purely so that I can't be accused of "blowing
off a possible problem" just because it's a Slow-Singularity scenario.
3) To the extent that the disruption you postulate is being produced by a
hard takeoff, I don't expect any problems as a result; quite the
opposite. A nanotech-equipped Transition Guide is perfectly capable of
giving poor people as well as rich people what they want, if in fact the
Transition Guide would even notice the difference, which is unlikely.
("Notice" in the decisionmaking, not perceptual, sense). Under the Sysop
Scenario, everyone gets an equal piece of the Solar System, again
disregarding as utterly irrelevant any existing Earthbound wealth
inequities.
Basically, I don't see a minimum guaranteed income as necessary or
desirable at any point in the next decade if the Singularity occurs in
2010; we don't yet have enough ultraproductivity to blow off the
production drop introduced by the incentive changes of an MGI system, and I
don't expect to see that pre-Singularity. I don't expect the Singularity
to be delayed until 2030, but if it is, I'll be doing what I can on the
near-human Friendly AI side of it in the meantime.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence