From: Anders Sandberg (asa@nada.kth.se)
Date: Sat Nov 16 2002 - 15:02:32 MST
> "Robert J. Bradbury" <bradbury@aeiveos.com> Wrote:
>
> There seems to be a fundamental assumption in the libertarian perspective
> that there can *never* be a computer/algorithm sufficiently complex that
> it can optimize both local and global conditions (i.e. squeeze the
> waste and redundancy out of the economy).
> Given the Moore's Law advancement in computational capacity that will
> soon shift over onto the nanotech based track giving us 1 cm^3 computers
> with the capacity of 10^5 human minds -- I *really* have to question
> whether local (personal) decisions will always trump global (planned)
> decisions.
OK, let us make a thought experiment. I have built the new Golem XV
computer, a few cubic kilometers (see at the end for specs) of
computronium able to simulate all of humanity and the goings on in the
entire biosphere at sufficient precision to give accurate predictions
for significantly long periods (say a year for detailed forecasts and a
decade for sketchier estimates).
It uses neutrino scanners to constantly build an accurate picture of how
the world is right now. It can even be run in "relaxation mode", where
it makes a prediction, predicts the consequences of announcing that
prediction, and iterates this until a stable result is reached. How do
we use this oracle to bring about a nice economic allocation system?
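[ To make "relaxation mode" concrete, here is a toy sketch in Python;
the predict() function, the scalar forecast and the convergence test are
purely hypothetical stand-ins, not a claim about how Golem XV would
actually work:

    def relax(predict, world_state, max_iter=100, tol=1e-6):
        # Fixed-point iteration: forecast, announce the forecast, forecast
        # the world's reaction to the announcement, and repeat until the
        # announced forecast stops changing (a self-fulfilling prophecy).
        announced = None
        for _ in range(max_iter):
            forecast = predict(world_state, announced)
            if announced is not None and abs(forecast - announced) < tol:
                return forecast
            announced = forecast
        raise RuntimeError("no stable forecast - the dynamics may be chaotic")

    # toy usage: a "world" dragged halfway toward whatever is announced;
    # the stable forecast is the fixed point x = 1 + 0.5x, i.e. 2.0
    print(relax(lambda w, a: 1.0 + 0.5 * (a if a is not None else w), 2.0))
]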
Let's say we try to use it to run a planned economy. We try to make
everybody equally well off in some material sense. We make an
allocation scheme, and it predicts the outcome. OK, scheme 1 leads to a
scarcity of food in Manhattan, so let's up the bread allocation there.
Now scheme 2 leads to starvation in Chad. OK, we change the settings a
bit. Scheme 3 produces other unexpected consequences. And so on, because
the entire human system is chaotic and a microscopic change in initial
conditions leads to macroscopic changes in the result after a while
(this is also why the relaxation mode is a very suspect assumption). But
suppose we iterate until we reach an allocation scheme that provides
enough for everybody. This scheme will have several problems. One is
that it is likely unstable to external influences of sufficient size
(say a meteor or a solar fluctuation). The more serious problem is that a lot of
people will be unhappy with it - the artist who was told to bake bread
instead of paint, the mother who thinks her child gets too little to eat
or the transhumanist who is not given any nanoupgrades. They will claim
the system is unfair, since it doesn't allow them to pursue their own
lives. Telling them that this is the optimal solution won't convince
them, since they could reasonably argue that it is only a local optimum
and that there is another solution that makes them better off.
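[ The planning loop just described, as a toy Python sketch; the
simulate() stand-in, the fixed NEED and the patching rule are all made
up for illustration, and nothing guarantees such a loop converges on a
chaotic system:

    NEED = 100.0   # units of bread each region is supposed to end up with

    def simulate(scheme):
        # stand-in for Golem's forecast: only 90% of an allocation arrives
        return {place: 0.9 * alloc for place, alloc in scheme.items()}

    def plan(scheme, max_iter=1000):
        for _ in range(max_iter):
            outcome = simulate(scheme)
            short = [p for p, amount in outcome.items() if amount < NEED]
            if not short:
                return scheme              # "enough for everybody"
            for p in short:
                scheme[p] *= 1.1           # crude local patch: allocate more
        raise RuntimeError("still patching after %d schemes" % max_iter)

    print(plan({"Manhattan": 80.0, "Chad": 50.0}))
]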
OK, let's be utilitarian and make everybody equally happy instead. How
do we do that with Golem XV? We could measure the spikes of the dopamine
neurons in the ventral tegmental area, but it isn't obvious if that
really tells us how happy people are - are large brains happier? And
happiness is something dynamic that changes over time, so we have to
look at long averages. The only way that seems reasonable is to run the
world simulation, copy each person into a buffer at regular times and
ask them using some psychological method to rate how happy they are (a
less invasive version would simply make estimates based on behavior and
brain activity). In the end we get a new allocation scheme, likely
exceedingly complex and impossible to untangle. But everybody would be
nearly equally happy (by the estimates of Golem XV). Unfortunately this
is likely not a very high level of happiness - the artist still likely
makes the world much happier with his good bread, while his tortured
drawings only make himself happy. It is easier to cut off the mountain
tops to fill the valleys than to raise the land.
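[ A tiny numeric illustration of the mountain-tops point, with made-up
numbers: three people who turn resources into happiness at different
rates, and a fixed budget to distribute. Equalizing pins everyone near
the level set by the least efficient person, while maximizing the total
leaves most people with nothing:

    rates = [2.0, 1.0, 0.2]    # happiness per unit of resource, per person
    budget = 30.0

    # equal happiness: find the common level L with sum(L / r) == budget
    level = budget / sum(1.0 / r for r in rates)       # ~4.6 for everyone

    # maximum total happiness: give the whole budget to the best converter
    best = max(rates)
    totals = [best * budget if r == best else 0.0 for r in rates]  # [60, 0, 0]

    print("equal:", [round(level, 1)] * len(rates), "max-total:", totals)
]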
OK, let's maximize total happiness instead. Hmm, is it even additive? We
can of course let Golem maximize its happiness estimate (since it has,
by our orders, created such an estimate). But this runs into all the usual
problems with utilitarianism that fill the philosophy books - some
odious people will get killed, lots of individual life projects broken
on the altar of the public good. And the less happy will claim this
local optimum is bad. The usual utilitarian approach is to complicate things
by extending/deepening the meaning of utility, producing things like
rule utilitarianism. But running these approaches by Golem XV produces
sets of rules of action ("laws") that are very different from the kind
of economic resource allocation systems that have been devised. It seems
likely that they would involve various forms of free trade.
OK, let's try to maximize total wealth instead. Wealth is additive,
after all. Or is it? While money is an additive measurement of value,
value is something subjective (I value classical music over rap music
and would pay more for it than a rap fan would). A price is an estimate
of the average value people see in something. Just as with happiness,
this makes it impossible to compare the total value people experience
from their wealth (which is after all what we really care
about). We might be able to use Golem XV to maximize the amount of gold
and diamonds everybody had, but that would not be experienced as wealth
by many people who would prefer to have the wealth in other forms.
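[ The same point as a toy calculation with invented numbers: one bundle
of goods with a single market price, valued very differently by two
people with different tastes, so adding up money tells us nothing about
the value anyone actually experiences:

    prices  = {"classical_cd": 15.0, "rap_cd": 15.0, "gold_gram": 30.0}
    bundle  = {"classical_cd": 3, "rap_cd": 1, "gold_gram": 1}

    # willingness to pay per item, per person (purely made up)
    me      = {"classical_cd": 25.0, "rap_cd": 2.0, "gold_gram": 30.0}
    rap_fan = {"classical_cd": 2.0, "rap_cd": 25.0, "gold_gram": 30.0}

    price_of_bundle = sum(prices[g] * n for g, n in bundle.items())    # 90.0
    value_to_me     = sum(me[g] * n for g, n in bundle.items())        # 107.0
    value_to_fan    = sum(rap_fan[g] * n for g, n in bundle.items())   # 61.0
    print(price_of_bundle, value_to_me, value_to_fan)
]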
In the end we run into the same problems again and again: those less
well off in a planned system will have grounds to claim the system is
unjust (why should *they* be less well off). The incommensurability of
happiness, utility and value between people (since it *at best* involves
a translation between different brains - my 'mountain' symbol is
different from yours; it could also be that these things are
qualitative and cannot be compared even in principle) makes systems that
make people subjectively equally well off impossible.
In addition we have the big assumptions this scenario rests on: a
society that is so simple that all its interactions can be monitored,
modelled and predicted by an entity which is in contact with the
society. This entity is also assumed to be able to set any laws,
regulate all policy and business with no hindrances. It is also assumed
to be able to find, within a short time, at least those stable states of
the society's future evolution that maximize certain quantities.
[SF reference: The Machine in A.A. Attanasio's IMHO awful novel
_Centuries_. It didn't aim at maximizing everything; its edge was its
ability to *convince* people to go along with good courses of action. ]
[ OK, what would it take to do this for the current civilization?
Assume a Matrix system equivalent to the full uploading of six billion
people, that is around 1e24 synapses and a model of the Earth's
biosphere down to millimeter size (I assume a smart compression where
boring stratosphere is sampled less carefully than a complex insect; it
is still rather likely too crude), that is about 1e28 bits. A diamond
lattice with one bit per atom would store 1e23 bits/cm^3, so all this
could be stored in just a desktop device (!). Processors are presumably
somewhat larger. The devil lurks in the computations. If we assume the
entire biosphere runs at the same speed the human brain seems to run, it
needs to have around 1e3-1e4 updates per subjective second. To run a
year would require around 3e10 ticks of this system. Each update of each
little object is likely something akin to a few thousand floating point
operations, say around 1e5 basic operations. If we assume each basic
operation takes 1e-15 s (around the fastest switching time molecular
bonds allow) and the objects are updated in parallel, a run would take
just a second or so to finish. Power dissipation is the killer. Such a
run would presumably have to erase at least a fraction of the bits
generated, and 1e38 operations times kB T ln 2 at room temperature
(~3e-21 J per erased bit) is ~3e17 J, which over a run lasting a second
or so means ~1e17 W. Even a small fraction of that is rather too
large for most molecular systems to stand. Also, this assumes no
communication delays, but even a one meter cube has a 1e-9 second delay
between two parts at the edges, making the system a million times slower
if it has to wait for data to be moved around. OK, let's run it slower
and avoid having it blow up. But if a simulation takes a million seconds
it will take 11 days. Maybe not so bad if it is a long range forecast,
but if we need to iterate it to find an optimum we will want to run it
several times. But we can't iterate more than about 33 times if we want
some of the year to remain. And 33 iterations hardly look good enough to
guarantee people that this is the best optimum there is. (The arithmetic
above is collected in a short numeric sketch after this aside.)
I'm sure somebody can come up with a better design, but the
computational effort to simulate a low tech civilization like us on one
planet is significant. And a civilization able to build something like
this will very likely have nearly as complex devices or beings within
itself.
]
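[ A short numeric sketch redoing the arithmetic in the aside above in
Python, using the same round figures (all of them order-of-magnitude
guesses, not measurements):

    state_bits   = 1e28          # humanity (~1e24 synapses) plus mm-scale biosphere
    bits_per_cm3 = 1e23          # diamond lattice, one bit per atom
    storage_cm3  = state_bits / bits_per_cm3        # 1e5 cm^3, a desktop-sized box

    ticks        = 1e3 * 3e7                        # updates/subjective s * s/year ~ 3e10
    ops_per_tick = 1e5                              # basic operations per object update
    op_time_s    = 1e-15                            # fastest molecular switching time
    run_time_s   = ticks * ops_per_tick * op_time_s # ~3 s if objects update in parallel

    total_ops    = 1e38                             # the aside's figure for one run
    kT_ln2_J     = 3e-21                            # Landauer cost per erased bit, ~300 K
    power_W      = total_ops * kT_ln2_J / run_time_s   # ~1e17 W - the killer

    slowdown     = (1.0 / 3e8) / op_time_s          # light-lag across 1 m vs. op time, ~1e6
    slow_run_s   = 1e6                              # run slowed down to survive the heat
    runs_per_yr  = 3e7 / slow_run_s                 # ~30 optimization passes per year

    print(storage_cm3, run_time_s, power_W, slowdown, runs_per_yr)
]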
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y