From: Anders Sandberg (asa@nada.kth.se)
Date: Sun Mar 10 2002 - 03:42:08 MST
On Sat, Mar 09, 2002 at 05:54:36PM -0500, Eliezer S. Yudkowsky wrote:
> Personally, my own archetype for "comic-book depiction of fake genius" is
> Adrian Veidt, from "The Watchmen". As far as I can tell, the moral of your
> tale of Dr. Doom is the same as the moral of "The Watchmen": Comic-book
> characters are never any smarter than the authors, regardless of whether the
> other characters call them "geniuses", "supergeniuses", "the smartest man in
> the world", et cetera. If these characters had real intelligence on the
> remote order of that ascribed to them in the story, they would not need to
> resort to taking over the world in order to fix it. A competent world-fixer
> should be able to solve everything wrong with Earth using a medium-sized
> research project and no military force or political coercion.
Complex problems sometimes have simple solutions, but that is rare.
As a conjecture I would say that the solution to a problem usually
requires roughly the same algorithmic complexity as the causes of the
problem. This is why it is so easy to solve an engineering problem - the
laws of physics are fairly simple, and the specifications usually not
too complex. Social problems, on the other hand, are extremely complex,
involving many autonomous agents controlled by complex minds that create
complex interaction structures all the time. This does not mean social
problems are unsolvable, but that we usually have to make do with
partial solutions, or with solutions we know are worse than the imagined
global optimum.
The best solutions are those that harness the inherent complexity of the
system itself to regulate it. Instead of imposing a simplistic order
from the outside, they allow a complex order to grow from inside.
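To make the conjecture concrete, here is a toy sketch in the spirit of
Ashby's law of requisite variety (the code, names and numbers are my own
invention, just for illustration): a regulator with fewer distinct
responses than the environment has distinct disturbances must let some
disturbances through, no matter how cleverly it chooses.

    import random

    # Toy model: each round the environment produces one of
    # n_disturbances, and the regulator picks one of n_responses.
    # A disturbance is cancelled only by an exactly matching response.
    def leak_rate(n_disturbances, n_responses, trials=10000):
        uncorrected = 0
        for _ in range(trials):
            d = random.randrange(n_disturbances)
            # Best possible strategy: match d when a matching response
            # exists, otherwise guess among the available responses.
            r = d if d < n_responses else random.randrange(n_responses)
            if r != d:
                uncorrected += 1
        return uncorrected / trials

    print(leak_rate(10, 10))  # ~0.0: regulator as varied as the world
    print(leak_rate(10, 3))   # ~0.7: a too-simple regulator must fail

A mind trying to "fix" society faces the same bound: unless its model
has variety comparable to the system it wants to steer, most of what
happens will fall outside its repertoire.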
A supergenius able to solve the "problem" of the state of the world
would need a mind with a power equivalent to a sizeable fraction of the
world. Its solutions may very well be utterly innovative and ingenious
and far more elegant than invading someplace (although there is no a
priori reason why violence might not be a relevant part of the
solution). The problem is that this assumes there is *one* such a being;
even if the smartest entity is twice as smart as the next smartest, if
there are enough complex entities running around the situation will
still overwhelm the hypothetical supermind. Even when one imagines a
vastly more powerful entity, the sheer mass of complexity in the world
places some huge demands on it.
This is why I don't believe in central planning of societies, why
self-organizing bottom-up institutions usually outperform centralist
top-down institutions in the long run, and why it is better to let
people decide for themselves what to do. It is also why I am suspicious
of assumptions that technology will "fix" things on its own - they take
for granted that the introduction of a certain technology will lead to a
fairly predictable cultural shift, which leaves out the real complexity
of the sociocultural side of the equation.
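As a throwaway illustration of the central planning point (again a toy
of my own, with made-up parameters): compare one plan fixed from a
snapshot of the world against agents who keep adjusting to their own
local conditions as those conditions drift.

    import random

    random.seed(0)
    N, STEPS = 100, 50
    world = [random.gauss(0, 1) for _ in range(N)]  # local conditions

    plan = sum(world) / N  # central planner: one action, one snapshot
    local = world[:]       # each agent acts on its latest observation

    central_err = local_err = 0.0
    for _ in range(STEPS):
        for i in range(N):
            world[i] += random.gauss(0, 0.3)       # conditions drift
            central_err += abs(world[i] - plan)    # fixed plan vs. reality
            local_err += abs(world[i] - local[i])  # one step of staleness
            local[i] = world[i]                    # the agent re-adapts

    print("fixed central plan, total error:    %.0f" % central_err)
    print("local adaptive choices, total error: %.0f" % local_err)

The numbers are arbitrary; the scaling is the point. The plan's error
grows with both the diversity of local conditions and their drift, while
the local decisions only ever pay for one step of staleness.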
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y