From: Samantha Atkins (samantha@objectent.com)
Date: Mon Mar 11 2002 - 01:03:24 MST
Anders Sandberg wrote:
> On Sat, Mar 09, 2002 at 05:54:36PM -0500, Eliezer S. Yudkowsky wrote:
>
>>Personally, my own archetype for "comic-book depiction of fake genius" is
>>Adrian Veidt, from "The Watchmen". As far as I can tell, the moral of your
>>tale of Dr. Doom is the same as the moral of "The Watchmen": Comic-book
>>characters are never any smarter than the authors, regardless of whether the
>>other characters call them "geniuses", "supergeniuses", "the smartest man in
>>the world", et cetera. If these characters had real intelligence on the
>>remote order of that ascribed to them in the story, they would not need to
>>resort to taking over the world in order to fix it. A competent world-fixer
>>should be able to solve everything wrong with Earth using a medium-sized
>>research project and no military force or political coercion.
>>
>
> Complex problems sometimes have simple solutions, but it is very rare.
> As a conjecture I would say that the solution to a problem usually
> requires roughly the same algorithmic complexity as the causes of the
> problem. This is why it is so easy to solve an engineering problem - the
But often the "cause" is relatively simple once it is found
compared to the complexity of the effects. And often the cause
is much simpler to address. I don't believe that an engineering
solution will fix some of the most pressing problems. But I
also don't believe it necessarily or even generally takes equal
complexity to solve them.
> laws of physics are fairly simple, and the specifications usually not
> too complex. Social problems on the other hand are extremely complex,
> involving many autonomous agents controlled by complex minds that create
> complex interaction structures all the time. This does not mean social
> problems are unsolvable, but that usually partial solutions or solutions
> that we know are worse than the imagined global optimum will have to do.
Unless you significantly change the nature of the agents and the
context of their interaction.
> The best solutions are those that harness the inherent complexity of the
> system itself to regulate it. Instead of imposing a simplistic order
> from the outside, they allow a complex order to grow from inside.
>
All other things being equal, this is true. But how long do we
expect enough other things to stay equal? If enough of the
fundamental context changes, then a complex order may well
develop, but the result is not necessarily the original system
regulating itself. The original system has been transformed.
> A supergenius able to solve the "problem" of the state of the world
> would need a mind with a power equivalent to a sizeable fraction of the
> world. Its solutions may very well be utterly innovative and ingenious
Again, not necessarily. Complexity can and often does grow out
of only a handful of active elements.
> and far more elegant than invading someplace (although there is no a
> priori reason why violence might not be a relevant part of the
> solution). The problem is that this assumes there is *one* such being;
> even if the smartest entity is twice as smart as the next smartest, if
> there are enough complex entities running around the situation will
> still overwhelm the hypothetical supermind. Even when one imagines a
Unless the relatively smart manage to create a mind hugely
smarter and wiser than their own, and that mind can find and
understand the active elements and the effects of shifting and
modifying those elements. Even this will not be enough for all
of us to grow beyond our current limitations or for our
institutions to transform.
> vastly more powerful entity, the sheer mass of complexity in the world
> places some huge demands on it.
>
It depends on how you grok the complexity and what is being
attempted relative to that complexity.
> This is why I don't believe in central planning of societies, why
> self-organizing bottom-up institutions usually outperform the centralist
> top-down institutions in the long run and why it is better to let people
> decide for themselves what to do. It is also why I am suspicious of
> assumptions that technology will "fix" things on its own - it assumes
> that the introduction of a certain technology will lead to a fairly
> predictable cultural shift, which leaves out the real complexity of the
> sociocultural side of the equation.
>
I agree in that I don't believe technology, not even an SI,
will solve the problem of the transformation of humanity,
especially the evolutionary baggage standing in the way of its
own continuing progress. I think the full problem will take
some time (natural "real-world" or subjective) to "solve".
- samantha