Anders Sandberg wrote:
>
> On Fri, Aug 03, 2001 at 03:03:44AM -0400, Brian Atkins wrote:
> > Reason wrote:
> >
> > I have yet to see a better solution to the issue. At some point the matter
> > (as in atoms) must fall under someone's control, and personally I don't
> > relish the idea of having to constantly protect myself from everyone else
> > who can't be trusted with nanotech and AI. All it takes is one Blight to
> > wipe us out. That kind of threat does not go away as humans progress to
> > transhumanity, rather it increases in likelihood. What is the stable state
> > if not Sysop or total death? There may be some other possibilities, can
> > you name some?
>
> Stable states are dead states. I would say another possibility would be an
> eternally growing self-organised critical state - sure, disasters happen,
> but countermeasures also emerge. The whole is constantly evolving and
> changing.
When I say "stable state" I don't mean anything more than a certain
specific underlying layer of "services" that makes it impossible to begin
sliding toward the death stable state (unless, of course, everyone suddenly
decides to do so). Think of it as something like the Constitution of the
USA: it provides a basis for preventing certain things from happening, yet
it does not cap or limit what grows on top of it.
>
> Having (post)human development constrained to a small part of the available
> technology/culture/whatever space in order to ensure safety is going to run
> into a Gödel-like trap. There are likely undecidable threats out there,
> things that cannot be determined to be dangerous or not using any finite
> computational capability. Hence the only way of ensuring security is to
Can you be more specific about this? It sounds like the argument of people
who claim we will eventually run into something uncomputable. How about an
example? The Sysop can simulate anything it runs into inside itself to find
out what it does. Only if something were "bigger" than the Sysop itself
would it be unable to do this, or at least it seems that way to me.
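If the worry is the standard halting-problem style of argument, here is a
rough Python sketch (my own toy construction, nothing taken from any actual
Sysop design) of the kind of "uncomputable" example I assume you mean: a
threat that embeds a copy of whatever analyzer is checking it and then does
the opposite of the prediction. Note that the construction only goes
through if the threat can contain a working copy of the analyzer, which is
exactly the "bigger than the Sysop" case I mean.

# Toy diagonalization sketch. All names here are invented for illustration.

def naive_analyzer(program):
    """Stand-in for "simulate it and see": tries to predict whether
    running program() would be dangerous. Any real analyzer must either
    fail to halt on some inputs or sometimes be wrong."""
    # Deliberately silly heuristic, just so the demo runs:
    return "dangerous" in program.__name__

def build_diagonal_threat(analyzer):
    """A program wrapped *around* the analyzer: it asks the analyzer
    about itself, then does the opposite of whatever was predicted."""
    def dangerous_if_cleared():
        if analyzer(dangerous_if_cleared):   # predicted dangerous...
            return "harmless behavior"       # ...so behave harmlessly
        else:                                # predicted safe...
            return "harmful behavior"        # ...so misbehave
    return dangerous_if_cleared

threat = build_diagonal_threat(naive_analyzer)
print("analyzer predicts dangerous?", naive_analyzer(threat))
print("actual behavior:            ", threat())

The toy analyzer is simply wrong about this threat; a smarter analyzer that
tried to simulate the threat exactly would instead end up simulating itself
simulating itself, without ever reaching a verdict. That, as I understand
it, is the trade-off being claimed.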
> limit ourselves to a finite space - which goes counter to a lot of the core
> transhumanist ideas. Or the sysop would have to allow undecidable risks and
> similar hard-to-detect threats. One category of threats to worry about is
> of course threats the sysop would itself run into while looking for threats
> - they could wipe us out exactly because (post)humanity had not been
> allowed the necessary dispersal and freedom that might otherwise have
> saved at least some.
I don't think this makes a lot of sense. Remember that the hypothetical
Sysop is primarily concerned (at least in its spare time :-) with ensuring
safety for all. If it did happen to run into something it couldn't handle
while, say, hitching along in some transhumanists' spaceship, it would most
certainly WARN everyone it could. Remember, the Sysop Scenario only actually
happens if an FAI decides it is the best way to go. However, that doesn't
preclude it from changing its mind later on if the approach turns out to be
a failure or a better way is discovered.
>
> This is essentially the same problem as any enlightened despot scheme (and
> there is of course the huge range of ethical problems with such schemes
> too), put in a fresh sf setting. Enlightened despots make bad rulers
> because they cannot exist: they need accurate information about the
> preferences of everybody, which is not possible for any human ruler. The
> version 2.0 scenario assuming the omniscient AI runs into the same problem
> anyway: it would need to handle an amount of information of the same order
> of magnitude as the information processing in the entire society. Hence it
> would itself be a sizeable fraction of society information-wise (and itself
> a source of plenty of input in need of analysis). Given such technology, as
> soon as any other system in society becomes more complex the ruler AI would
> have to become more complex to keep ahead. Again the outcome is that either
> growth must be limited or the society is going to end up embedded within an
> ever more complex system that spends most of its resources monitoring
> itself.
I don't think that is necessarily the case. In my view, the Sysop only has
to be as complex (in terms of intelligence, not capacity) as the most
complex entity in Sysop Space. Every time it runs into something new it
will, as you say, need to evaluate it, but after that it will already have
the "recipe" stored for it. What it really comes down to is perhaps
granularity: can the Sysop distribute copies of itself around to different
locales as the Space grows? If so, I don't see the potential for some kind
of complexity explosion. Again, it looks to me more like a relatively
stable underlying cost of future society, just like electricity or
government is for us today.
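To make the "recipe" idea concrete, here is a rough Python sketch (purely
illustrative; the class and function names are my own invention, not any
design document): each new phenomenon gets the expensive evaluation exactly
once, the resulting recipe is cached, and copies of the Sysop in different
locales share that cache, so the per-locale cost stays roughly flat as the
Space grows instead of exploding.

# Illustrative "evaluate once, cache the recipe" sketch. Invented names only.

class SysopNode:
    """One local copy of the Sysop, responsible for a single locale."""

    def __init__(self, shared_recipes):
        # The recipe store is shared by all copies, so any phenomenon is
        # analyzed from scratch at most once anywhere in Sysop Space.
        self.recipes = shared_recipes

    def evaluate(self, phenomenon):
        """Stand-in for the expensive first-contact analysis/simulation."""
        return "contain" if phenomenon.startswith("grey goo") else "permit"

    def handle(self, phenomenon):
        if phenomenon not in self.recipes:        # first contact anywhere
            self.recipes[phenomenon] = self.evaluate(phenomenon)
        return self.recipes[phenomenon]           # cache hit thereafter

shared = {}                                       # one shared recipe store
locales = [SysopNode(shared) for _ in range(3)]   # one copy per locale

print(locales[0].handle("grey goo variant A"))    # evaluated once: contain
print(locales[2].handle("grey goo variant A"))    # cache hit, no re-analysis
print(locales[1].handle("benign art project"))    # permit

The open question is still the one I mentioned: whether sharing recipes
like this keeps the underlying cost roughly constant, the way electricity
or government is for us today, rather than growing with the complexity of
the Space.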
>
> The idea that we need someone to protect ourselves from ourselves in this
> way really hinges on the idea that certain technologies are instantly and
> thoroughly devastating, and assumes that the only way they can be handled is
> if as few beings as possible get their manipulators on them. Both of these
No, actually not. I personally wouldn't like it even if a relatively small
tech catastrophe killed off only 100k people. Also, I reiterate that the
Sysop Scenario does not limit handling these technologies; it only limits
using them in certain ways against a particular person or group of people,
etc.
> assumptions are debatable, and I think it is dangerous to blithely say that
> the sysop scenario is the only realistic alternative to global death. It
I'm waiting to hear other scenarios. Gambling is not one, or at least it is
quite a bit more debatable and dangerous than the Sysop.
> forecloses further analysis of the assumptions and alternatives by
> suggesting that the question is really settled and there is only one way.
Just to make it clear, I do not hold this viewpoint. I am eager to find an
even better theoretical solution; so far I don't see any. Just because we
have survived nuclear war (so far) does not mean we will automagically
survive when 6 billion individuals possess far greater powers. From my
perspective, anyone advocating a completely "hands off" attitude toward the
future is acting extremely irresponsibly.
> It also paints a simplistic picture of choices that could easily be used to
> attack transhumanism, either as strongly hegemonic ("They want to make a
> computer to rule the world! Gosh, I only thought those claims they are
> actually thinking like movie mad scientists were just insults.") or as
> totally out of touch with reality ("Why build a super-AI (as if such stuff
> could exist in the first place) when we can just follow Bill Joy and
> relinquish dangerous technology?!").
Well, you'll be glad to know we pretty much wiped out all references to
this on our website months ago. The new meme is "Transition Guide";
feedback would be great.
>
> A lot of this was hashed through in the article about nanarchy in Extropy,
> I think. Nothing new under the Dyson.
I don't think I've seen that; does anyone have a quick URL?
>
> Existential risks are a problem, but I'm worried people are overly
> simplistic when solving them. My own sketch above about a self-organised
> critical state is complex, messy and hard to analyse in detail because it
> is evolving. That is a memetic handicap, since solutions that are easy to
> state and grasp sound so much better - relinquishment of technology or
> sysops fit in wonderfully with the memetic receptors people have. But
> complex problems seldom have simple neat solutions. Bill Joy is wrong
> because his solution cannot practically work, the sysop is simple to state
> but hides all the enormous complexity inside the system.
>
Well, Anders, it is not as if we here are simpletons, okay? We've thought
through these issues as much as anyone else around here; we just came up
with a different and quite possibly superior solution. I think attacking it
because it sounds simple is not very useful. Sometimes elegant ideas are
better, and sometimes leaving things to chaos is bad. I think the burden of
proof is as much on you as on me here, so at this point I remain
unconvinced, to say the least, that a "messy" future is either less risky
or more desirable.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/