From: Brian Atkins (brian@posthuman.com)
Date: Fri Aug 03 2001 - 01:03:44 MDT
Reason wrote:
>
> --> Eugene Leitl
>
> > On Thu, 2 Aug 2001, Brian Atkins wrote:
> >
> > > Actually it may be possible to escape this paradox via technology.
> > > Ever read about the Sysop Scenario?
> >
> > I personally much prefer the Santa scenario, reindeer, elves, sleigh, and
> > all.
> >
> > Really, changing people's attitudes is hard work. By mentioning
> > implausible magic fairy dust scenarios you're not helping.
>
> The Sysop Scenario is just a technological manifestation of what I was
> talking about; that a libertarian society can't exist because someone has to
> coerce all the people who would try to set up a non-libertarian system. In
> the Sysop Scenario you could have libertarian subcomponents of society,
> because the Sysop would either a) change the participating intelligences
> such that they could deal with this, or b) prevent any drift away from
> libertarianism on a case by case basis.
>
> [None of which much helps a discussion of how to set up a libertarian
> society now].
>
> Many of Modesitt's postulated future societies revolve around point b) [but
> with a human organization in control, checked by some form of feedback
> mechanism from corruption, such as total transparency through nanotech
> monitoring of everything, or genetic modification of the way in which people
> think, etc] -- rigid control of what at first might appear to be a "free"
> society and rigid control of the technologies that would allow any one
> person to go out and wreck things.
>
I'm not sure you have a complete grasp of the Sysop Scenario. It is not
about "forcing" anything at all. No one is forced to modify their mind
or live a libertarian life. The whole point is that everyone is free to do
what they want, live the way they want, etc. It is volition-based. It
is about protecting what you want and how you want to live.
I would not call it a libertarian society, or any other kind of society.
Some people may form governments or societies inside of it; others may not
participate at all and focus on individual pursuits. Yet they can all still
interact with each other without any possibility of stepping on each
other's toes (unless they want their toes stepped on).
I have yet to see a better solution to the issue. At some point the matter
(as in atoms) must fall under someone's control, and personally I don't
relish the idea of having to constantly protect myself from everyone else
who can't be trusted with nanotech and AI. All it takes is one Blight to
wipe us out. That kind of threat does not go away as humans progress to
transhumanity; rather, it increases in likelihood. What is the stable state
if not the Sysop or total death? There may be other possibilities; can you
name some?
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/