From: Brian Atkins (brian@posthuman.com)
Date: Tue Aug 20 2002 - 20:32:18 MDT
Lee Daniel Crocker wrote:
>
> > (Harvey Newstrom <mail@HarveyNewstrom.com>):
> >
> > >>(Michael Wiik <mwiik@messagenet.com>):
> > >>
> > >>That is, I see a libertarian utopia as either solitary (where we each
> > >>exist in different universes) or highly chaotic and unstable (anything
> > >>goes).
> > >
> > >You say "chaotic and unstable" as if those are bad things. Only dead
> > >things are stable. Life is chaos, uncertainty, risk. If your "utopia"
> > >is safe and clean and easy, I'll have no part of it.
> >
> > Whoa! I assume this is over-stated. Are you saying that
> > safe/clean/easy is unlikely, is dangerous, has side-effects, is too
> > boring, or what? A blanket statement that you reject safe/clean/easy
> > solutions seems to merit further explanation. If something were really
> > simple/safe/clean/easy, why would you reject it?
>
> I wouldn't say "overstated", but it is, as much of what I write is,
> soundbite-ish. Even so, my image of hell is a life without risk.
> Everything that makes life interesting and valuable and desirable is
> unpredictable. If the world ever became a static utopia where there
> was nothing to struggle against, nothing to strive for, no dangers to
> overcome, what's the point of my existence? There's nothing wrong
> with that struggle being /toward/ safer, cleaner, easier, etc.; it's
> just that I don't want to ever reach a point where I'm "done". Life
> should be like a research project--every new discovery should bring
> up more questions than it answers.
>
I know everyone seems to hate it for one reason or another, but the
thread is crying out for a mention of the Sysop Scenario. Lee is
perfectly welcome to opt out of any and all "protections" the Sysop
could offer if he really has a death wish.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/