From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 22 2002 - 10:40:43 MDT
Eugen Leitl wrote:
>
> On Thu, 20 Jun 2002, Brian Atkins wrote:
>
> > This is not a real answer... you made a fairly strong assertion.
>
> It doesn't require a Ph.D. to realize that a fast change is less than
> beneficial for slow-adapting agents. It's rather well documented that
> grand extinctions (6th, and counting) are causally linked to rapid changes
> in the environment.
Eugen, I know you are smarter than this. We ain't talking about a meteor
disaster or ice age here; this is fast change driven by smarter-than-human
intelligence. There is no reason to believe that anyone who decides
to stay behind as a normal human will have any negative issues to deal
with. They will likely live in a world that is protected from the kinds
of natural disasters you mention, a world of free basic resources where
they can pursue what they wish freely.
>
> It seems the only validation acceptable to you is actually hindsight
> (let's precipitate the Singularity, and see). Given the risks, that's
> simply unacceptable.
Eugen, you advocate creating a Singularity-ish future where human uploads
drive the change. There is inherent risk in that alone, not to mention
that while we wait for plain ole humans to slowly develop that uploading
technology over a long period of time, we are stuck here with various
existential risks still hanging over our heads (plus 150k deaths/day). Your
answer to that appears to be various fantasies involving space colonization
and relinquishment, which have little actual chance of working or becoming
possible in the next 50 years.
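To put that parenthetical figure in perspective, here is a rough back-of-envelope
sketch in Python (my own illustration, not something from the thread; both the
~150k deaths/day mortality figure and the 50-year delay are assumptions used
only for scale):

    # Back-of-envelope only; both inputs are assumptions for illustration.
    DEATHS_PER_DAY = 150000    # the rough "150k deaths/day" figure cited above
    DAYS_PER_YEAR = 365
    DELAY_YEARS = 50           # the hypothetical "next 50 years" horizon

    cumulative_deaths = DEATHS_PER_DAY * DAYS_PER_YEAR * DELAY_YEARS
    print(f"~{cumulative_deaths / 1e9:.1f} billion deaths over {DELAY_YEARS} years")
    # -> ~2.7 billion deaths over 50 years

Under those assumptions the cost of waiting adds up to roughly 2.7 billion lives.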
I present an alternate scenario involving an AI technology that can
potentially be developed much earlier, with the same or lower risk compared
to uploading humans, and so far I am not getting any good criticism of
it. Most people prefer to bash it using irrelevant arguments rather than
actually read the documentation involved in my proposed experiment protocol.
>
> > Economics don't really come into this picture when we have full nanotech.
>
> This is an assertion based on pure wishful thinking. Nanotechnology and
> hypereconomy do not put an end to the value of all goods. The value is never
> zero, even for goods that are cheap. The value can become arbitrarily
> high (in terms of current value) for rare items.
I'm still not getting it. Perhaps you can explain in more detail, and with
less hand-waving, what exactly prevents me as an SI (or even less) from
spending my time developing an uploading system that can be produced using
replicating nanotech and local materials available everywhere on Earth, and
then dropping it on Earth as a free gift.
>
> > We can probably grow an uploading machine out of the local materials outside
> > every town on Earth for free. Computronium to run them is essentially free
>
> It's not actually free, because those resources (designing, planning,
> deploying, energy, space) are unavailable for other, competing projects.
> The bottleneck would be design of the uploading box. The S in SI doesn't
> stand for Santa Claus. If it can do something, it doesn't mean it is
> motivated to do that.
I already stated I would be motivated to do so, so we can scratch that
off the list. As an entity likely running faster than real time, with the
likely ability to spawn off other instances of myself or semi-intelligent
design processes, I have plenty of free time for designing and planning,
so we can scratch those off the list. The amount of energy and matter under
my control at this point is likely way, way more than needed for this
project, so scratch that off. As for deploying and growing it, that shouldn't
be a problem either. What am I missing?
>
> > as well. Where exactly do you see economics coming into the picture when we
> > are talking SIs and full nano? The amount of "money" required to upload
>
> Economics never even leaves the picture. Competing SIs are engaged in
> activities. If you're not an SI you're a negligible player on such
> markets. At best you're a minor nuisance, at worst you've died when the
> SIs came to power.
Believe it or not (if you can de-anthropomorphize for just one nanosec),
some superintelligent entities may actually care about plain ole humans
left on Earth more than any given human is capable of caring right now. Feel
free to give a detailed answer as to why your scenario is the One True
SI Future.
>
> > everyone is practically zero compared to the wealth available at the time.
> >
> > And anyone who uploads can be almost instantly brought "up to date" if
> > they want. No problemo most likely.
>
> We don't even know we can do uploads yet. Simulating all of the biology
> accurately is out of the question (an issue of both model accuracy and
> resource requirements), and it is not obvious we can make
> self-bootstrapping abstracted models. We clearly can't make glib
> statements about properties of such uploads, their role and value in the
> new world order.
Non sequitur? This whole discussion we're having already assumes we can
develop some kind of FTA technology. If you want to start a discussion about
whether uploading is even possible, start another thread.
>
> > Culture is the issue that will hold many people back from taking advantage
> > of such a scenario, but there's not much technology can do about that other
> > than persuasion attempts that aren't deemed to be taking advantage of
> > their low intelligence. Actually I can't say that for sure, since there is
> > always the chance the group of uploaders may decide that forcibly uploading
> > everyone is preferable for some reason I can't envision right now. At any
>
> I can think of a reason: preventing their impending extinction. Like
> hauling away live frogs in buckets from a habitat about to be bulldozed.
Remind me not to be around after you upload unless I have a lot of powerful
friends with me or a Sysop.
>
> > rate, if this does cause any lack of participation or anger on the part of
> > people "left behind" they have no one to blame but themselves. I don't see
> > this as an important reason to postpone a potential Singularity. If we had to
> > wait until everyone was comfy with the idea, millions of people would die in
> > the meantime.
>
> Which is why we need validation and deployment of life extension and
> radical life extension technologies on a global scale.
I don't think humans will conquer all disease and aging anytime soon.
>
> > Well we already have nanotech guidelines from Foresight and AI guidelines
> > from SIAI including ideas on how to carefully proceed in developing a
>
> The SIAI "guidelines" and Foresight guidelines are simply not in the same
> league. (I'm being polite here).
Did something change, or have you still not even read our work?
>
> I'm not aware of a single concise list of don'ts in AI development which
> is accepted/peer reviewed. Right now most serious AI researchers would
> claim that this is premature, considering the state of the art.
Last I looked, you aren't a "serious AI researcher" yourself, so I'm not
sure how valuable your opinions are in this area. The few AI people out
there who seem to be aware of the potential of seed AI and are Singularity-
aware (Goertzel, Kurzweil, etc.) have made statements to the effect that
this work is NOT premature. How could it /possibly/ be premature when we
have people out there right now coding what they claim has seed AI potential?
I'm very glad at least our organization is attempting to reduce this risk,
although I wish there were more people working on the issue.
>
> > seed AI as well as ideas on how to test it before release. These ideas
> > will be improved as time goes on (especially if more people give us really
> > good criticism!). Isn't this good enough? What exactly do you need to see
> > before you feel it would be safe to allow real AI development?
>
> Human competitive general AI is never safe. Especially, if it can profit
> from positive feedback in self-enhancement. We humans cannot do this
> currently, so we would be facing a new competitor with a key advantage.
Again with the iron-clad statements with no basis to back them up. Last I
checked, you had no matching proof of the impossibility of Friendly AI.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/