From: Samantha Atkins (samantha@objectent.com)
Date: Mon Mar 11 2002 - 01:20:38 MST
Eugene Leitl wrote:
> On Fri, 8 Mar 2002, Colin Hales wrote:
>
>
>>
>>Eugene Leitl wrote........
>>
>>>........bottom of this gravity well, we're extremely vulnerable.
>>>There's a number of things which need to be done to make us less
>>>vulnerable, some of them straightforward, some less so. Some
>>>of them are protective (enhance people), some are offensive (addressing
>>>the scenarios of emergence/creation of AI).
>>>I can give you a list.
>>>
>>>
>>yes pls!
>>
>
> A few things off the top of my head (this is what a transhumanist think
> tank is supposed to be paid to do).
>
> * no evolutionary algorithm AI experiments unless air-gapped and following
> a containment SOP. Do not reconnect or reuse the components outside
> of containment until wiped clean (again following an SOP; at the very
> least, power down and do a full state wipe). Continuously revise the
> SOPs.
>
Really insufficient. Did you read James Hogan's "The Two Faces of
Tomorrow"? A flowery ending for an intractable problem.
> * harden the network by using provably secure protocols and introduce
> adaptive diversity into "code" via ALife route. Encourage h4x0rs to do
> their worst. Pay $$$ bounty to those who can break the most. Try to
> create a fully polymorphic worm with a probabilistic exploit seeker,
> and let it loose (after you've let it run in the lab, and patched up
> most of the holes). Launch big R&D program to learn to mutate machine
> code and FPGA gates. Sandbox anything above the protocol layer (try to
> make a protocol simple enough to be cast into hardware, geodetic
> routing could do this). Put watchdogged intrusion alert on top of this,
> forcing the machine back into default sane state (also for enduser
> boxes).
>
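The watchdog is the only piece of this that is easy to sketch - a
toy version (everything below, the manifest file and the golden
directory, is an invented example, not a real deployment) just
re-imposes a known-good state - and it is everything around it
that worries me:

    import hashlib, json, shutil, time

    # Toy watchdog: force monitored files back to a default sane state.
    # baseline_manifest.json ({path: sha256}) and /srv/golden are names
    # invented for this sketch.
    with open("baseline_manifest.json") as f:
        BASELINE = json.load(f)
    GOLDEN = "/srv/golden"  # pristine copy of every monitored file

    def sha256(path):
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    while True:
        for path, good in BASELINE.items():
            if sha256(path) != good:
                # Drift detected: restore the known-good version.
                shutil.copy2(GOLDEN + "/" + path.lstrip("/"), path)
        time.sleep(10)  # watchdog period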
How will you keep this "adaptive diversity" from adapting into
something smarter than you feel safe having around? What keeps
this worm from effectively growing in intelligence? Won't the
R&D program likely strengthen the possibility of the
self-enhancing AI that you seem so afraid of? Total security is
a myth. That doesn't mean we shouldn't try, just that we should
never expect to totally succeed indefinitely.
> * not make an AI any smarter than a chimp or a dog, until we're good and
> ready
>
What does this readiness look like? Won't we likely be dead,
unequal to the increasing complexity and speed of change, long
before we are "ready"?
> * track AI researchers, cluster sales, and what these clusters are used
> for
>
NO. Freedom is more important than tracking everyone and
everything that might be a little scary. If these possibly
scary people have to get permission for everything they think
of or want to try outside of what is already accepted, then you
can kiss much progress goodbye.
> * boost uploading research (freeze/slice/scan, neural emulation)
>
And when you find this approach is just not very tractable
without AI-level intelligences (if then)?
> * boost self-rep space R&D, establishing bridgehead off-planet
>
How does expanding the territory you need to police in order to
feel secure help you?
> * once you have uploading, let a small initial group act as
> regulators, create a large-scale Introdus programme. As soon as as many
> as possible are hardened maximally, remove the above constraints
>
You have a strong assumption that uploading is the only way to
a sane future. I have my doubts that minds optimized by, and
built for, 3-D meat space will make very healthy upload
citizens. I have even more doubts that getting uploading to
work reasonably will develop enough efficient intelligence fast
enough to deal with the increasing demand - particularly if AI
is largely put on hold.
In the short term I would put a lot more energy into human
augmentation, human-computer interfaces and symbiosis, and
developing medical nanotechnology than I would into uploads. I
believe that is a much faster way to maximize ourselves, our
survivability, and our intelligence, with relatively low impact
on security.
I would also put energy into shifting the socio-economic
assumptions of the world so that the advantage, and especially
the increasing ability, of one person is seen as a boon to
others rather than a threat. Without that, augmentation will be
fought tooth and nail, to say nothing of uploads.
- samantha