From: Brian Atkins (brian@posthuman.com)
Date: Thu Jun 20 2002 - 19:55:56 MDT
Anders Sandberg wrote:
>
> On Tue, Jun 18, 2002 at 11:20:00PM -0400, Brian Atkins wrote:
> > Anders Sandberg wrote:
> > >
> > > Fast transformation scenarios tend to be very inhomogeneous. A small
> > > subset of the world rushes away, and differences increase exponentially.
> > > This produces disparities that are likely sources of aggression. Slower
> >
> > Wow, he says it like he already lived through it. Or is this just based
> > on some simplistic math model you cooked up?
>
> I have just spent a number of years on this list, listening to the
> scenarios people come up with, and reading sf novels dealing with
> singularity-related issues. Again, my criticism is not about the
> uncertainties of real technical development but about how people think
> about it.
This is not a real answer... you made a fairly strong assertion.
>
> > I know that if I was part of
> > a small subset of the world that had somehow managed to quickly transform
> > ourselves into superintelligent entities that one of the first things I'd
> > want to do would be to offer such capabilities to everyone else.
>
> Sure, there would be extrosattvas. But would these help others at a
> speed comparable to the transformation itself? Isn't it likely that
> help would be far slower, if only due to the usual limitations of
> moving atoms around?
There may not be that much in the way of atoms to move around. I'm sure
you've read Diaspora, which actually takes a rather slow approach to the
issue of offering uploading to the whole world simultaneously. Put some
thought into it and you can come up with much faster ways to do it...
especially when you're superintelligent.
>
> > It's the technology created that drives a fast transformation that also
> > makes it possible to offer the rapid transformation to everyone equally
> > and quickly.
>
> Note that you assume technology is the driver. What about economics
> and culture? Even the "AI in a box" scenario contains elements of
> these, since it is still resource-constrained and will act from its
> knowledge and assumptions.
Economics doesn't really come into the picture once we have full nanotech.
We can probably grow an uploading machine out of local materials outside
every town on Earth for free, and the computronium to run them is essentially
free as well. Where exactly do you see economics entering the picture when we
are talking about SIs and full nano? The amount of "money" required to upload
everyone is practically zero compared to the wealth available at the time.
And anyone who uploads can be brought "up to date" almost instantly if
they want. No problemo, most likely.
Culture is the issue that will hold many people back from taking advantage
of such a scenario, and there's not much technology can do about that beyond
attempts at persuasion that aren't deemed to be exploiting their lower
intelligence. Actually, I can't say that for sure, since there is always the
chance the group of uploaders may decide that forcibly uploading everyone is
preferable for some reason I can't envision right now. At any rate, if this
does cause any lack of participation or anger on the part of people "left
behind", they have no one to blame but themselves. I don't see this as an
important reason to postpone a potential Singularity. If we had to wait until
everyone was comfy with the idea, millions of people would die in the
meantime.
>
> > > I think we need policies to enable better fielding of technologies. These
> > > policies don't have to be top-down laws; they could just as well be in
> > > the form of insurance. If you have to pay for the risks you induce in
> > > others by insurance premiums, then very risky development will be done
> > > more carefully, or moved elsewhere like in space. In many cases I think
> > > we actually need to help technology advance more freely rather than faster:
> > > we need a broader range of options to test and choose from. This also
> > > gives us more data to build later decisions on.
> >
> > This sounds remarkably close to a Luddite fantasy, and seems certain to
> > halt development of any of the "big three" technologies that people like
> > Bill Joy worry about. What company is going to pay to set up a lab in
> > space to develop nanotech, AI, or advanced biotech? The only ones with
> > the funds would be large governments.
>
> If the risks involved are judged by the market (note that I do not
> subscribe to the notion of centralized experts making the risk
> judgements) to be so great that you have to put the lab in orbit, then
> maybe it is a good idea?
Perhaps an interesting example is the skyrocketing rates many businesses
here are facing for terrorism insurance? It has reached the point where the
government has to get involved just so that important services like airlines
can keep operating. And those massive premiums are for destructive potential
(and probabilities) that are relatively small compared to what we're
discussing. Have you done any kind of research to guesstimate what insurance
costs a nanotech startup like Zyvex might be required to pay under your
idea?
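To make the question concrete, here's a rough back-of-envelope sketch of how
such a risk-priced premium might be computed; the probability and damage
figures are purely made-up placeholders, not real estimates for Zyvex or
anyone else:

    # Hypothetical illustration only: an actuarially "fair" premium is roughly
    # the expected loss imposed on others, times a loading factor for uncertainty.
    def annual_premium(p_accident, expected_damage, uncertainty_loading=2.0):
        """Expected externalized loss per year, scaled by a risk/uncertainty factor."""
        return p_accident * expected_damage * uncertainty_loading

    # Placeholder numbers (NOT real estimates): a 1-in-10,000 chance per year of
    # an accident causing $10 billion in damage, with a 2x uncertainty loading.
    print(annual_premium(1e-4, 1e10))   # -> 2,000,000.0 dollars per year

Nudge the assumed probability or damage by an order of magnitude and the
premium swings by the same amount, which is exactly why I'm asking how anyone
would price such a policy.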
Furthermore, what basis do you have for even proposing the idea that
businesses be /forced/ to buy such insurance? Shouldn't a business be
free to decide whether it wants to protect itself from liabilities or
not?
>
> The Asilomar resolution about genetic engineering and the moratorium
> in the early 70's was a sane strategy when dealing with what *might*
> be an existential risk. But it was not the luddite/Joy approach of
> saying "this could be risky, so it should not even be investigated".
> Instead it suggested that we progress carefully, especially trying to
> get the information needed for a proper risk assessment which can be
> used to adjust the regulations continuously. This is a far more
> rational strategy than stopping all research or not taking any
> precautions with it. The problem genetic engineering ran into turned
> out to be more about public perceptions and PR than safety.
>
Well, we already have nanotech guidelines from Foresight and AI guidelines
from SIAI, including ideas on how to proceed carefully in developing a
seed AI as well as how to test it before release. These ideas will be
improved as time goes on (especially if more people give us really good
criticism!). Isn't this good enough? What exactly do you need to see
before you feel it would be safe to allow real AI development?
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/