Re: Gattaca on TV this weekend

From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Jun 20 2002 - 17:21:57 MDT


On Tue, Jun 18, 2002 at 11:20:00PM -0400, Brian Atkins wrote:
> Anders Sandberg wrote:
> >
> > Fast transformation scenarios tend to be very inhomogeneous. A small
> > subset of the world rushes away, and differences increase exponentially.
> > This produces disparities that are likely sources of aggression. Slower
>
> Wow, he says it like he already lived through it. Or is this just based
> on some simplistic math model you cooked up?

I have just spent a number of years on this list, listening to the
scenarios people come up with, and reading sf novels dealing with
singularity-related issues. Again, my criticism is not about the
uncertainties of real technical development but about how people think
about it.

> I know that if I were part of
> a small subset of the world that had somehow managed to quickly transform
> ourselves into superintelligent entities, one of the first things I'd
> want to do would be to offer such capabilities to everyone else.

Sure, there would be extrosattvas. But would these help others at a
speed comparable to the transformation itself? Isn't it likely that
help would be far slower, if only due to the usual limitations of
moving atoms around?

> It's the technology created that drives a fast transformation that also
> makes it possible to offer the rapid transformation to everyone equally
> and quickly.

Note that you assume technology is the driver. What about economics
and culture? Even the "AI in a box" scenario contains elements of
these, since it is still resource-constrained and will act from its
knowledge and assumptions.

> > I think we need policies to enable better fielding of technologies. These
> > policies don't have to be top-down laws; they could just as well take the
> > form of insurance. If you have to pay for the risks you induce in others
> > through insurance premiums, then very risky development will be done
> > more carefully, or moved elsewhere, such as into space. In many cases I
> > think we actually need to help technology advance more freely rather than
> > just faster: we need a broader range of options to test and choose from.
> > This also gives us more data to build later decisions on.
>
> This sounds remarkably close to a Luddite fantasy, and seems certain to
> halt development of any of the "big three" technologies that people like
> Bill Joy worry about. What company is going to pay to set up a lab in
> space to develop nanotech, AI, or advanced biotech? The only ones with
> the funds would be large governments.

If the risks involved are judged by the market (note that I do not
subscribe to the notion of centralized experts making the risk
judgements) to be so great that you have to put the lab in orbit, then
maybe it is a good idea?
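To make the premium idea above concrete, here is a toy sketch (plain
Python, with invented numbers; it is only an illustration of how
expected-cost pricing could steer such choices, not a claim about the
actual risks):

  # Actuarially fair premium = expected cost imposed on others.
  def premium(p_accident, damage_to_others):
      return p_accident * damage_to_others

  # (probability of accident, damage to others, private development cost)
  # All figures are invented for the sake of the example.
  options = {
      "rushed earth-side lab":  (0.05,  1e9, 10e6),
      "careful earth-side lab": (0.005, 1e9, 30e6),
      "orbital lab":            (0.001, 1e7, 80e6),
  }

  for name, (p, damage, dev_cost) in options.items():
      total = dev_cost + premium(p, damage)
      print("%-22s total cost %6.1f M" % (name, total / 1e6))

With these arbitrary figures the careful earth-side lab comes out
cheapest, and orbit only pays off if the judged risk or damage is far
higher; the point is simply that the premium, not a regulator, does the
steering.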

The Asilomar resolution on genetic engineering and the moratorium of
the early '70s were a sane strategy for dealing with what *might* be an
existential risk. But it was not the Luddite/Joy approach of saying
"this could be risky, so it should not even be investigated". Instead
it suggested that we proceed carefully, especially by trying to get the
information needed for a proper risk assessment that can be used to
adjust the regulations continuously. This is a far more rational
strategy than either stopping all research or taking no precautions at
all. The problem genetic engineering ran into turned out to be more
about public perception and PR than about safety.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

