Zero Powers wrote:
> So are you suggesting that, if nukes were handy for many other things
> besides huge explosions, you'd be able to buy them at your local Radio
> Shack?
Well, they might not want to stock them on the shelves, what with the
radiation hazard and all. ;-)
But seriously, look at what has happened with other weapons of mass
destruction. Chemical weapons aren't quite as bad as nukes, but they aren't
anything to sneeze at, and they are actually better tools for a terrorist or
non-suicidal nutcase. Nevertheless, there are thousands of civilian
facilities all over the world that are capable of manufacturing chemical
weapons if their owners so desire. In fact, there are many chemical plants
that actually do manufacture large volumes of highly toxic gases which
would make effective terrorist weapons. Despite the obvious theoretical
dangers, regulation of the chemical industry is mostly restricted to worker
safety measures (OSHA and the like), rather than keeping the stuff away from
lunatics.
The situation with biological weapons is similar. Biological warfare is
potentially much more dangerous than nuclear weapons, and yet the technology
is in civilian hands at biotech companies all over the world. Once again,
many of these labs are actually used to do genetic engineering of microbes,
and yet the regulatory climate focuses primarily on preventing accidental
releases and ensuring the safety of commercial products.
In both cases the benefits of relatively free deployment of the technology
are so great that efforts at strict control are doomed to failure. Only a
spectacular string of very scary-looking disasters would convince the public
to support strict keep-it-away-from-lunatics regulation, and nothing like
that has happened. Which raises an interesting question: why haven't we
had any serious problems with terrorist use of these technologies?
> So do you think government is stupid enough to avoid regulation of
> nanotech until *after* everybody already has a near-anything device?
Now, let's not make the genie machine error here. We know better than that.
Early nanotech will use expensive, specialized assemblers guided by arcane
software and requiring extremely complex (and expensive) design efforts.
That means it is only really useful to large organizations that can afford
to hire teams of specialists to run the systems.
As the technology improves it will become cheaper, more flexible, and more
automated, but that still doesn't mean everyone needs their own assemblers.
You are likely to have several intermediate stages, as the cost and
complexity of running an assembler system gradually falls. By the time it
is practical for an individual to have his own system, you've already had
nanotech long enough for major economic and social changes to take place.
Even then, you still can't just make anything you want. Designing and
programming the things you want to build still takes effort (fantastic
amounts of it, if you want to make things like utility fog or smart matter).
I'm sure that AI will gradually take on more and more of this burden as time
goes on, but it isn't going to be an instant transition.
By the time you could have real anything boxes, you are no longer talking
about a society of human beings. The AI technology that powers the anything
box could also be used for uploading, programming sentient AI, enhancing
human intelligence, etc.
So, if you want to talk about human societies with some vague resemblance to
the ones that currently exist, that means you're talking about early
nanotech at most. If you want to talk about a society with more advanced
nanotech and/or AI, we need to take into account the effects of very rapid
economic growth, ubiquitous robotics, virtual worlds, enhanced reality,
moderately advanced physiological reconstruction/enhancement technology, and
all the other fun stuff that is likely to turn the world upside down before
those advanced technologies ever get here.
Billy Brown
bbrown@transcient.com
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:09:03 MDT