From: Anders Sandberg (asa@nada.kth.se)
Date: Sat Jun 15 2002 - 16:06:52 MDT
Gattaca was likely intended as a warning about the bad effects of technology
(some removed footage suggests this), but in its current form it is both a
watchable film and a more subtle take. As I see it, it shows that
genetics isn't everything, and that the real trouble with the future
society it describes is its prejudices - which are a cultural problem
rather than a technological one. Those prejudices exist today, and
would not vanish even if every genetic test were legislated into oblivion.
Hence dealing with them is a cultural challenge. Guess
where we come in?
On Sat, Jun 15, 2002 at 04:21:28PM -0400, Brian Atkins wrote:
> Harvey Newstrom wrote:
> >
> > We're talking 22 years minimum if the technology were ready today.
> > We're talking 27-32 years if the technology is available today. We're
> > talking 32-42 years if it has to go through FDA and government
> > approvals. Who knows how many additional years to gain public
> > preference over traditional mating.
> >
> > I doubt that the movie GATTACA could reasonably occur in real life until
> > after 2050.
>
> I hope Hal and everyone else here who still has dreams of "bio" technologies
> having any real chance of coming before AI takes this estimate and sticks
> it deep in their heads and cogitates a bit more on what a truly realistic
> view of the future is likely to be.
Of course, the AI side should think about the history of AI and ask
themselves whether their own timescale visions are any more reliable. There
may even be the problem that real AI will require a "childhood" of
experience to become useful, which would slow things down a lot.
I think one important sacred cow for all of us here on the list to
challenge is the "fast transformation assumption": that the change to a
trans- and posthuman state will occur over relatively short timescales,
and especially that it will happen *soon*. While there are some arguments
for this that make sense (like Vinge's original argument for the
singularity), and technology does have a general cumulative, exponential
feel, we shouldn't delude ourselves that this is necessarily how things
really are. We need to examine our assumptions and the possible
development paths more carefully.
> Going back to your question the other day Hal about whether we should worry
> more about bio and nano coming first, I hope this helps. As for nano think
> about it like this: we don't really have anything to worry about
> there until we get the fully fledged final stage of it ("drextech"). What
> comes before that is a long string of intermediate stage technologies which
> will have many spinoffs, including- you guessed it- extremely powerful
> computer technologies. We'll get massively powerful computer hardware well
> before drextech.
The problem isn't gods erupting from our machines, but mold. Industrial
accidents with bio or nano can be very bad even when the technology is
trivial. Imagine a widely used nanodevice that turns out to have a small
non-biodegradable part that jams a certain metabolic chain in soil
bacteria - a kind of microscopic silicosis. That could do terrible damage
without being smart, self-replicating or advanced. Once the problem was
diagnosed we could of course fix the design and start on the cleanup, but
by then serious harm would already have been done. A few failed rice or
wheat harvests, and we would be in real trouble.
This is the kind of low-tech, close-to-home problem that often gets
ignored in discussions about grand technological processes. But
accidents shape technology and the public perception of it, which in
turn controls what gets developed. There is a reason people accepted
railroad accidents but abandoned Zeppelins after the Hindenburg.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:14:48 MST