From: Damien Broderick (d.broderick@english.unimelb.edu.au)
Date: Sat Jan 01 2000 - 05:08:18 MST
Thank you, Robert Heinlein, Ursula Le Guin, Phil Dick, Joanna Russ, the
whole strange weird bunch of you who were here before us. The year 2000 -
amazing, really! Where I am it's four in the morning of the first day of
science fiction's future. I just got home from celebrating with friends on
the far side of town, delivered free (for the night) by two electric trams
joined by a slow walk through the centre of a kind of trashed, beer-stinky
Mad Max Melbourne, looking a little as it had four decades ago during the
making of ON THE BEACH, but with more people, most of them young and
beautiful and pretty pissed (that's drunk, not angry). Torn paper
everywhere, smashed glass, police horses stepping politely, pools of puke,
nobody especially nasty nor wildly happy. Just your average post-fireworks
extravaganza to open the fabled year 2000.
And in a few hours' time, in the far northern city of Brisbane,
Queenslanders will read in their metropolitan newspaper what I had to say
(obvious to all of us here, but still confronting to most innocents out
there) about the arrival of the future:
========================
Everything you think you know about the future is wrong.
How can that be? Back in the '70s, Alvin Toffler warned of future shock,
the concussion we feel when change slaps us in the back of the head. But
aren't we smarter now? We have wild, ambitious expectations of the future,
we're not frightened of it. How could it surprise us, after *Star Trek* and
*Star Wars* and *Terminator* movies and *The Matrix* and a hundred computer
role-playing games have domesticated the 24th century, cyberspace virtual
realities, and a galaxy far, far away?
Actually, I blame glitzy mass-market science fiction for misleading us.
Its makers got it so wrong. Their enjoyable futures, by and large, are as
plausible as 19th century visions of tomorrow, with dirigibles filling the
skies and bonneted ladies in crinolines tapping at telegraphs.
Back in the middle of the twentieth century, when the futuristic stories I
read as a kid were being written, most people knew `that Buck Rogers stuff'
was laughable fantasy, suitable only for children. After all, it talked
about atomic power and landing on the Moon and time travel and robots that
would do your bidding even if you were rude to them. Who could take such
nonsense seriously?
Twenty years later, men *had* walked on the Moon, nuclear power was already
obsolete in some countries, and computers could be found in any university.
Another two decades on, in the '90s, probes sent us vivid images from the
solar system's far reaches, immensely powerful but affordable personal
computers sat on desks at home as well as work, the human genome was being
sequenced, and advanced physics told us that even time travel through
spacetime wormholes was not necessarily insane (although it was surely not
in the immediate offing).
So popular entertainment belatedly got the message, spurred on by
prodigious advances in computerised graphics. Sadly, the script writers and
directors still didn't know a quark from a kumquat, a light-year (a unit of
interstellar distance) from a picosecond (a very brief time interval). With
gusto and cascades of light, they blended made-up technobabble with
exhilarating fairy stories, shifting adventure sagas from ancient legends
and myth into outer space. It was great fun, but it twisted our sense of
the future away from an almost inconceivably strange reality (which is the
way it will actually happen) and back into safe childhood, that endless
temptation of fantastic art.
Maybe you think I'm about to get all preachy and sanctimonious. You're
waiting for the doom and gloom: rising seas and greenhouse nightmare,
cloned tyrants, population bomb, monster global mega-corporations with
their evil genetically engineered foods and monopoly stranglehold on the
crop seeds needed by a starving Third World. Wrong. Those factors indeed
threaten the security of our planet, but not for much longer (unless things
go very bad indeed, very quickly). No, what's wrong with the media images
of the future isn't their evasion of such threats. It's their laughable
conservatism.
The future is going to be a fast, wild ride into strangeness. And most of
us will still be there as it happens.
This accelerating world of drastic change won't wait until the 24th
century, let alone the year 3000. We can expect extraordinary disruptions
within the next half century. Many of those changes will probably start to
impact well before that. By the end of the 21st century, there might well
be no humans (as we recognise ourselves) left on the planet - but nobody
alive then will complain about that, any more than we now bewail the loss
of Neanderthals.
That sounds like a rather tasteless paradox, but I mean it literally: many
of us will still be here, but we won't be human any longer - not the
current model, anyway. Our children, and perhaps we as well, will be
smarter. In September, 1999, molecular biologists at Princeton reported
adding a gene for the extra production of NR2B protein to a strain of mice.
The extra NR2B enhanced the NMDA receptors in the brains of these `Doogie
mice', helping the animals solve puzzles much faster. Humans use an
almost identical protein.
Nor will we be the only intelligences on the planet. By the close of the
21st century, there will be vast numbers of conscious but artificial minds
on earth. How we and our children get along with them as they arrive out of
the labs will determine the history of life in the solar system, and maybe
the universe.
I'm not making this up. Dr Hans Moravec, a robotics pioneer at Carnegie
Mellon University in Pittsburgh, argues in *Robot* (Oxford University
Press, 1999) that we can expect machines equal to human brains within 40
years at the latest. Already, primitive robots operate at the level of
spiders or lizards. Soon a robot kitten will be running about in Japan,
driven by an artificial brain designed and built by Australian Dr Hugo de
Garis. True, it's a vast leap from lizard to monkey and then human, but
computers are *doubling* in speed and memory *every year*.
This is the hard bit to grasp: with that kind of annual doubling in power,
you jump by a factor of 1000 every decade. In 20 years, the same price
(adjusted for inflation) will buy you a computer a *million* times more
powerful than your current model.
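For anyone who wants to check that arithmetic for themselves, here is a quick back-of-the-envelope sketch - a few lines of Python, purely illustrative, taking the `doubling every year' rate as given:

    # Illustrative only: assume computing power doubles every year.
    for years in (10, 20):
        multiplier = 2 ** years  # how many times more powerful after that many doublings
        print(f"after {years} years: about {multiplier:,} times today's power")

    # after 10 years: about 1,024 times      (the `factor of 1000' per decade)
    # after 20 years: about 1,048,576 times  (the `million times' figure)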
At the end of the 1990s, the world's best, immensely expensive
supercomputers perform several trillion operations a second. To emulate a
human mind, Moravec estimates, we'll need systems 100 times better.
Advanced research machines might meet that benchmark within a decade, or
sooner - but it will take another 10 or 20 years for the comparable home
machine at a notepad's price. Still, before 2030, expect to own a computer
with the brain power of a human being. And what will *that* be like? If
software develops at the same pace, we will abruptly find ourselves in a
world of alien minds as good as our own.
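To see how that timetable hangs together, here is the same sort of rough sketch. The starting figures are my own illustrative guesses (several trillion operations a second for the best late-1990s supercomputers, about a billion for a home machine), not Moravec's exact numbers:

    import math

    supercomputer_ops = 3e12                   # "several trillion operations a second"
    human_brain_ops = 100 * supercomputer_ops  # Moravec's "100 times better"
    home_pc_ops = 1e9                          # a guessed late-1990s desktop machine

    def years_of_annual_doubling(current, target):
        # how many yearly doublings are needed to grow from current to target
        return math.log2(target / current)

    print(f"research machines: ~{years_of_annual_doubling(supercomputer_ops, human_brain_ops):.0f} years")
    print(f"home machines:     ~{years_of_annual_doubling(home_pc_ops, human_brain_ops):.0f} years")
    # with these guesses: roughly 7 years and 18 years - i.e. comfortably before 2030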
Will they take our orders and quietly do our bidding? If they're designed
right, maybe. But that's not the kicker. That's just the familiar world of
sci-fi movies with clunky or sexy-voiced robots. The key to future change
comes from what's called `self-bootstrapping' - machines and programs that
modify their own design, optimise their functioning, improve themselves in
ways that limited human minds can't even start to understand. Dr de Garis
calls such beings `artilects', and even though he's building their
predecessors he admits he's scared stiff.
By the end of the 21st century, Ray Kurzweil expects a merging of machines
and humans (*The Age of Spiritual Machines*, Allen & Unwin, 1999), allowing
us to shift consciousness from place to place. He's got an equally
impressive track record as a leading software designer and specialist in
voice-activated systems. His timeline for the future is even more
hair-raising than Moravec's. In a decade, we'll have desktop machines with
the grunt of today's best super-computers, a trillion operations a second.
Forget keyboards - we'll speak to these machines, and they'll speak back in
the guise of plausible personalities.
By 2020, a Pentium equivalent will equal a human brain. And now the second
great innovation kicks in: molecular nanotechnology (MNT), building things
by putting them together atom by atom. I call that `minting', and the
wonderful thing is that a mint will be able to replicate itself, using
common, cheap chemical feedstocks. Houses and cars will be compiled
seamlessly out of diamond (carbon, currently clogging the atmosphere) and
sapphire (aluminium oxide), because they will be cheap, appropriate materials
readily handled by mints.
Until recently, nanotechnology was purely theoretical. The engineering
theory was good, but the evidence was thin. At the end of November, 1999,
researchers at Cornell University announced in the journal *Science* that
they had successfully assembled molecules one at a time by chemically
bonding carbon monoxide molecules to iron atoms. This is a long way from
building a beef steak sandwich in a mint the size of a microwave oven
powered by solar cells on your roof (also made for practically nothing by a
mint), but it's proof that the concept works.
If that sounds like a magical world, consider Kurzweil's 2030. Now your
desktop machine (except that you'll probably be wearing it, or it will be
built into you, or you will be absorbed into it) holds the intelligence of
1000 human brains. Machines are plainly people. It might be (horrors!) that
smart machines are debating whether, by comparison with their lucid and
swift understanding, *humans* are people! We had better treat our mind
children nicely. Minds that good will find little difficulty solving
problems that we are already on the verge of unlocking. Cancers will be
cured, along with most other ills of the flesh.
Aging, and even routine death itself, might be a thing of the past. In
October, 1999, Canada's Chromos Molecular Systems announced that an
artificial chromosome inserted into mice embryos had been passed down, with
its useful extra genes, to the next generation. And in November, 1999, the
journal *Nature* reported that Pier Giuseppe Pelicci, at Milan's European
Institute of Oncology, had deactivated the *p66shc* gene in mice - which
then lived 30 percent longer than their unaltered kin, without becoming
sluggish! A drug blocking *p66shc* in humans might have a similar
life-extending effect.
As well, our bodies will be suffused with swarms of medical and other nano
maintenance devices. Nor will our brains remain untouched. Many of us will
surely adopt the prosthetic advantage of direct links to the global net,
and augmentation of our fallible memories and intellectual powers. This
won't be a world of Mr Spock emotionless logic, however. It is far more
likely that AIs (artificial intelligences) will develop supple, nuanced
emotions of their own, for the same reason we do: to relate to people, and
for the sheer joy of it.
The real future, in other words, has already started. Don't expect the
simple, gaudy world of *Babylon-5* or even *eXistenZ*. The third millennium
will be very much stranger than fiction.
============================
Damien Broderick [and a bloody bonza future to youse all]