Re: The World in 2050: an essay

From: Adrian Tymes (wingcat@pacbell.net)
Date: Mon Aug 14 2000 - 07:53:53 MDT


[Apologies if this gets sent more than once. Mailer error.]

"Bostrom,N" wrote:
>
> I've written an essay for the Economist competition. The dialogue-form is
> probably not what they had in mind, but I'll submit it anyway. Comments or
> suggestions for improvements are welcome! (I'm sending it off on Sunday.)

My apologies for not getting to this until today. I was busy with my
own entry.

> The essay is at:
>
> http://www.hedweb.com/nickb/2050/world.doc

And here's the revised version of my essay that I sent in. Given that
there are multiple prizes, may we both win.

---
Summary:
     A greater understanding of the human brain is but one of the
advances likely to be available in 2050.  However, the abilities it will
grant, both on its own and in conjunction with other technological and
social trends, have the potential to reshape society much as computers
have.  This essay describes one way in which the technology may unfold,
from the viewpoint of someone who watched it happen.
---
UNIVERSITY APPLICATION ESSAY AX-G328904
SUBMITTED: FEB. 15, 2050
     "Specialist" is such a dirty word.  I prefer "focused", because
that is what I am guilty of: focusing on one subject.  I believe that,
with the help of modern technology, my studies of my chosen field may be
the last that any human will ever have to make, so that I may be an
unsurpassed master of my field for all time.
     Although this may seem unfashionable at the midpoint of the 21st
century, I assure you that specialization is a practice that was quite
popular up until recently.  Indeed, until nearly the end of the 20th
century, it was possible for people even in the most technologically
advanced nations to learn only a few basic facts about how the world
worked, stop learning by their early 20s, and live full and productive
lives.  Only the elite - scientists, engineers, chief politicians and
businessmen, and a few others - needed much more information to do their
jobs.
     Even then, some people advocated lifelong learning as a path to a
fuller life, but this movement never really caught on until recently.
When the general public was forced to choose between constantly
improving their minds and living in crushing poverty, most chose to keep
learning.
     One of the factors contributing to this development was the rate of
increase of knowledge available to humanity.  It is easy to assume that
the Information Revolution, fueled by an ever-increasing amount of
computational power starting in the latter half of the 20th century, was
the only cause.  However, this was not the case.  Another factor was the
rising human population, which meant that more people were creating or
discovering new ideas.  Computer networks merely allowed these ideas to
be communicated easily enough that most people became aware of the
amount of knowledge that was out there, where they had formerly assumed
that there were no answers to most of their questions, or that the
answers were practically impossible to find.
     Early attempts to learn significant fractions of this information
led to a phenomenon known as "information overload".  It was like a
perpetually starving band of nomads suddenly arriving at an
all-you-can-eat, infinitely stocked buffet.  Within that metaphor, it
should come as no surprise that most "overfed" and became proverbially
sick.  A few, either nearing this stage or seeing many of their comrades
fall to it, looked for ways to better handle this glut.  Various methods
of managing this information were tried, most essentially just
predigesting the "meal" at the expense of having direct access to some
details - or, to further the analogy, at the cost of some nutrients.
But these were all limited by the modes of communication to the end
destination: the human consciousness.
     Which led to the second factor: cybernetics.  This term used to
refer only to certain types of control systems, but around the early
1980s it acquired a pop-culture definition: a direct neural interface
between biological, especially human, nervous systems and any man-made
machine, especially computers and computer-controlled devices.  This
definition has now forced its way into mainstream academic use as well,
although more technically accurate names have been proposed.
     Experiments in cybernetics date back to ancient times, depending on
how loosely one defines the term, but the field only really got going in
the 1990s with the commercial introduction of chips designed to
stimulate a neuron depending on signals from some electronic device, or
vice versa.  These devices were originally intended to restore sensory
and control connections to people who had lost one or more limbs and
replaced them with prosthetics, to allow them to use these limbs as they
might otherwise have used their natural ones.
     The application of cybernetics to help people understand and make
use of information might seem obvious to us now, since evidence of it
is everywhere in our society, but it was far from obvious at the time.
The disastrous results of various historical failures to improve
humanity - for instance, the eugenics movement that peaked in the early
20th century - caused an embarrassed silence whenever further attempts
at improving humanity were mentioned.  Further, the aforementioned information
overload, combined with widespread negative sentiment towards those who
were handling information well enough to reap huge profits - financial,
emotional, spiritual, and otherwise - caused a series of popular
backlashes against many new technologies.  A backlash usually started
with a legitimate complaint about a particular product or service, but
then took on a life of its own, with participants continuing to
complain as if the original issue had never been addressed.  Before
long, these backlashes became indiscriminate and uncompromising,
running on instinctive fear of anything unknown, and intense enough to
provoke a backlash of their own.
     Perhaps the best example is that of the ecoterrorists who raided a
US government laboratory that was deliberately poisoning trees in order
to test possible cures for Anthrax-b.  The laboratory was in a race
against time to save the 10041st Armored Battalion, who had been exposed
to Anthrax-b in the process of preventing the Golden Jihad movement from
"purifying" India of their enemies.  The 10041st was visiting on the
night of the raid, preparing to go into quarantine if the Anthrax-b
could not be cured before it became contagious.  When the raiders
identified themselves, brandished guns, and asked everyone in the lab to
leave so as not to be hurt when they burned the lab down, the 10041st
killed them in a brief firefight.  A show trial absolved the 10041st of
murder charges on the grounds that they were protecting their only hope
for life, therefore to defend the laboratory was to defend their own
lives.  The ramifications of this legal precedent, and similar ones in
the EU, Japan, Russia, Australia, Chile, South Africa, and other
countries, are still being debated by legal scholars.
     However, the immediate political effect was to spark growing
resentment of the anti-technology movements.  One of the central tenets
of these movements was a fear of these technologies being used to alter
human beings; thus, resentment of everything the movements stood for
gave political support to all kinds of research explicitly designed to
improve humanity.  This led directly to the EU commissioning the third
enabling factor: the Human Neural Translation Project (HNTP).  In the
fashion of the earlier Human Genome Project, this was designed to
create a generic map of the human brain, showing which patterns of
neural impulses mapped to which thoughts.  This map would allow implanted devices to
send and read ideas, memories, and sensations to and from the human
mind.  As with the Human Genome Project, a decent amount of this
project's work had already been done.  It was known, for instance, that
only a generic map could be created, for each brain differed in some
details from every other brain.  Cybernetic device makers' widespread
use of "training scripts", which adapted each individual device to its
user over time, meant that only a generic map was needed for these
devices to use as a starting point.
     Even before the HNTP announced official completion, products were
already hitting the market that allowed people to download knowledge,
storing information that was not currently in use and prompting the mind
with ideas relevant to whatever was currently being contemplated.
Practically all versions of these products were made modular, so that
only the initial installation needed any surgery; upgrades could be
delivered either via signals transmitted by induced current or, once
someone figured out how to make plugs that could breach the skin
without risk of infection, via a direct cable link.  However, all of these versions
had the same module installed by default, which gave users the basics of
logic and how to learn new information.  It is widely suspected that the
vendors all copied the same module, but there are no records of who
might have written it, and no one has taken credit.  The information
modules we take for granted today, with packages of skills and knowledge
available on demand, all have the same basic architecture as this
default logic module.
     The ability to download knowledge was hailed at the time as the
start of a new golden era of science.  Critics highlighted numerous
problems with this idea, most notably the fact that utopia had not yet
arrived.  However, it should be pointed out that, as has been the case
every decade since at least the 1990s, we now live in an era of
unprecedented wealth and enlightenment, and all signs are that the next
decade will be even better in these respects.
     People are living longer, are healthier, and have more productive
lives; some figures indicate that average human life expectancy is
growing by about one year every year, leading many to ponder whether we
are now practically immortal or soon will be.  Advances in
nanomanufacturing now allow most population centers to generate their
own food, water, and power, regardless of any efforts to block shipment
by corrupt governments or hostile neighbors.  In fact, these advances
have directly and effectively neutralized those governments and
hostilities, as measured by their reduced tendency to provoke conflicts.
     Further contributing to our relative peace is the ability to
quickly download someone else's entire chain of reasoning, if that
someone else makes it available for download.  Most people find it far
easier to highlight points of disagreement by exchanging these chains
than to convey their arguments in words, since there is almost no
miscommunication when exchanging thought itself.  This has caused a
significant decrease in the type of misunderstandings that lead to
violence.
     Although money and certain rare physical resources still play a
part in our lives, most people work more for fame and credit than for
money.  The cost of living almost everywhere has dropped to where the
old minimum wage laws - which have not been updated since 2023, even to
account for inflation - now guarantee the necessities of life to anyone
willing to work just 10 hours a week.  Unprecedented levels of
investment, made possible by the money most people earn by working more
than 10 hours a week, ensure that there are enough jobs for everyone.
     It is true that we do not live in a utopia.  For instance, not
everyone has the ability to download thought.  Some people have
physically damaged brains, which the generic map does not fit, though
there are ways to adapt the map to fit these individuals.  Some people choose
not to have the modules installed, and while a few of these have
developed alternate ways of coping - which, once understood, tend to be
incorporated into the devices and distributed as a module upgrade - most
of these find themselves unable to function in today's society.  These
people are, as a rule, unemployable, ignorant, and occasionally
dangerous.  For example, the ecoterrorists described above would act
much the same, with the same results, if they found themselves in
today's world.
     One of the current great philosophical debates is whether to force
the modules on these people.  While it is obvious to us what the
benefits to them - and the rest of society - would be, most of them have
not actively done any wrong that would warrant coercion.  We all benefit
when one of them, not having the same set of ideas most of us share,
comes up with an improvement on our processes.  The only module on which
there is no debate is the one that teaches how to learn and adapt to
new situations; while this skill can be picked up even without
downloading, those who do not know it, or refuse to learn it, are
universally worse off.  Some countries are considering legislation to compel
installation of a module if these skills are not known by the local age
of adulthood, though public sentiment against these laws makes it
unlikely they will be passed at this time.
     Which leads me back to my original point.  With the ability to move
more freely from field to field, most people choose to be jacks of all
trades, taking whatever career path they feel like, pursuing it for a
number of years, and switching when they grow bored.  This is certainly
a useful way of life, and ensures that we never run too low on any one
talent for long.  Whenever we do, someone always comes up with a module
for that talent if one does not yet exist.  However, I feel that I might
possibly be able to contribute more by devoting my life - or at least
possibly the next several centuries, if the immortalists are correct -
to but one pursuit.  I will, of course, contribute any modules for ideas
and skills I develop on the way, so that others may skip up to my level,
but I will not change from my course.
     You may find this amusing.  With all the changes that we have seen
recently, and the ever-increasing rate of changes, how will I know that
whatever occupation I choose will not soon become obsolete?  I
acknowledge this risk, and if it does, then I will switch, just like
anyone else.  But I want to take one field, one area of expertise, and
see if I can refine it until it can be logically proven that no higher
level of mastery is possible.  Throughout human history, there have been
those who were the best at their time, and possibly the best yet, past
or present.  I want to be the best in something for all time, or at
least the first who was the best: anyone else who needs to be good in
that area can then copy a module I will create to communicate my
experience.  Maybe, if I succeed, I can change to another field, or
maybe my success will open up some completely new field.  I honestly
doubt that, even given several times the universe's age, I will be able
to master all that can be mastered; nor could all of us combined, even
if we tried.  All signs are that there are an infinite number of skills out
there that we can learn.
     The field I choose first is...


This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:30:26 MST