Re: Economist essay contest on 2050

From: Adrian Tymes (wingcat@pacbell.net)
Date: Sun Jul 30 2000 - 15:53:39 MDT


Eirikur Hallgrimsson wrote:
> On Tue, 11 Jul 2000, Adrian Tymes wrote:
> > Question for those who decide not to write an essay: is anyone willing
> > to pre-read essays for those of us who do submit one?
>
> Count me in, or even post 'em. It's not off topic and I'm sure
> comments would be plentiful (and we can hope, mostly constructive).

Alright. Here's draft 1 of my entry. It's probably a piece of crap -
but then, it *is* draft 1; only the final draft has to work. (If I get
enough comments to do significant changes, I'll probably post draft 2
next weekend. Of course, the final draft has to go in before 8/14...)

---
"Specialist" is such a dirty word.  I prefer "focused", because that is
what I am guilty of: focusing on one subject.

Although this may seem unfashionable at the midpoint of the 21st
century, I assure you that it is a practice that was quite popular up
until recently.  Indeed, until the end of the 20th century, it was
possible for people even in the most technologically advanced nations to
learn only a few basic facts about how the world worked, stop learning
at around age 18, and live full and productive lives.  Only the elite -
scientists, engineers, chief politicians and businessmen, and a few
others - needed much more information to do their jobs.

Even then, some people advocated lifelong learning as a path to a fuller
life, but this movement never really caught on until recently.
Speculation abounds as to the reasons, but I doubt that any of the
theories will ever be proven to most people's satisfaction.  What has
been proven is that, once the masses were forced to choose between
constantly improving their minds and living in relative poverty, and
made aware that both paths were available, most chose to keep learning.

Many factors contributed to this choice coming about.  First was the
rate of increase of knowledge available to humanity.  One could be
forgiven for assuming that the Information Revolution, fueled by an
ever-increasing amount of computational power starting in the latter
half of the 20th century, was the only cause.  Equally important, but
not sufficient on its own, was the rising human population, which meant
there were more people to create or discover new ideas.  Computer
networks merely allowed these ideas to be communicated easily enough
that most people became aware of the amount of knowledge that was out
there, where they had formerly assumed there were no answers to most of
their questions.

Early attempts to learn all this information led to a phenomenon known
as "information overload".  It was like a perpetually starving band of
nomads suddenly arriving at an all-you-can-eat, infinitely stocked
buffet.  With that metaphor, it should come as no surprise that most
overfed and became proverbially sick: unable to focus their thoughts,
despairing at their ability to handle new information at all, and so
forth.  A few, either nearing this sickness or seeing many of their
comrades fall to it, looked for ways to better handle this glut.
Various methods of managing this information were tried, most
essentially just predigesting the meal at the expense of having direct
access to some details (or, to further the analogy, at the cost of some
nutrients).  But these were all limited by the modes of communication to
the end destination: the human consciousness.

Which led to the second factor: cybernetics.  This term used to only
refer to certain types of control systems, but sometime around the 1980s
(depending on how one measures this), it acquired a pop culture
definition of direct neural interface between biological (especially
human) nervous systems and any man-made machine (especially computers
and computer-controlled devices).  This definition has now forced its
way into mainstream academic use as well, despite the efforts of various
parties to give the field a more accurate name.

Experiments in cybernetics date back to ancient times, depending on how
loosely one uses the term, but the field only really got going in the
1990s with the commercial introduction of chips designed to stimulate a
neuron depending on signals from some electronic device, or vice versa.
These devices were originally intended to restore sensory and control
connections to people who had lost one or more limbs and replaced them
with prosthetics, to allow them to use these limbs as they might
otherwise have used their natural ones.

The application of this to help people understand and make use of
information might seem obvious to us, but then, it is everywhere in our
society.  At the time, various historical disasters resulting from
attempts to improve humanity (for instance, the eugenics movement that
peaked in the 1930s and 1940s) caused almost an embarrassed silence
whenever the subject of further attempts at improvement was brought up.
Further, the aforementioned information overload, combined with
widespread jealousy of those who were handling information well
enough to reap huge profits (financial, emotional, spiritual, and
otherwise), caused a popular backlash against any new technologies.
However, this backlash was indiscriminate and uncompromising enough to
cause a backlash of its own.

Perhaps the best example is the so-called "ecoterrorists" who raided a
US government laboratory that was deliberately poisoning trees in order
to test possible cures for anthrax-b, in a race against time to save the
14001st Armored Battalion, who had been exposed to it in the process of
preventing the Golden Jihad movement from "purifying" India of their
enemies.  The 14001st was visiting on the night of the raid, preparing to
go into quarantine if the anthrax-b could not be cured before it became
contagious.  When the raiders identified themselves, brandished guns,
and asked everyone in the lab to leave so as not to be hurt when they
burned the lab down, the 14001st killed them in a brief firefight.  A
show trial absolved the 14001st of murder charges on the grounds that
they were protecting their only hope for life, therefore to defend the
laboratory was to defend their own lives.  The ramifications of this
legal precedent, and similar ones in the EU, Japan, Russia, Australia,
Chile, South Africa, and other countries, are still being debated by
legal scholars.

However, the immediate political effect was to spark growing
anti-anti-technology resentment, lending political support to all kinds
of research explicitly designed to improve humanity.  This led directly
to the EU commissioning the third enabling factor: the Human Brain
Translation Project.  In the fashion of the earlier Human Genome
Project, this was designed to create a generic map of the human brain
and what patterns of neural impulses mapped to what thoughts, in a
manner that would allow implanted devices to send and read ideas,
memories, and sensations to and from the human mind.  As with the Human
Genome Project, a decent amount of this project's work had already been
done.  It was known, for instance, that only a generic map could be
created, for each brain differed in some details from every other brain.
However, cybernetic device makers' widespread use of "training scripts",
which adapted each individual device to its user over time, meant that
only a generic map (which these scripts could use as a starting point)
was needed.

Even before the project announced official completion, products were
already hitting the market that allowed people to download knowledge,
storing information that was not currently in use and prompting the mind
with ideas relevant to whatever was currently being contemplated.
Practically all versions of this device were made modular, so that only
the initial device needed any surgery; upgrades could be delivered
either via signals transmitted by induced current or, once someone
figured out how to make plugs that could breach the skin without risk of
infection, direct cable link.  However, all of these versions had the
same module installed by default, which gave users the basics of logic
and how to learn new information.  It is widely suspected that the
vendors all copied the same module, but there are no records of who
might have written it, and no one has taken credit.

This was hailed at the time as the start of a new golden era of science.
Critics pointed out numerous problems with this idea, most notably the
fact that utopia had not yet arrived.  However, it should be pointed out
that, as has been the case every decade since at least the 1990s, we now
live in an era of unprecedented wealth and enlightenment, and all signs
are that the next decade will be even better.  People are living longer,
healthier, and more productive lives; some figures indicate that the
average expected life of a human is growing about one year every year,
leading many to ponder whether we are now practically immortal or soon
will be.  Advances in nanomanufacturing now allow most population
centers to generate their own food, water, and power, regardless of any
efforts to block shipment by corrupt governments or hostile neighbors,
which has led to the effective termination of many of those governments
and hostilities.  Further contributing to our relative peace is the
ability to quickly download someone else's entire chain of reasoning, if
that someone else makes it available for download: most people find it
far easier to highlight points of disagreement by exchanging these
chains than to beat a point into someone else's head with a club,
since there is almost no miscommunication when exchanging thought
itself.  Although money and certain rare physical resources still play a
part in our lives, most people work more for fame and credit than for
money, since the cost of living almost everywhere has dropped to where
the old minimum wage laws (which have not been updated since 2023, even
to account for inflation) now guarantee the necessities of life to
anyone willing to work but 10 hours a week, and unprecedented levels of
investment, made possible by the money most people earn by working more
than that, ensure that there are enough jobs for everyone.  If this is
not a golden era, what is?

It is true that we do not live in a utopia.  For instance, not everyone
has the ability to download thought.  Some people have physically
damaged brains, which the generic map does not fit, but there are ways
to rig the map to fit these individuals.  Some people choose not to have
the modules installed, and while a few of these have developed alternate
ways of coping (which, once understood, tend to be incorporated into the
devices and distributed as part of each year's module upgrades), most
find themselves unable to function in today's society.  They are, as a
rule, unemployable, ignorant, and occasionally dangerous (see the
"ecoterrorists" above; their actions and results would be the same if
they found themselves in today's world).  One of the great philosophical
debates currently is whether to force the modules on these people: the
benefits to them and us are obvious, but most of them have not actively
done any wrong that would warrant coercion, and we all benefit when one
of them, not having the same set of ideas most of us share, comes up
with an improvement on our processes.  The only module there is no
debate on is the one that teaches how to learn; while this can easily be
learned even without downloading, those who do not, or refuse to, learn
it are universally worse off.  Some countries are considering
legislation to compel installation of a module if these skills are not
known by the local age of adulthood, though public sentiment against
these laws makes it unlikely they will be passed at this time.

Which leads me back to my original point.  With the ability to flow
freely from topic to topic, most people choose to be jacks of all
trades, taking whatever career they feel like, pursuing it for a number
of years, and switching when they grow bored.  While this is certainly a
useful way of life, and ensures that we never run too low on any one
talent for long (whenever we do, someone always comes up with a module
for that talent if one does not yet exist), I feel that I might possibly
be able to contribute more by devoting my life (or at least the next
several centuries, if the immortalists are correct) to but one pursuit.
I will, of course, contribute any modules for ideas and skills I develop
on the way, so that others may skip up to my level without repeating my
mistakes, but I myself will not change from my course.

You may find this amusing.  With all the changes that we have seen
recently, and the ever-increasing rate of changes, how will I know that
whatever profession I choose will not soon become obsolete?  I
acknowledge this risk, and if it does, then I will switch, just like
anyone else.  But I wish to take one field, one area of expertise, and
see if I can refine it until it can be logically proven that no higher
level of mastery is possible.  Throughout human history, there have been
those who were the best at their time, and possibly the best yet, past
or present.  I want to be the best in something for all time, or at
least the first who was the best: anyone else who needs to be good in
that area can copy me.  Maybe, if I succeed, I can change to another, or
maybe my success will open up some completely new field.  I honestly
doubt that, even given several times the universe's age, I will be able
to master all that can be mastered, nor would all of us combined if we
tried; all signs are that there are an infinite number of skills out
there that we can learn.

The field I choose first is...

